So thank you for getting up so early to attend this breakout. It's the Eth 2.0 phase one and phase two developer experience session. This is the agenda today. The first 10 minutes is the opening, where we set up the questions. For another 10 minutes we have Karl and Barry, who will introduce phase one use cases based on their two solutions. Then we have 30 minutes for discussion, and then hopefully a 5-minute break. Then we'll give the introduction to the phase two developer experience, then we'll have about 25 minutes to discuss the phase two experience more deeply, and then another 25 minutes on the thing most people care about. And then we have the ending. We have a collaborative note here, and if you want to give some input, please take notes there. Also, if you have questions that we might not have time for during the breakout, you can note them down there too, and hopefully we'll have time at the end to answer them. We encourage more conversation and discussion in this room. We only have two hours today, so if you have questions, please raise your hand and maybe come up front here. We're going to empty some of these seats up front, so if you want to engage in a slightly longer exchange, feel free to grab one of these seats. And when you're speaking, at the beginning, please tell us your name. Hi, I'm Xiaowen. We hope this will be a fun breakout, but please be kind and nice; I know you are all great people here in this room. Okay, yay, breakout. During the day I work for Status; during the night I dream about proof of stake and randomness. So just quickly about what we're about here. By now maybe you've heard this five times already at other Eth2 sessions, but the idea is that Eth2 will be delivered in phases, and each phase represents a little feature set.
I think a good reason for doing it this way is that we can live-test components that we release, not the whole thing at the same time. We'll start with the beacon chain, and the beacon chain, you could say, is a simple component in light of everything that will be released as Eth2. It's a single piece of software, it's a single chain, and it starts by adding two crucial features: proof of stake, which will give us security and replace proof of work, and then the randomness that, first of all, gives legitimacy to that security, and second of all is perhaps useful if you're building applications that need randomness. The other thing we're doing is that we're kind of shooting an arrow from Eth1 to Eth2. You know, when you're building a bridge over a canyon, that's what you do, and it flies over, right? This is the one-way peg from Eth1 to Eth2. When you want to become a validator on Eth2, we have this bridge that connects the two chains in this direction, so that we can keep track of what's happening in terms of deposits, in terms of people wanting to put down a stake. We're launching this so that we can validate these things before developers start using the chain, so it's intended really for enthusiasts and people who want to stake early and want to learn how to run the system. They want to learn how the validator client software works and want to set up the infrastructure, so that we have time to do this without the pressure of having developers on the chain.
Then we'll move into phase one, and phase one is the first time that we add a bit of complexity to the system: we add shards. How many, we'll see; maybe 64, maybe a thousand. It's all about starting to prepare for execution, but still not doing it. So we'll have a couple of examples of what you could possibly do when you have shards onto which you can put data, and where the chain guarantees that this data will stay around for a while. If you look at how that works, we have the beacon chain that is coordinating the data chains. You post data on your shard chain, this gets recorded on the beacon chain regularly, and it's a way of communication, basically. Within a single shard you already have the data available, and then you crosslink it so that it's available on the other shards as well in terms of security, but you could say that each shard lives its own little life. Then finally, in phase two, we add the execution environments. Conceptually, that is a separate piece of infrastructure that will enable people to develop execution engines. We had long conversations yesterday about how to present this to people. One of the framings we came up with is: up to phase one, that's kind of like designing the computer, and the new computer is suddenly a multicore processing unit; then phase two is somewhere between the operating system and the drivers. But it's still not something that we expect everybody to do. If you're a dapp developer, you're probably not going to release an EE. If you're designing a large contract system with many moving parts, then yes, maybe you have a special need for it. If you're designing the EVM replacement for Eth2, then you would design an EE. Or maybe you want to integrate zero-knowledge proofs, or let's say we want to put something like Zcash inside of Eth2, then maybe you want to develop an EE for that.
But these are not dapps or something that you just deploy; they're likely to be fairly costly, both in terms of development and in terms of deployment. On the chain, within that little world, they might offer application development as we know it on Ethereum 1 today. We'll start with two presentations that introduce phase one and what you could potentially do when you have phase one available, with Barry first. — Well, I'll just give a quick introduction to ZK-Rollup; this and Optimistic Rollup are two kinds of scalability solutions that we might use on Eth2. The way that ZK-Rollup works is that we have zero-knowledge proofs that we use to compress the state transition, and the state transition includes the signature verification and the Merkle tree updates. Because we do this inside the zero-knowledge proof, we have a guarantee that our system cannot enter illegal states. So if someone wants to steal money from someone else, they can't do it unless they have a valid signature, or, better said, they can't create a zero-knowledge proof unless they have someone's signature. So when we create the zero-knowledge proof, we have this implicit proof that the signature exists and is valid. Okay, I made a bunch of slides and was told, no, this is too complicated. Do you think that's complicated? So these are the complicated slides. If you see here, this is our database inside the SNARK. We have a Merkle root, and this is everything that we keep on chain, and we use the SNARK to update this Merkle root. At the bottom of the tree you see these A, B, C, and D. These are the accounts, A, B, C, and D. Each account has a public key associated with it as well as a balance.
Each leaf is equal to the hash of these states: you have a token balance and a public key, and we use the SNARK to update this. So you have a list of transactions here; these transactions go into the zero-knowledge proof, and this red box is what's happening inside the zero-knowledge proof, and here is what's happening in the smart contract. So we have state one, or Merkle root zero, coming in here, and then we make a proof that updates it to Merkle root one, and we put the proof on chain. But we have a problem: the availability problem. The availability problem is that if someone updates a Merkle root, you're not able to calculate the next Merkle root, so you're not able to update the system, and we want multiple people to be able to do state transitions. So what we do is we also reveal a diff. The diff is basically, if Alice sends a transfer to Deborah, the from address, the to address, and the amount. And that's enough to reconstruct the whole tree. It turns out that this is pretty compressible, because we just use the indexes in the Merkle tree, so this becomes very succinct. So we put the SNARK and the diff on chain between the two states. Okay, do you have a question? Yes, sir. — Would that not remove the privacy? — Yes, this completely removes privacy. We use SNARKs here for their succinctness. So let me go through an idea of how this works. Inside the SNARK, what we do exactly is: when we receive a transaction, we validate the signature, and then we prove that the public key of that signature is equal to the public key in Alice's leaf. Then we prove that Alice's leaf is in the current Merkle tree.
Then we update Alice's leaf, and we use the same Merkle path to include that new leaf, to compute the new Merkle root. So what's happening here is we're changing one leaf in the tree, and because we use the same Merkle path, we hold all the rest of the tree the same. Right. So we update Alice's leaf by reducing her balance, and we come up with an intermediate root. Then we have some money that we've taken away from Alice and we want to give it to Deborah. So we look up the "to" leaf, the leaf that's supposed to receive it, and we prove that that leaf is in the updated Merkle tree, and then we add the money to that leaf. And then we do another Merkle proof, holding the other Merkle path constant: that adds the money but holds the other leaves the same. Okay, so that's basically how the SNARK works. And then we can also do this for a global state, or different kinds of state transitions. In the previous example we had Alice and Deborah, and both of them had a personal state, but there was no idea of a global state. If you want to do something like Uniswap, you need a global state that people are able to update under certain conditions. That's something else we can do: we can come up with this global state and store it and update it, and then we can apply these rollup patterns to other things. Yeah, so is that enough? Okay. That's a good question. — I was just going to say, could you say again what data you're putting on chain to achieve the data availability? You said it was the diff between the two states? — Yeah, so it's the minimal data you need to reconstruct the state. And for token transfers, for example, that minimal data is the from address, the to address, and the amount.
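The two-step update described above (debit the sender against the current tree, then credit the receiver against the intermediate tree) can be sketched in plain Python. This is an illustrative model of the statement the circuit proves, not the circuit itself; the signature check is stubbed out as a comment, and the hash and leaf layout are assumptions.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def leaf(pubkey: bytes, balance: int) -> bytes:
    # Each leaf is the hash of the account's state: public key and balance.
    return h(pubkey, balance.to_bytes(8, "big"))

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def transfer(accounts, frm, to, amount):
    """Debit sender, emit intermediate root, credit receiver, emit final root."""
    # 1. (In the real circuit: verify the signature against accounts[frm].)
    # 2. Debit the sender's leaf; all other leaves stay the same.
    pk_f, bal_f = accounts[frm]
    assert bal_f >= amount, "insufficient balance"
    accounts[frm] = (pk_f, bal_f - amount)
    intermediate = merkle_root(leaf(pk, b) for pk, b in accounts)
    # 3. Credit the receiver's leaf against the intermediate tree.
    pk_t, bal_t = accounts[to]
    accounts[to] = (pk_t, bal_t + amount)
    return intermediate, merkle_root(leaf(pk, b) for pk, b in accounts)
```

In the actual circuit, steps 2 and 3 are each a Merkle inclusion proof plus a recomputed root along the same path; here we just rebuild the whole four-leaf tree for clarity.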
— So it's only the leaves that were touched in the new state roots? — In that state transition, yes. — Okay. — Yeah, but it's every time they were touched. So for example, if I had the same to address... — The intermediate versions. — Yeah. — Why is the data availability guarantee of Eth1, or phase two, sorry, phase one, important in this case? — Okay, so at the moment we have to put all the data on chain, and this is kind of expensive. So what we do is we put it on Eth1, and we get to around 500 transactions a second; then with the newer work we will be able to do around 2,000 transactions a second. But if we had phase one, we could theoretically put this data on a shard and just look at the data there. — What happens if two people publish transfers at the same time? Wouldn't one of them invalidate the other? — Okay, so in this rollup we have two roles. We have the users, and we have this kind of aggregator called the coordinator. The coordinator receives a bunch of transactions, and they process them to make a SNARK proof, and they put that on chain. If there were two of these coordinators at the same time, there would be races. So we use a single leader, and there can be a process to select them. One example of what we could do is take a stake and randomly select them, and we have some other ideas of possibly better ways to do that. — Is it possible to separate the logic for the token transfers from the logic that verifies the Merkle proofs of the leaves in the SNARK circuit?
Is it possible to put the Merkle proofs and the signature verifications in separate things? — Maybe the signature verification is not in the SNARK circuit, and only the Merkle proofs are in there, verifying that the leaves are in the tree root. — Okay, this is possible, but it would likely have overhead. For example, if you did this, you would have to come up with a list: you verify the signature somewhere, and then you have to have a list of the correct state transitions and pass those to the SNARK. Well, I guess we already do that, because we pass the diff to the SNARK. So I guess you could do that. Do you have a case where that would be useful? — Yeah, the case is: if you have some complicated business logic or whatever for the token transfers, then you don't have to write that in a SNARK circuit; it can instead be in regular contract logic. — So you would validate the signatures inside the SNARK and then produce the transition outside? — Yeah, an EE could take advantage of the SNARK proofs to reduce the data usage: you have an EE and it's stateless, and you don't want to have to pass all this Merkle proof data, so instead you compress that into a SNARK. — Let me think a little bit more about that. — Next up we have Karl. — Well, that was a great transition. On that last question: can you just use a SNARK for the Merkle proof and then calculate the actual state transition somewhere else? In fact you can, and one of those places is with a dispute game. Okay, so I'm going to talk about Optimistic Rollup. How many people in this room have heard of Optimistic Rollup? Okay, medium, that's pretty good. How many people think they understand Optimistic Rollup? Okay, that's a lot fewer. That's unfortunate, actually no, it's a fortunate thing, because now this has maximal impact. Now, Optimistic Rollup is yet another way to scale Ethereum. It works in Eth1, it works in Eth2 phase 1.
It is one of these schemes where you embed a blockchain inside of Ethereum. ZK-Rollup, the way that Barry was just talking about it, you can think of it as: here are the Ethereum blocks, here's our Ethereum chain, and ZK-Rollup is putting these other blocks, these ZK-Rollup blocks, inside of them, and building up its own chain, which has its own properties. And it turns out doing this can actually be more efficient, and that is because of one very cool thing, which I really like: computation is no longer expanding with Moore's law, but bandwidth is. So it turns out that data availability is much cheaper and much more scalable than computation. So what we're doing is we're separating the communication, message-passing logic from the actual hard computation that all of the nodes are doing. So this is ZK-Rollup: you're building up this chain inside of this other chain, and it's really good because you can prove up front, succinctly, with zero-knowledge proofs, that each one of these state transitions, each one of these commitments, is correct. That is a very, very nice property. However, you don't get it for free, because with zero-knowledge proofs, currently, and this may not be the case forever, but for the next five years or so, we're not going to see easy-to-build general-purpose zero-knowledge proofs. But we can still use the rollup scheme with an optimistic scheme instead. So Optimistic Rollup, here's what we do. We say, okay, we're going to put blocks, Ethereum-style blocks, inside of Ethereum, and we're going to commit them, but we're not going to validate them.
So we're going to commit all of the transaction data, we're going to commit state roots, and it's okay if you don't know what a state root is, but committing transactions, state roots, all the information you need to compute the block, you're committing it, right? And you're not computing it. That's kind of a strange thing: the consensus-forming nodes of Ethereum are not computing it, but off-chain, as a user, as a company, as someone, you can compute it and check locally whether those state roots are correct, whether this computation was done correctly. But because we're not computing it up front, it's possible that something invalid gets through. And that's a real problem, but it's not the worst problem, because anyone who computes this can prove it immediately, and it's not an interactive protocol; it's just going to the main chain and saying: this is wrong. And then what we can do is we can say: we committed this block, then in the next block, and this is maybe a little clearer, hopefully: here we committed this block right here, boop, and then in this one we're like, oh, there's some fraud. So we raise a red flag, we delete this block, and then in the next block we build another block on our chain, starting from the last valid block. It turns out that with this relatively simple scheme, we actually get pretty significant scaling benefits. So the scale that Barry was talking about, we get that as well in Optimistic Rollup. And the benefit is we get that with general-purpose computation, in that we can build an EVM, a, quote, normal Solidity developer experience inside of Ethereum, with better scaling properties, using this technique.
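The commit-then-challenge flow above can be sketched as a toy model. Everything here is illustrative: the "state" is a plain balance map standing in for a state root, and the class and method names are assumptions, not any real contract's API. The point is only the shape of the protocol: commitments are recorded without execution, and any watcher can re-execute and delete a fraudulent one.

```python
# Toy optimistic rollup: payments-only "execution", claimed post-states
# recorded without validation, deterministic challenge by re-execution.
def apply_block(state, block):
    state = dict(state)
    for frm, to, amount in block:
        if state.get(frm, 0) < amount:
            raise ValueError("invalid transfer")
        state[frm] -= amount
        state[to] = state.get(to, 0) + amount
    return state

class ToyRollup:
    def __init__(self, genesis_state):
        self.genesis = dict(genesis_state)
        self.chain = []  # list of (block, claimed_post_state) commitments

    def commit(self, block, claimed_state):
        # The contract records the claim; it does NOT execute the block.
        self.chain.append((block, claimed_state))

    def challenge(self, height):
        # A watcher re-executes block `height` from the prior state and
        # deletes the commitment if the claimed post-state is wrong.
        state = self.genesis
        for block, _ in self.chain[:height]:
            state = apply_block(state, block)
        block, claimed = self.chain[height]
        try:
            actual = apply_block(state, block)
        except ValueError:
            actual = None
        if actual != claimed:
            del self.chain[height]  # fraud proven: remove the invalid block
            return True
        return False  # claim was honest; challenge fails deterministically
```

In a real system the claimed state would be a Merkle root and the chain would build from the last valid block after a deletion; this sketch keeps full states so the re-execution check is visible.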
So this is a really, really nice thing, and it borrows a lot of inspiration from, and is very similar to, the stuff that's going on in Eth2 with all of the sharding and data availability separation, and it all plays really nicely. Once we do have Eth2, then we'll just have way more ability to post data, so it scales us up pretty significantly, maybe linearly with the number of shards. So I think that's a high-level overview. The benefits are scale; the downsides are, or the downsides, I don't know. There are no downsides. It's one of those things, right? Incredible, incredible technology. What's the question? — What's the size of the fraud proofs, and does it scale with the size of the data? — Great question. So basically, for the fraud proofs, you need to commit intermediate state roots frequently enough that you can evaluate that full transition in one block. So you say, okay, we're going to cap it at 2 million gas internally, and so we basically can only do 2 million gas worth of computation. — So you do the full state transition in the case of fraud? And are there timing assumptions on when you have to get this fraud proof in? — Great question. So interestingly, this is almost like a side chain inside of Ethereum. It's kind of weird; the exit procedure is a little bit reminiscent of a normal bridge, where essentially, for one, we don't actually need a finality period baked in. That's something that you could add on; you could just say we'll refer infinitely far back. But that's not really that realistic, considering on the main chain you might want to, say, submit a transaction which deposits some amount of ETH, right? And we basically want the ability to then withdraw that ETH eventually, and to withdraw the ETH, we need to have this timing assumption that you're talking about. And so, because I decided, it's one week.
It's one week. That's exactly the time that it will take to find the fraud. Any questions? — So a fraud proof goes in, and we now build on basically a different root, because we're skipping that transaction. How do you do that without halting the chain? Let's say it comes one week later and you have to basically rebuild the chain, or other validators or sequencers are just not going to build on top of this because they technically know that it's wrong. What does the real process look like, and what happens if the fraud proof doesn't go up until four days later? — Great question. So technically, I'm doing some simplifications here. Realistically, there are two things happening. There's this kind of log of transactions: these are all the different transactions, tons of transactions. And then here's the log of state roots, a bunch of state roots. You basically use some set of these transactions to generate one of these state roots, and then you use another set and you generate another state root. And so if this one is invalid, then what would happen is we would have to basically reprocess all the transactions after this state root. So once we challenge it, we delete the state root, we would not delete the transactions, and then we just reprocess everything. But it does require a reorg, and if it gets in four days later, and you bought some ETH or some token and it's dependent on an invalid state root, then yes, you may not have your money. But that's why someone has to check. — A second question: what kind of consensus and what kind of criteria determine, if there's a fault, how the fork is resolved? How does that reorg work? — Do you mean an Ethereum fork here, or... — I mean a rollup-chain fraud, sorry.
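The two-log repair described above (keep the transactions, delete the bad state roots, recompute deterministically) can be sketched in a few lines. This is a schematic model with assumed names; the "state" and `apply_batch` function stand in for real execution.

```python
# Sketch of the two-log model: the transaction log is append-only;
# state roots are derived, so after a successful fraud proof we drop
# the roots from the bad height onward and recompute them.
def recompute_roots(genesis, tx_batches, apply_batch):
    """Derive the honest state-root log from the transaction log."""
    roots, state = [], genesis
    for batch in tx_batches:
        state = apply_batch(state, batch)
        roots.append(state)
    return roots

def repair(claimed_roots, genesis, tx_batches, apply_batch, bad_height):
    """Delete the invalid root and reprocess the same transactions after it."""
    kept = claimed_roots[:bad_height]
    recomputed = recompute_roots(genesis, tx_batches, apply_batch)
    return kept + recomputed[bad_height:]
```

Because Ethereum fixes the transaction ordering, everyone recomputes the same replacement roots: the "reorg" touches only the derived log, never the data.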
Okay, so Ethereum thankfully gives us a total ordering of transactions, which means that there's a deterministic way to determine the fork. And so really you can think of this chain as literally a list of blocks; the data structure that we use is just an array. So realistically, all you're doing when you have a fork is destroying one of the elements and then reinserting a new element. So this isn't a really forkful thing, because we don't need a forkful thing, because we have this deterministic ordering from Ethereum. — Yes, I'm wondering, what is the impact if someone falsely yells fraud, and what is the penalty for doing so? — What is the impact of... wait, sorry, I forgot the question. — If an attacker falsely claims fraud on a block, what is the penalty for them doing so? — Great. So this is actually related to a question that I didn't answer from you also; I forgot to mention the first thing. So the question is what is the impact of fraud, and the question that I forgot was what does it take to submit one of these blocks, I believe. So you need a bond, right? There's no question that we need a bond. Essentially, when you submit a block, there needs to be some money at stake that can be slashed. And what we do is we burn most of it and give some to the challenger. Is that not your question? What was your question? — No. If I falsely claim that something is a fraud. — Oh! You just lose your gas. It's great, because it's all deterministic, right? We're not playing this interactive game. You submit an invalid fraud proof and it literally just reverts, like an Ethereum revert. The smart contract checks whether the fraud proof is correct. — Right, but the fraud proof is a full execution of blocks, so it just spends a lot of gas, which kind of sucks.
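The incentive logic here is simple enough to write down. The bond size and burn split below are assumptions for illustration (the transcript only says "burn most of it and give some to the challenger"), not protocol parameters.

```python
# Illustrative bond accounting for the dispute game.
BOND = 100
BURN_FRACTION = 0.7  # assumed split: burn most, reward the rest

def resolve_challenge(fraud_is_real, proposer_bond=BOND):
    """Outcome of a fraud claim against a bonded block proposer."""
    if not fraud_is_real:
        # Deterministic check fails: the challenge transaction reverts,
        # the challenger only loses the gas they spent, bond untouched.
        return {"burned": 0, "challenger_reward": 0,
                "bond_left": proposer_bond}
    burned = int(proposer_bond * BURN_FRACTION)
    return {"burned": burned,
            "challenger_reward": proposer_bond - burned,
            "bond_left": 0}
```

Burning most of the bond (rather than paying it all to the challenger) is what removes the incentive for a proposer to "self-challenge" their own fraudulent block at a profit.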
— And it's deterministic, so why would you do it? I have a question. Because you have a maximum reversion period of a week, and you have a total gas limit on chain, you have a theoretical maximum number of these chains that you can securely support on Ethereum, correct? — Great question. So, okay, I don't know if I... — So say the revert period is one block; you can only revert within one block. So you have, like, 10 million gas to play with. And so you can't securely support OR chains whose gas sums to more than 10 million, because within the one-block revert period, if everyone was fraudulent, you couldn't support rollbacks of all of them. And you can extend that to the week-long version. — I'm not sure if I understand, but I don't believe we have this problem, because... are you saying, if this is invalid, right, the way that I prove it is I just prove this is invalid and then I just delete everything here? — Right. But if you... — Ah, you're saying that if you have a block on your side chain that takes more computation than you can do on the main chain, there's no way... — No, if you have a set of chains that all individually think they're secure, but then there's not enough gas on chain to revert them all in time. — So you can solve this in two ways. A SNARK? No. What you can do is each chain can have a higher bond or a longer timeout period; with a bigger bond, you can have a shorter timeout period. So we can solve this by having variable timeouts and variable bonds. Because if you think about it, I'll get more money if I slash someone on a chain with a bigger reward, so I'm incentivized to pay a higher gas price. Does that make sense?
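The audience member's capacity argument is just arithmetic, so here is the back-of-the-envelope version. All numbers are illustrative, not protocol parameters: the question is whether the main chain has enough gas during the revert period to challenge every rollup chain in the worst case.

```python
# Worst case: every rollup chain is fraudulent at once, and each
# challenge must re-execute up to that chain's internal gas cap.
def max_challengeable_gas(block_gas_limit, blocks_in_revert_period):
    """Total main-chain gas available during the dispute window."""
    return block_gas_limit * blocks_in_revert_period

def is_securely_supported(chain_gas_caps, block_gas_limit, blocks):
    """Can all chains be challenged within the revert period?"""
    return sum(chain_gas_caps) <= max_challengeable_gas(block_gas_limit, blocks)
```

With a one-block revert period and a 10M gas limit, five 2M-gas chains fit and six do not, which is the questioner's contrived case; stretching the period to a week relaxes the bound by tens of thousands of blocks, which is Karl's rebuttal.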
— I'm saying, and we can talk about this outside of this, but say you have five chains and they all have a max of 2 million gas, and they all have one block within which they can do a challenge to revert, so they can all fit into that one block on chain and do their reverts. But if you now have six chains off chain, now it's 12 million, and you have one block to be able to do the actual reversion. Usually you have a much longer time period, but I think you can't do that. — I don't think you can, because even if they're all submitting blocks, and let's say all of the invalid blocks are progressing, one honest user is going to eventually, before one week, submit one transaction to delete all of those blocks in one go. — It's a contrived example where there's, like, one day and there's just one block. In that example, it's definitely not okay. That's definitely not okay. So within that week, there's also a maximum number of reversions you can support, and thus a maximum number of chains you can securely support. — All hail the one Optimistic Rollup chain. I think, yeah, parameterization will help here. I think during this conversation two important things came up: the guarantees that are useful for these kinds of setups and that phase one offers. First of all, you get a deterministic ordering of blocks, right? And all the computation happens off-chain. That's the critical feature that phase one offers. So in terms of developing your own application, this is one of the guarantees that you suddenly gain now, and Ethereum 1 already has that, obviously. The second feature that we gain here is increased bandwidth, right? There are suddenly 64 or a thousand shards onto which you can submit fraud proofs, for example, so you're not as likely to end up with a congested chain. Right?
I'll just be highlighting these little points along the way: how you can think about phase one and what kinds of applications you could develop on it. That's great. — A question from the developer survey: many developers, asked what's the bottleneck presented by migrating to Eth2, name two things they're worried about. The first one is that it's not decentralized enough, and the second one is that they don't think there's enough security. What do you feel about the security assumptions of your solutions? — So I think that what Danny was getting at is kind of close to a nice differentiating factor between the two systems. With ZK-Rollup, when you have too many transactions, you just can't put any more in. But with Optimistic Rollup, if you have too much throughput, then you have this kind of situation like you were describing, where you don't have enough block space, but instead of just not being able to fit it all, you have this kind of weird condition where someone might be able to attack, if it's a coordinated attack against every Optimistic Rollup. However, the economics of running that attack would be kind of expensive, because you'd have to burn bonds on all but one chain to attack that one chain. So it depends on the size of the bonds that you have and the timeout periods. But yeah, that's one of the trade-offs. Okay. One, I think that people who say "this is a great layer two" a lot of the time are not really telling you about a secure protocol, and so I think it's a very well-founded fear. It's like, "oh, this is a great layer two, it's totally secure," and they're just kind of hand-waving half the design space. So I think that that is absolutely correct.
And I also don't think this block congestion problem is unique to us; it's general to layer two. This is a problem of state channels, of every layer two that's based on a dispute period: if the blocks fill up, then you're kind of out of luck. However, I want to really, really quickly talk about the asymmetry in the attack-defense scheme here that makes this dispute censorship problem really, really hard to pull off. So if I have some value, let's say a thousand dollars, locked up in a dispute that I'm scared of losing, then I'm willing to send in an Ethereum transaction that pays, like, $999, contrived, but $999 in gas, to get it out, right? I'm willing to burn a lot of money to save this, because otherwise I'm going to lose it. So at worst, I still come out about one dollar ahead. But an attacker has to fill up all the blocks for some time period, like one week, and they have to be above my maximum price the entire time, or it's just a total censorship of Ethereum, and then we're at Ethereum's security. So what I'm willing to spend once, they have to spend continuously for the entire censorship period, which is, I think, a reasonable way to fend them off. — What if the blocks are full during that one-week period? — Yes, so we definitely want layer one to scale; this doesn't replace that. — Oh, sorry, where? In the back? All right, in the corner. Just speak up. — So you said a fraud proof has to run within a single block. Is there any way of enforcing that? I mean, if I'm a bad operator, can I just create a massive block that's too expensive to challenge? Is it feasible for anyone to produce a fraud proof? — Yeah, that's a great question. It's really nice, because it's enforced in Ethereum.
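Karl's one-time-versus-continuous asymmetry is easy to put in numbers. Everything below is an assumed, illustrative parameterization: the defender pays at most once, up to the value at stake; the censor must outbid that gas price in every block for the whole dispute window.

```python
# Rough cost comparison for the dispute-censorship attack.
def defender_max_spend(locked_value):
    # One transaction; rational to spend up to (almost) the stake.
    return locked_value

def attacker_min_spend(defender_gas_price, gas_per_block, blocks_in_period):
    # Must fill *every* block at a gas price above the defender's
    # bid for the entire dispute period.
    return defender_gas_price * gas_per_block * blocks_in_period
```

Even with generous assumptions for the attacker, the continuous cost dominates the defender's one-off bid by orders of magnitude over a week-long window.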
We just run the block, and if the block runs out of gas, then it's a fraud. Can you explain again why the attacker has to pay for the full week? Because I thought it would be me and you and whoever wants to submit fraud proofs against the attacker who would be paying to try and get all our fraud proofs in. Whoever is paying for the fraud proof — the fraud proof costs some amount of gas, and then the attacker needs to fill blocks that pay gas above whatever that is for the entire week. I thought all the attacker has to do is submit a bunch of invalid blocks, and then we all have to pay gas to get our fraud proofs in for the full week. I think, yeah, probably good to take that offline. We're going to have explicit discussion time — is that before the break? Okay. Thank you. Go ahead. I just want to ask one more thing: after hearing the base solutions here, how many of you are interested in building your dapp on top of layer-two solutions? Oh, wow. Where are we at with the state of actually being able to use these as a dapp developer? So with ZK Rollup, we're still kind of focused on payments as a first proof of concept. I think that makes a lot of sense — I think we have a responsibility to prove to people that this is secure, at least in a limited version, before we start to push more people to build on it or support them. So that's what we're working on: do a proof of concept — or not a proof of concept, but make a real thing that actually works and stays stable for a while. So that's the first thing we need for ZK Rollup, and we can kind of explore from there.
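The fraud condition mentioned above — re-run the claimed block; if it exceeds the gas limit (or produces a different state) it's fraud — can be sketched as a toy check. The gas costs, transaction shape, and function names here are invented for illustration, not a real client API:

```python
# Toy sketch of the fraud check: re-execute the claimed block inside a gas
# limit; running out of gas, or a post-state mismatch, counts as fraud.

GAS_LIMIT = 100

def execute(txs, pre_state):
    """Apply toy transfers (each costs 10 gas); return (post_state, gas_used)."""
    state, gas = dict(pre_state), 0
    for sender, recipient, amount in txs:
        gas += 10
        if gas > GAS_LIMIT:
            raise RuntimeError("out of gas")
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state, gas

def is_fraudulent(txs, pre_state, claimed_post_state):
    try:
        post, _ = execute(txs, pre_state)
    except RuntimeError:        # block doesn't even run within the gas limit
        return True
    return post != claimed_post_state

pre = {"a": 10, "b": 0}
print(is_fraudulent([("a", "b", 3)], pre, {"a": 7, "b": 3}))    # False
print(is_fraudulent([("a", "b", 3)], pre, {"a": 0, "b": 10}))   # True
```

A block stuffed with too many transactions trips the out-of-gas branch, which is exactly why "the fraud proof runs within a single block" is enforceable: the chain itself bounds the re-execution.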
This is also a really good time: if you have an idea for an application on phase 1, you could now also ask whether the people who have gone through the motions think it could work with the properties you get in phase 1. Use the time. So, a question: just now you asked who's going to build a solution on this, and there were a whole lot of hands raised. Listening to this, I get a lot of cognitive fatigue, because it seems like a deep technical dive into things that aren't confirmed yet in phase 1. It reminds me a little of when I was trying to understand Plasma: I spent a tremendous amount of cognitive load trying to understand Plasma as a developer, and it sort of feels like it was for naught now. So this discussion is very, very interesting; however, as a developer I have a certain amount of bandwidth for this sort of low-level vertical understanding. Maybe you could just say: of the stuff we've talked about in the past half hour, how much is locked in and confirmed and won't change, in terms of the specs you've just discussed? Okay, so the question is: what do I care about as a developer — how much of this is for sure not going to change in the next three months? I think the kind of strategy that we, and other people who work on this, need to come up with is to build things that will work on layer two today and will also work on Eth2. My work is all focused on layer two on Eth1 for exactly that reason — I have the same concerns as you; I don't want to build something that's going to change. So I don't know, maybe we can check with Danny about it.
I think it more specifically means: regardless of which data availability layer you use, is it worth actually learning ZK Rollup? Is the notion of this protocol going to drastically change in four months, such that the cognitive overhead you might spend today to learn it and begin thinking about how to develop on it might just be thrown out the window? This is a great question, and I think people should not really have to learn this at all — but answering it on face value, I would say a solid 90-95% of what I talked about here is literally how Optimistic Rollup works, and we can already get smart contracts on it. I think the big problem with layer two thus far has been that we've been talking about protocols and incentive reasoning, expecting developers to actually learn how these different parties interact in these different worlds and how they play out some dispute game. I don't want people to think about dispute games. I just want people to write smart contracts, because that's what got me into Ethereum personally, and that's the entire reason I'm interested in Optimistic Rollup: it's the fastest way to start scaling and improving the user experience of Solidity smart contracts. I think it's an interesting observation as well. At the beginning I alluded to the fact that we're at a fairly early stage, but it was also to raise awareness that, entering phase 1 or even phase 2 discussions right now, you need to get fairly deep into it — if you want to develop an execution environment, that's fairly deep, that's a bit of work. However, we've already teased out a couple of simple properties, so you can already start imagining what kind of solutions you could build on this if you were to do that, right?
LazyLedger — I just really like it because it seems like a nice separation of concerns. LazyLedger is a minimalistic data availability engine: it says, okay, we're going to provide availability and ordering for transactions, but we are not necessarily going to do execution, so people can execute based on the data that is provided by LazyLedger. How does it work? It uses erasure coding, and it's a pretty nice, simple scheme. Are people staked — do you get slashed — or is it proof of work? I think it's staked. I think it's staked, okay. So in general, I think solutions like that require you to lock up more capital than you can transact in your system, because otherwise someone can just disappear the data and you can't update your system. An example of this taken to its logical conclusion: the whole world's transaction economy is built upon this LazyLedger application — well, no, half of the world's capital is on LazyLedger and the other half is deposited as stake on LazyLedger. So how is it different from Eth2?
So Eth2 — okay, alright, here's the difference. In Eth2, if there is a data availability failure, you roll back the chain and you recover from the failure — you roll back the chain plus you roll back the execution engines. But if LazyLedger has that failure, because there isn't this tight coupling between the execution engine and the data availability engine, you won't be able to roll back: you roll back LazyLedger, but for your execution engine you'll have to have a hard fork. Yeah — and to add to that, that's if you're talking about phase 1 of Eth2 as a data availability engine without execution, where this rollback is happening in an Eth1 contract with execution. We're going to take a short break. I think phase 1 gives you data availability, right — a strong guarantee that the data you get is ordered and stays there — but without the execution. Cool. So this is to give an overview, so you all have a general understanding of phase 2, and then we'll dive into the DevX discussions. I gave a talk on day 1; this is going to be similar to that, but I'm going to breeze over a lot more and go through it a bit quicker. We talked about phase 2 already, so I don't feel like I need to cover this — yeah, I'll stick to it. So the beacon chain is kind of this organizing layer: it manages crosslinks and the finality of each of the shards. And phase 1 — we just talked about this — shards as data availability layers. Then in phase 2, this is where I say it brings the shards to life even more, because now they have state execution on them, so we can do computation at layer one. So, to give you an overview — to help you understand the new framework by which phase 2 is being proposed — it's all about these things called execution environments. I'm going to give you an example from Eth1, and hopefully that extends your mental model appropriately. In Eth1, we can say that there is one
execution environment, and it's just enshrined in the core Eth1 protocol — it's hard-coded into the nodes, so if you want to change it you have to fork. A really good example: a transaction in Eth1 has nonce, gas price, gas limit, to, value, data — all these fields — and uses RLP encoding. If you want to change any of this — add a field, adjust the encoding scheme — the way to change that is to change the core protocol, so you have to do a fork and change the code. Similarly, in Eth1 the global state is managed by a Patricia trie, and each account, each leaf, has these fields: nonce, balance, storage root, and code hash. If you want to change the account structure — add a field, use a different accumulator instead of a Patricia trie, anything like that — again, it's the same: we would have to do a fork on Eth1 to change it. So to review: in Eth1 we have one transaction framework, one execution framework, essentially hard-coded into the system. Eth2 takes more of a radical shift. It says: the core consensus doesn't really need to have a strong, hard-coded opinion on transaction structure. The consensus layer is good at managing a lot of overhead logic — ordering blocks, fork choice rules, crosslinks, slashing, rewards, validators, things like that. So it takes this radical shift: the core protocol doesn't need a strong opinion on what a transaction structure looks like, or how an account is organized and what fields are on it. In fact, you can support multiple, all in the same shard. So this is the radical shift: we no longer have just one enshrined, hard-coded transaction system or account framework. And so this is a kind of goofy little diagram, but I think it helps people understand: on a shard in Eth2, you can have multiple
execution environments. So, for example, we're going to bring Eth1 into a shard: Eth1 itself would be an execution environment. Here we're saying this is all in one shard right now — we're not looking at cross-shard yet. In one block on one shard we could have Eth1 running; we could have an Eth2 EE, which I'll talk about in a second, which is an iterated account model; we could have a UTXO, Bitcoin-like model; we could have a ZK-Rollup-focused execution environment; we could have Libra's Move running on Eth2 — just to give the illustration. Within each of these EEs you get synchronous communication within yourself, so in the Eth1 EE nothing really changes as far as transactions or contracts being able to call another contract synchronously. But then you have to think about asynchronous communication when you hop from one EE to another — the Eth1 EE calling the UTXO EE, or the account-model EE, or the Libra one. EE is execution environment. Like I said, you could have a lot of things — you could put in an interpreter for whatever you want; there's a lot you can do, a lot of flexibility. Diving deeper, what does this actually look like? The beacon chain stores these pure reducing functions, and these give the rules by which transactions run: the rules by which transactions are validated, and what the actual state transition is for that EE. And this is just a pure Wasm reducing function. To give an example: in Eth1 you could consider that you have a state, you have a function that runs, that function takes a bunch of transaction data, and the output is a new state, state prime. In Eth2 we have a stateless model, so we can say this is actually pure, because now we just have a pre-state hash, a reducing function, and it produces a post-state hash. So, an example: on each shard, if there's a transaction that is Eth1-related, it pulls down the Wasm pure reducing function from the
beacon chain and then executes that package of transactions for the Eth1 EE appropriately — and the same block can do it for an account model, which I'll talk about in a second, or for a UTXO model, or another account model. We're already playing with this: Alex and the Ewasm team created a repo called Scout, which already lets you prototype these execution environments. And again, this is what the function looks like: it's process_block, it takes block data and a pre-state root, and it gives you a post-state root hash — so again, it's exactly this. There's something here, and maybe this is worth talking about today: execution environments are not necessarily analogous to smart contracts. They could be — for example, the Eth1 EE defines an execution environment in which more code can run, so smart contracts run inside the world of the Eth1 EE; it defines how contract code gets executed. I think this is an interesting quote, I believe from Justin Drake: with EEs, Eth2 abstracts the fixed execution semantics of almost any conceivable programmable blockchain, in a similar way to how Eth1 abstracted the fixed semantics of digital tokens and currencies in the early, Bitcoin-era blockchain world. So I think this is cool. The Eth2 EE — what we talked about, and this is still under debate because there are other perspectives, this umbrella model versus non-umbrella model; we can go into that in a bit — the Eth2 EE is saying: we're trying to do a lot to make Eth1 compatible and bring it over as an execution environment into Eth2, but we've realized there are a lot of better accumulator formats we can use, a lot of better models we can follow, a lot of things we've learned. So we can make another EE that is focused on and generic for running smart contracts like Eth1 — that's also an account
model, but we can make it even better and more Wasm-centric. That's the concept of an Eth2 EE: state execution happens within the consensus of each shard, all the execution is centered around Ewasm, and Ewasm provides metering and the concept of gas limits — and yeah, skip that for now. So this system is stateless. I don't know if we need to dive into that too much during today's session, but in case you're curious: it means there are actors in the system that hold state, and there's a lot of discussion on this. The best example I can give for a stateless system is: with your transaction, you provide the database. So if I'm submitting a transaction that transfers 5 ETH to Xiaowei, I need to prove what my account currently looks like — I use a Merkle proof to do that — and I need to prove what her account looks like as well. In this case I show that I have 6 ETH and Xiaowei has 3 ETH, and now, when the block producer executes the transaction, it just pulls from the proof I gave it — from the submitted database that came with my transaction. That way validators and block producers don't need to know about the state; they can just pull from the database that comes with each transaction. Vitalik did a really cool post on this just recently. So this is where we're bringing Eth1 into Eth2 as an execution environment: this means we need the EVM built in WebAssembly, and this would be defined in that EE script we talked about. Hugo on the Ewasm team has been doing really good work on that as well, but there's a lot of work to do here — it still needs to be prototyped, it still needs to be established. The Eth2 EE we talked about is kind of an iterated version of the contract framework, Wasm-centric. And then we'll go into cross-shard transactions. There are kind of three spaces here that it falls under, and I think today we'll probably
just focus on the first one, the asynchronous core protocol. Basically, you wait until finality; once you get finality for another shard, you can trust the message that comes from that shard. That would typically be considered slower, but it's secure and safe and will never reorganize. There is a new proposal Vitalik has made under which cross-shard transactions would happen in one block — that's actually really cool, and quite bearable: before, finality would take about 6 to 18 minutes; with one block it's 6 seconds to make a call to another shard. As always, with every proposal that you read there are going to be trade-offs, pros and cons. Then there's this whole model of optimistic systems — these are like dependency graphs that kind of reorg themselves; with this new proposal we wouldn't need those anymore, but the idea is you can use these almost hybrid systems to get cross-shard transactions in one block — hopefully we won't need that anymore. And then you can get synchronous cross-shard transactions; Optimistic Rollup is a really good example. If you bring Optimistic Rollup to Eth2, you can upload your data on, say, 4 different shards, so a user who wants to submit their transaction can access any of those 4 shards — you get a lot extra there as well, which is pretty cool. Delayed state execution kind of all falls under the same branch of things. So when you think of cross-shard transactions, how does this affect DevX? In general, let's focus on the asynchronous side; the synchronous side really shouldn't change that much — let Carl and those guys talk to you about that a little more. On the async side, the difference is: HLLs — higher-level languages, DSLs, so Solidity, Vyper, things like that — these tools should just provide proper
tooling now. What that means is, as a dapp developer, you now need to operate under the idea that you are going to make asynchronous calls in the smart contracts you write — you need to be aware of that, you need to be open to that. And there may be some things you need that require more than just asynchronous calls, so there's different tooling that should be included in these systems, which can be akin to programming across threads. If you need to make an atomic transaction, you have the train-and-hotel problem, and there are different approaches you can take: a locking system with read/write locks, which essentially boils down to this yanking model; a two-phase-commit scheme, which could essentially also be a kind of messaging model; a message-driven approach — one of those ideas is the actor model — and then just simple asynchronous calls. It kind of depends what you need in your project. You as a developer are going to need to understand which one of these tools to use: a lot of developers will probably only need asynchronous calls; others may need some concept of an atomic transaction across multiple contracts, and then you just use the best tool. But most of this tooling should be provided and developed within things like Solidity and Vyper, so just think of it as: you're dealing with asynchronous calls now — promises, whatever else — and the latency depends on finality time, or, if this new direction is taken, you can get that latency down to one block, which is actually pretty cool; I think a lot of people can work with that. Other changes in DevX: proof of stake — there are some speed increases; some execution costs may decrease due to there being multiple shards, though I don't want to draw too much attention to that line. web3.js, ethers.js, these things — they actually shouldn't change that much if you're dealing with the
Eth1 EE. The core machinery underneath it all will change for sure, but for the most part, if you're dealing with the Eth1 EE, it should not change drastically. There might be some changes from your perspective, especially if you're writing new contracts — they might be shard-aware — but again, it shouldn't change much. In fact, I imagine there will be this world where there are standards around these new EEs that get built, so web3.js can basically be portable to these different EE models — I think that's a really cool space, and I'd love to see more people ideate and get involved in some of these things. Over time we're likely to have more Wasm-centric smart contract languages — I don't think Solidity or Vyper were necessarily built with a Wasm-centric approach — so that will likely come and affect developers as well. So, should I build an EE? If you're a protocol researcher or developer: absolutely. You want to experiment with new accumulators, you want to experiment with different approaches like that — yeah, you should. Do you have an idea that would make sense as an EE? Does your application need more flexibility or enhanced scalability? Then you might want to work on an EE for the application. But also, you may just get left behind if you don't learn this, don't learn to leverage it to your advantage, or don't at least learn to understand what's happening behind the scenes. I think there's a good amount of discussion regarding EEs: some parties believe there should just be three or four or five EEs and that's what everyone uses; others believe in an EE per application, and then we build a model by which EEs just communicate with each other. So there are these two camps; they each have their trade-offs, pros and cons, and whatever happens, I don't think the things I mentioned just prior should affect the dapp developer significantly. So yeah, that, I hope, gives a good
overview for everyone of what phase 2 is. I'm definitely optimistic about things — there are a lot of people much smarter than us working on this and building EEs. Again, I just want to call out Alex, Paul, and Casey here — I mean, phase 2 is phase 2 because of Casey, in large part, originally anyway. There is a session where the Ewasm team show that they've worked on a ton of different EEs, and it's really cool to see a lot of the innovation and creativity happening here — it's really exciting, and a lot of smart people are working on this. So yeah, does anyone have any overview questions? Yeah — someone who's following this from really far away here, actually two questions. One: I believe a while back, I think it was an interview with Justin Drake, he was saying that there would be only one execution engine across all shards, and now this is new to me, so I was wondering where the shift happened and what the rationale is behind it. And the second question: if we imagine different shards having different execution engines, how do they come about? I mean, does the foundation decree at some point: we'll have 1,024 shards, five of them have this execution engine, etc.?
I assume that's not the case, but how do you envision that coming about? Yeah, so I think Justin takes one side, which is very strong, and then there's the other side, which is: just let everyone, every dapp developer, write an EE. These are the two different ends, and I think it hasn't been resolved — those discussions just need to happen, and this is part of what we're doing by prototyping and looking at it. I tend to think there's probably going to be a gentle balance. This is my opinion — in no way the opinion of all the researchers or the research team, and Alex and Casey and Danny and everyone here might have a different opinion than me. My general opinion is that there's a delicate balance where there are, say, four EEs that are kind of the entry EEs — you write smart contracts there, they already have the standards, they're easy to deploy to, you don't have to think about complexity or what accumulators you're going to use. But if you want to, then you're free to launch your own EE that can then integrate with these — because I think not allowing developers to launch EEs is, I don't know... this is going to be a huge area of innovation where we can learn so much and build it up. So that's my general thought, although I think most dapp developers will be operating across maybe four major EEs that are the most popular. To your question about deploying an EE: the idea is that you have to pay a fairly large sum to deploy an EE onto the beacon chain, and what is unknown is whether that EE is then available across all shards, or whether you need to pay to deploy it on each shard individually that you want it to operate on. So a shard can have multiple EEs running — but do you get access to all the shards just by deploying onto the beacon chain, or do you have to also deploy on each of the shards that you want the EE to run on? So I think, again, these are
research questions; we're not entirely sure, but I think we'll probably have answers, hopefully fairly soon in the new year, as we continue to prototype these things. So as a dapp developer wanting to develop on one EE, I would want to find other people that have an interest in that EE, pool our resources, and then deploy that EE. One really good example is the Eth1 EE: hopefully the Ethereum Foundation behind that provides the funding to deploy it, and there's already an ecosystem around it. But now let's say there's a new account model that people support — and Casey is going to say something in a moment — a new account model that uses this cool new accumulator format, and a bunch of developers want to work on it and start writing contracts for it, and you organize your own approach to do that. But I could also just pay, if I want to, depending on the cost, to deploy an EE and experiment with it on my own. But again, I think there's a lot there. Casey, did you want to comment? So now we have these numbers — one, two, four EEs. Yeah, I think you kind of covered it, but essentially the beacon chain is this highly available component of the system: if you're syncing any shard, you're also syncing the beacon chain, and so having data deployed on it — having these EEs deployed on it — can and must be expensive. So I don't see a world in which it makes sense for everyone to deploy their own. In a world in which dapps are highly used across the globe, the cost model can't be such that... maybe some mega casino dapp might deploy their own EE as some strange advertising thing, but generally, in terms of being close to other dapps and operating at scale — and also because the cost model of deploying something within an EE would be so much less — you'd likely operate within an
EE: within the Eth2 EE or within some other EE. They're tools for dapp developers, but they're, in my opinion, more akin to protocol development in the traditional sense than to dapp development. I think this question about who can deploy EEs, and how many EEs there might be, is the biggest unresolved question, and there's a range of opinions. I think the split goes back to before the idea of minimal execution, when you had phase 1 with no execution, and execution only in phase 2, and phase 2 was supposed to be execution that was maximally similar to Eth1. So across shards you'd have 1,000 copies of Eth1, and the execution would be stateful like Eth1, and it would prescribe an account structure, a state structure, everything like Eth1 — like Eth1 but better. And the shift to minimal execution said: no, these execution environments will define all of that. An execution environment is just a fancy word for a contract on Eth2, the way I see it. On the other slide it said "not necessarily analogous to smart contracts" — I guess the main difference is that on Eth1 a contract cannot run EVM code, though there's an old EIP from Vitalik to add an opcode like "run EVM". So an EE is analogous to a contract on Eth1 with the extra opcode of "run Wasm" or "run EVM". And you can take the viewpoint that the EE would define all the stuff that enables people to have a good user experience — how contracts call each other and so forth. In the old model, core developers would develop all this stuff for phase 2, and then dapp developers would come along and deploy dapps on Eth2. With minimal execution — at least my philosophy with minimal execution — this is a massive challenge too, and not only a massive challenge but also a very opinionated approach, and in my opinion it's kind of authoritarian
and dictatorial to say: well, we want to restrict the EEs that can be deployed. An EE is just Wasm code — why would you restrict what Wasm code people can deploy? People should deploy and run whatever Wasm code they want, just as they can run any EVM code they want on Eth1. So that's my opinion, in terms of developer experience. But I will speak to Justin's opinion, which is: with free-for-all EEs we have this world in which you might end up with a very fragmented ecosystem, whereas if you instead have some sort of enshrined EE to start — and this is the other stream of opinion — it's Eth1, and it feels like Eth1, it feels like the account models we know and love, and that gives you, one, a very clear developer story on how to build on this, and two, it kind of creates this unified experience where everything initially is one cohesive ecosystem. So that's the counter to the, quote, dictatorial view. I'm not certain exactly where I land, but I just wanted to give that. Hold on — okay. Since you cannot update EEs, and they're fixed: if you do end up with the enshrined EEs, and you keep improving those EEs, how are you going to deploy them — is it going to be via hard fork? And I'll give the mic back over. I think it's very likely that at least one of the EEs is still potentially updated by hard fork and social consensus — the Eth1 EE. I think it's going to be very hard to say "this is Eth1 now and you can never change it" — everyone exists and operates in this, there's tons of stuff there, and I think social-consensus hard forking might still very well serve that. But this is not quite what we're here to talk about — I wanted to break up the discussion a little here, because it became very deep, and I actually want to give the microphone to you and listen to questions, reactions, thoughts. You can come up, but I don't know. So the question is: how are we going to do sharding — but sharding will also
relate to some kind of partitioning of state. Right now I'm missing a lot of discussion on how this state is being partitioned, especially now that we have different EEs with different states, and how the EEs will deal with that — maybe this will require some changes to how things depend on each other. I'm not sure what this state partitioning, together with sharding, will look like. Yeah, so your question is: in a sharding model with cross-shard transactions, you end up getting your state fragmented across multiple shards, and how do you deal with that, especially across different EEs — from EE to EE, you're saying? Yeah, there are several parts. First: within one EE, what defines the account state model and how it's partitioned — how do you define this partitioning within the EE? And probably a further question: is it possible for one EE to change the state of another EE? Because this is highly related to how state will be partitioned in the future. So the current proposal space — and again, things are all under research — is: in building your EE, you can have standards by which another EE can call into it, probably making some type of message call to that EE. But again, this would be an asynchronous call you'd have to make to the EE — you're not guaranteed it will reach that separate EE for maybe a couple of blocks, whatever else. But I think that is up to how that EE is written: what it allows other EEs to do, how it allows them to call into it, how it allows them to integrate with it. So I think that's a big space that needs to be explored. Within one EE, for the fragmentation question: if we get cross-shard calls to be pretty quick, then I think that sort of solves the problem. Yeah — just taking a step back here, I have a pragmatic question: if I want to build an EE, how do I get started? Is there a documented ABI or something available? No — and per the previous discussion, you may not
even be allowed to deploy it, because in the dictatorial world there is only one sanctioned set. But the tool Will mentioned, Scout — there is an ethresear.ch post which explains the API, and I'd really love to eventually just move that into a spec within the Scout repo. There are a couple of examples there. Scout is the sort of interface — it's just a prototyping tool we have for EEs specifically — and it has examples of how to build these EEs, so that's the best source to use. — I can also chip in from a time perspective. What's going to happen is that the beacon chain will get deployed first, and around that time you will also start seeing clients that deliver the data availability part of phase one in testnets. And then a tool like Scout, which is a prototyping environment, basically gives you access to this data availability and a little sandbox in which you can play with EEs, right? So that is kind of available now, and some of the primitives are being developed. The idea would be that, after having heard Will's excellent explanation of the new toys that you'll be able to play with, you can of course continue to work like you did in Eth1 — that would be like buying a 16-core computer and just using one of the cores — but then you get a bunch of new toys, and the idea is that you can start experimenting with developing the primitives that will help end-user developers have a nice, fluid development experience here. — Sorry, so when I make a transaction, you said that I have to provide the merkle proofs, right, for the state. But how do I know what the state root is at the moment the transaction is executed? — So this goes back to the API. The current API is like five methods for phase 2 execution EEs. One of them is get pre-state root — this is on the shard, so in the Wasm code you get the pre-state root. And then another one is save post-state root: you do your logic, then you return the new
post-state root, and then the next transaction, when it calls get pre-state root, will get that new root that was returned from the previous transaction. — The question was: within the EE, or from the outside? — From the outside. — From outside the actual execution of the EE — so, getting the state root, that's available from the system that Eth2 provides. But if you're wanting to get the state itself — and you need the state in order to generate a multi-proof — this introduces basically a new actor into the system. These would be what we call state providers, and there are three different proposals: one of them makes the state providers even more essential, in what we call relayers; one of them kind of removes the relayer aspect, and we just need someone who will provide state to you. So this is an open area of research. I think we're all going to start diving into this after Devcon, collaboratively, and hopefully we will be writing some good research posts on it. In general, though, in the simplest model there's this idea that you could pay to access state. So maybe there are actors in the system that would require some type of microtransaction in order to give you a witness for state that gets submitted as part of a transaction, and maybe they give you state reads for free in order to bootstrap that model. We already have generally altruistic actors right now in Eth1; I think in general the idea is to eventually move away from the altruistic model — Eth1 is generally supported by a system of altruism, right? — I think there's some of that on the data side as well. The way the system is built is that you go from a pre-state, you execute something, into a post-state. That already tells you that, from the perspective of the Eth2 consensus, you have to provide a pre-state to the function that is executing, and then Eth2 verifies that the output is correct per that computation. So something has to provide that state, and what
that something is, is up for debate. Do you pay somebody? Do you keep track of it yourself — you'd have to have a nice computer for that? Will this be provided by a service network of some sort? That's an open question. — I'm just trying to understand what the possible ramifications are for the end-user experience with Eth2 — like, is that something the end user would have to be aware of? — I mean, it depends which EE your contracts operate with. If you're working with the Eth1 EE, there will be very, very little change; it's mostly machinery and tooling behind the scenes, like what network your wallet is connecting to. But the interface of your wallet shouldn't change drastically, the interface to Web3.js shouldn't change drastically, you're still using Solidity — you might just have a few extra tools. If someone builds a new account model that has some new constructs, hopefully there are standards built around it so that we continue to not need to change this tooling drastically. That's my hope — I think we want to strive for that. — Yeah, in terms of an earlier question about partitioning: how do we expect to maybe speed up Eth1? If we assume that this Eth1 EE is going to have the same throughput as current Ethereum 1.0 — maybe Eth2 will have higher — but if you want to significantly improve the throughput of a system similar to Eth1, will we be trying to partition it, trying to deploy multiple Eth1 EEs on multiple shards? And will they be like different networks — mainnet 1, mainnet 2, mainnet 3, I guess?
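As an aside, the stateless flow described above — validators see only a pre-state root and a block, and the EE returns a post-state root — can be sketched in a few lines. This is a toy illustration only: the real phase 2 API is a small set of host functions exposed to Wasm EE code (roughly "get pre-state root" / "save post-state root"), and every name and data structure below is made up for illustration.

```python
import hashlib

def state_root(state: dict) -> bytes:
    """Toy state commitment: hash of the canonically-sorted state.
    A real EE would use a Merkle root so that witnesses can prove
    individual accounts without the full state."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).digest()

def execute_block(pre_state: dict, txs) -> bytes:
    """Pure function from (pre-state, block of transactions) to a
    post-state root. Consensus only ever checks roots, never the
    state itself -- that is the 'stateless' part."""
    state = dict(pre_state)
    for sender, receiver, amount in txs:
        assert state.get(sender, 0) >= amount, "insufficient balance"
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state_root(state)
```

The point of the sketch is that `execute_block` is deterministic and self-contained: anyone holding the pre-state (or, in the real design, just witnesses for the touched accounts) can recompute and check the claimed post-state root.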
So I generally think that when you bring all the contracts that exist in Eth1 right now into Eth2, the scalability you get from that won't be drastic — you're still running on one shard, because the contracts that exist are immutable and they're not shard-aware yet; you can't just change a contract to make it shard-aware. So we'll have some speed gains from proof of stake and from the execution times we're shooting for. I think where the scalability would come from is new Eth1 contracts that can now be shard-aware — new contracts that can interface with old ones and can be written with the understanding that there are multiple shards. So maybe a lot of dapps are okay with the minor scalability gains from moving Eth1 into Eth2, but if they want to take advantage of the extra tooling, they might have some type of social consensus to move to a new contract that utilizes shard-specific tools. That's my general thought — Danny, Casey, do you have additional thoughts?
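To make the "shard-aware" idea concrete: the key behavioral difference, as discussed earlier, is that a cross-shard (or cross-EE) call is asynchronous — the debit happens immediately on the source shard, but the credit only lands on the destination a couple of blocks later. The sketch below is purely hypothetical; the delay constant, class, and method names are invented for illustration and do not correspond to any specified protocol.

```python
from collections import defaultdict, deque

DELAY = 2  # toy assumption: a cross-shard message lands a couple of blocks later

class ToyShardedLedger:
    """Hypothetical sketch of shard-aware transfers: immediate debit on
    the source shard, asynchronous credit on the destination shard."""

    def __init__(self, num_shards: int):
        self.block = 0
        self.balances = [defaultdict(int) for _ in range(num_shards)]
        self.pending = deque()  # (deliver_at_block, dest_shard, account, amount)

    def transfer(self, src: int, acct: str, dst: int, dst_acct: str, amount: int):
        assert self.balances[src][acct] >= amount, "insufficient balance"
        self.balances[src][acct] -= amount                  # immediate debit
        self.pending.append((self.block + DELAY, dst, dst_acct, amount))

    def advance_block(self):
        self.block += 1
        while self.pending and self.pending[0][0] <= self.block:
            _, shard, acct, amount = self.pending.popleft()
            self.balances[shard][acct] += amount            # delayed credit
```

A contract written for a single-shard world has no way to express the "in flight" window between debit and credit, which is why existing immutable Eth1 contracts can't simply be made shard-aware after the fact.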
Right, so moving Eth1 into an EE opens up a number of questions, like: what do you change, what do you have to change, what can you change? Certain things like the DIFFICULTY opcode — what does that mean in this new context? You have to define those things. There are other opportunities: potentially migrating from Patricia Merkle trees to a binary Merkle tree. And another one of these opportunities is to open up the notion of sharding, as he was saying. You could easily just deploy it on one shard — it doesn't have any notion of sharding, it now exists in Eth2 in what is kind of a single-shard paradigm — but it does have access to merkle proofs about the entire data layer of Eth2 at that point, so you could get a lot more scalability out of some of these layer 2 schemes that were discussed earlier. One area of research, one thing that we'd like to push for, is to open up the sharded paradigm within Eth1: as he said, existing contracts wouldn't know about it, but new ones could. It's still definitely up for debate what would change during that migration process for Eth1. — Are there techniques for dealing with situations where you provide the pre-state data, but that data changes in between? Say I want to send you a transaction, but my transaction isn't mined until 10 blocks from now, and in three of those intermediate blocks your account balance changed. — Yeah, that's actually pretty easy to do just with the proof data. If you have two merkle proofs and they don't know about each other at the time of being submitted, and then one transaction gets processed, then the other proof is invalid as-is — but there's already information there to update, to refresh, the proof. — Even if that's several blocks, or maybe several days apart if the chain is congested or something? — Well, if it's several days, you would need all the intermediate transactions that were processed in the meantime; but, you know, you've held on to that information over the past couple of blocks. Yeah, and that's
what we've been referring to as state providers or relayers — that's where they come in, because they will have this role of updating proofs and so forth. — So, are these smart contracts associated with a specific EE, or can they run in all EEs too? — In the normal case, you'd probably have a notion of a smart contract being deployed within the confines of one environment. You probably do have the opportunity to think about new types of layer 2 constructs that, in a layer 2 context, kind of exist within multiple EEs, but that's a new area of research and design that hasn't really been explored much. — So if I create an EE, how can I develop smart contracts for it? — In most contexts, the EE would define some sort of account model, some sort of deployment model — all the things that you think of as Eth1-like. Account model, deployment, how transactions are interpreted and how they modify the state — that would all be definable via the EE. In the Eth1 EE, those mechanics will operate very similarly, but if you wrote your own EE and you wanted a notion of contracts, you might borrow some of those concepts or make new modifications, new ways in which things might exist and operate. So again, yes, these in some respects look like contracts, but they can also look like protocols, right — a user-layer kind of protocol via this new type of EE. — And I think one of the biggest differences between deploying your own EE versus deploying a contract — you know, a child contract of some parent EE — is that deploying the EE might cost, say, $1,000 (Vitalik pulled that number out of the air a few months ago), whereas deploying a contract, a child contract, underneath the EE would cost maybe a few cents. And paying the $1,000 to deploy an EE just means that the code is, like,
already there, and so are transactions within it — whereas if it's a child contract, maybe you have to pass in the code every time, and your throughput would be less. That's where the trade-offs are. — Yeah, and at the same time, you've also now fragmented your universe from the others: you now require asynchronous communication to another EE, so you're missing out on some potential synergies of just deploying into an existing EE. — And to speak to that $1,000: because the beacon chain is this component of the system I mentioned earlier — every node has to run this thing to be able to sync and run shards and sync data shards — we have to bound the state size. So that capital is likely burned or locked up; the economic model is still under investigation, but it would be capital-intensive, because the state needs to be succinct and bounded. — I prefer locked up, in the same way that if you want to become a validator, your validator account is an account underneath the beacon state. Every validator bloats the beacon state, so in order to limit the bloat of the beacon state it costs 32 ETH to become a validator — and it's not burned, it's just a deposit. So we can take the same approach with deploying EEs. — Right, and then you get the idea of an EE disappearing at some point; there are all sorts of things to explore there. — In the minimal dictatorial model, would you have arbitrary state and arbitrary computation on that state, even really state-intensive data? Do you charge rent for that data, or, you know, do you lock up some amount of capital to keep some amount of state, because everyone needs to hold this state? I'm just curious if you would allow, in this model, arbitrary state and arbitrary state computations. — Yeah, so I guess I should qualify: within the dictatorial model, where it is the architects of the system who restrict what EEs you can deploy — it might not make a difference if it's the type of EE where users can deploy any kind of code they want under that EE. So if
that is the case, then, you know — well, what layer are you operating at? Maybe it doesn't make a difference: users can run any code they want anyway, and the role of the EE is very minimal. But if these sanctioned EEs restrict the type of code that people can deploy, then yeah, that would be very dictatorial, which is certainly not the intention — that's the extreme end of it. — Yeah, and I also think it would be sad if that were the case, because you want people to have the freedom to try a lot of new things. — I think that's, yeah, that's our goal. — So my understanding is basically that each EE has, basically, its own account model, and it also defines how transactions can modify state — maybe a smart-contract-like EE such as Eth1, or maybe something simpler, just basic scripting. So one question: anyone can deploy these EEs, and the balances corresponding to each EE are stored per EE, right? And if there's, for example, an underflow bug that creates extra amounts of ether — intentionally or not — in a new EE, or maybe an existing EE, how do we prevent that from happening? Because it looks like everybody can deploy EEs and define these rules themselves. — Yeah, so balances of EEs are not going to be significantly different than, as an example, a validator balance. There's an ability to deposit funds into an EE, so as far as funds being printed out of nowhere, that wouldn't be an issue. But what could happen is, just like in Eth1 you need a very significant audit on a smart contract, an EE would also need to go through that same thing — because you wouldn't want users to bring their funds into it and then have it steal their funds, right? — So, just to be clear on the notion: the only thing that owns ETH in this model is validator accounts on
the beacon chain — from the layer one perspective — and these EEs: they have a certain amount of ETH assigned to them, and then within the EE there's some sort of account structure that gives you the right to access it within the EE, or to transfer to another EE, or to become a validator. So the minting of ETH outside of the normal validation mechanism would be a layer one protocol bug — that's something the kind of sandboxed world of any EE, which is just Wasm code, should not be able to do, and if it did, that would be a layer one bug. — Well, I guess the question was: what if inside the accounts of that EE there is some minting of ETH? — And I'm saying it's not possible unless there were a bug in the layer one protocol. — What about a bug in the EE? — No, because the EE only has access to the ETH that is in its account. So there's 10 ETH in this EE, and there's some sort of sub-account model that allows it to be moved around, but nothing that moves within that EE actually changes the view of the amount of ETH within the layer one protocol. — But it might suddenly have 11 ETH inside its internals. — And I'm saying the only way that is possible would be if there was a bug in the layer one protocol, not within the EE. The EE itself might be buggy — — You're saying the same things. — You might have, say, a buggy EE with a shitty account model, and now there are 10 accounts, all of them have one ETH except one of them has two; but in terms of their right to that ETH within the actual entirety of the layer one system, there's only 10. If you write bad EEs, you can have a terrible user experience, I think. — There are interesting questions with respect to, like, an exchange that might honor that. I think the way to think about it in Eth2 is that within the EE you no longer have ETH; basically, you have
a representation of ETH; there's a disconnect there. — Yes, you could talk about it like wrapped ether — I don't love talking about it as a wrap, but that's one way to see how it works. — Let me make one more comment: it turns out that these EEs we're talking about — you can basically build an EE inside of Ethereum 1. That is actually what these rollup chains are. In reality, because we have these balances, you go into one of these rollup chains, and inside the rollup chain you might have coded it wrong, you might have some issue, and you go out — it turns out there is a very, very one-to-one correspondence. Of course, if you do it in Eth1 you don't have the scale, and it's also much less efficient, but these concepts are somehow not that special — they're not enshrined. Anyway, I don't know exactly what I'm trying to say, but thank you for listening. — We have a few minutes left, like one or two. Does anybody that hasn't spoken yet want to speak up? Now is your last chance to ask a question. — So my understanding of EEs is very rudimentary, but shouldn't beacon chain validators catch the bug? If there's a bug in the EE which inflates the wrapped ether — the EE's ether within that contract — shouldn't they just say, oh shit, there's a bug, it's just trying to inflate its own internal accounting, and just not execute that new state that the EE proposes? — The beacon chain — okay, the validators — would catch an EE actually going from 10 to 11 ETH out of nowhere. But if the EE has a field under its own state that says wrapped ether, and it doesn't check, doesn't guarantee, that the wrapped ether amount is equal to the actual ether amount in the beacon state field, then the EE's wrapped ether balance could just inflate. Just like in a contract on Eth1, the wrapped ether doesn't have to be equal to the ether if there's a bug in the contract. — Okay, so beacon chain validators are not really looking out for these kinds of issues? — Yeah, part of the
system is such that — partly due to the fact that this is a stateless system — validators don't have to know about EEs. They know the code exists; they know that, given a block, they can execute on a pre-state root and a block into a post-state root; but they don't have to care about the internals at all to validate the protocol. And if they did, the system would likely be much less scalable, because they'd have to know about all those internals. So it definitely segments things in a way that the validating participants don't have to worry about the internals, and thus would not catch that. That part of the protocol is just to execute the code of the EE, not to look inside the EE and ask whether it makes sense. Just like in Eth1, miners just run the EVM code. — Thank you, good question. — Very quickly, anybody else that hasn't spoken yet? This is your chance — you have the researchers here. Maybe one question. — Adding to this: so now, as a dapp developer, I know that I have to trust the blockchain, but I also have to trust the execution environment creators — that they don't have a bug? — True. If you're going to run on an EE, it should be well-vetted, it should be formally verified, and it should be something that makes sense for you to actually operate on. I expect a lot of activity to happen on large, well-vetted, well-tooled EEs, even if there's experimentation with all sorts of other things. — Also, that's the status quo at the moment: you need to trust the Eth1 blockchain and the EVM — the EVM is the execution environment. It's the status quo. — What if I want to make sure, so I just stick to the EVM, because I know it's trustworthy and proven? — Right — if there's a bug right now in the EVM, we might socially coordinate to fix it, whereas if there was a bug in some random EE, we very likely wouldn't. — Well, yeah, but for an EE there's also formal verification, so hopefully you don't have to worry about that. — I just want to go back to the first part, about optimistic rollup, and I
didn't really understand — because you asked how, as a dapp developer, we could use these new features. I didn't really understand how it would be made available to a dapp developer. I understood that not much would change and most could use it, but what about the tooling around it — how can we actually pick it up? — Great question. So, optimistic rollup: the biggest thing that I like about optimistic rollup is that we can have the same developer experience — or as similar as possible. The kinds of changes that Danny was talking about — the block difficulty, we have to change msg.sender, we have to do a kind of stretch, and there are just some weird requirements — a couple of things will change, but your smart contracts will work kind of as they do today. Basically it would require forking — forking Waffle, Truffle, whatever — and adding in the little changes, but you would write a smart contract, deploy the smart contract, and you'd have a quote-unquote unstoppable Ethereum smart contract in layer 2. And that's definitely a goal — a goal for all of us. — I have a question: can you put an optimistic rollup chain inside of an optimistic rollup chain?
Absolutely, yes. So Vitalik gave me this crazy, mind-blowing experience where he was talking about — he was like, okay, you can think of optimistic rollup as different execution zones. The deeper you get in this recursive optimistic rollup thing, the more computation you have to do, because you have to do the optimistic rollup at the bottom level, the optimistic rollup above it, and then up to the main chain. So it's a massive amount of computation, but that gives you scale: because you're doing more compute, you can scale further. And so you could even design a system where you're doing this intentionally, where you're segmenting different levels — here's the deepest level, at some amount of depth. Anyway, I don't think that's really all that practical; I think we'll go one level deep and probably get 90%, 99% of the way there, and then maybe throw in another level because we're crazy — but we can, and we could. And that's, I think, the big thing that I really want to communicate: the EVM as it is, is this Turing-complete state machine, and we should all be really grateful for how general-purpose it can be. You can write these amazing systems, like execution environments, like optimistic rollup, and then you can write those systems inside of themselves. This computation stuff that we're dealing with is pretty wacky, and I definitely, definitely enjoy it. — All right, I think we're going to make a cut there, on those beautiful words about general-purpose computing. As you may have noticed during the session, this is all still in very, very early stages. That means opportunities to affect where we go with the system; it also means that it's not ready for end-user developers or end-user applications yet. However, I would encourage you to join the conversation now, to explore these new tools, and to think of ways we can use them once we get them out there.
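The nesting trade-off described above — each rollup level batches many items into one commitment for the level above, while a worst-case dispute has to re-execute every level it passes through — can be put into toy arithmetic. All numbers and names here are illustrative assumptions, not figures from the talk.

```python
def nested_rollup(num_txs: int, batch: int, depth: int):
    """Toy model of recursive optimistic rollup.
    Each level batches `batch` items into one commitment for the level
    above (shrinking the base layer's load), while a full fraud-proof
    replay must re-execute every level on the way down."""
    items = num_txs     # items the current level must commit to
    replay_cost = 0     # work to re-execute on a disputed block
    for _ in range(depth):
        replay_cost += items            # replay this entire level
        items = -(-items // batch)      # ceil division: commitments passed up
    return items, replay_cost
```

With 1000 transactions and a batch factor of 10, one level already cuts the base layer's load by 10x; a second level cuts it by 100x while adding only about 10% more replay work on disputes — which echoes the remark that one level deep gets you most of the way there.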