The concerns that this proposal is meant to address mainly have to do with things that people have been complaining about for months, the biggest ones being fee markets and relay markets. So let me explain the problem first. The status quo is basically that you have a bunch of shards, and each of these shards has blocks, but the shards only hear about each other on a very delayed schedule. This is because somewhere in the middle here you have a crosslink into the beacon chain, and this takes about an epoch, so in the status quo proposal, at the base layer, for one shard to learn the state roots of other shards takes about six minutes. This is obviously bad if you want to be able to have any kind of cross-shard dapps. The proposal we've been suggesting to get around this basically has to do with optimistic state. You have this optimistic mechanism that says: if you're over here, you can make a kind of conditional block, and this conditional block just says, "I think the state root of this shard is actually this value, and if it is, then the output is going to be this." You just have these blocks, and they exist as a layer-two thing; there would be different layer-two protocols for achieving this, and at some point later there would be some process over here that sorts through all these blocks, figures out which ones are correct and which ones are not, and threads everything together, right?
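As a rough sketch of that conditional-block mechanism, assuming a minimal toy model (all names and structures here are invented for illustration, not from any spec):

```python
from dataclasses import dataclass

# Toy model of a layer-two "conditional block": it guesses another shard's
# state root and only counts once that guess is confirmed by the later
# sorting/gluing process described above.

@dataclass
class ConditionalBlock:
    shard: int                 # shard this block was built on
    assumed_shard: int         # shard whose state root is being guessed
    assumed_state_root: str    # the guessed state root
    output_state_root: str     # resulting state root if the guess holds

def thread_together(blocks, confirmed_roots):
    """Keep only blocks whose assumption matched what actually happened."""
    return [b for b in blocks
            if confirmed_roots.get(b.assumed_shard) == b.assumed_state_root]
```

So a block whose guessed root turns out wrong is simply dropped by the gluing process, and everything that depended on it drops with it.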
So, step one: blocks, or packages, over here publish ahead of time opinions about what happened in other shards. These opinions are not necessarily 100% correct, and nothing is based on those opinions until the dependencies resolve; clients can basically figure out ahead of time how these blocks will get glued together, but then there is a process that glues these blocks together on chain in the future, and execution environments are supposed to be able to implement this.

The reason this is not very nice is that if you look at it from the point of view of a transaction sender: I'm Alice, I have coins on this shard, and I want to send some coins over to Bob, who is over here, and then Bob wants to be able to do things with those coins immediately, like sending them to Charlie, or putting them into Uniswap, which will automatically send some tokens to Charlie. But if Alice sends coins to Bob, the underlying base-protocol-level ether does not get over here immediately, so Bob does not yet have protocol-level ether; Bob has a kind of voodoo optimistic ether, which will eventually become protocol-level ether. But because this voodoo ether is a layer-two thing, we can't trust block proposers to understand what voodoo ether is, because block proposers are not expected to speak all of these execution-environment languages. So basically Bob has nothing with which to pay block proposers until, some time later in the future, his ether becomes real, which happens over here. Bob theoretically has coins but cannot pay transaction fees. This has historically been solved with the magic catchphrase "relay markets," but we've been thinking about relay markets recently and realizing more and more that, first of all, this design seems to minimize layer-one complexity but has a lot of layer-two complexity around all these optimistic games, and relay markets get monopolized easily, there are censorship issues, and a bunch of other nasty stuff. So that's the problem.

Here is the solution. The solution basically says: we have a bunch of different shards, and what we're going to do is have a crosslink in every slot. These shards are going to crosslink into the beacon block; then the shard blocks of the next slot are going to be aware of that beacon block, they're going to do things, and they're going to be included over here; then you have another beacon block, and over here you have more shard blocks. So you have a much tighter coupling between the shards, where any block on any one shard is aware of everything that happened on all shards in previous slots. The reason this is nice is that on top of it we can also add functionality for moving ether between shards within a single slot. If Alice has some coins over here, Alice can just make a receipt that says "hey, I want to give coins to Bob," and you can even have functionality where the underlying execution environments can transfer ether from one shard to the other. Then Bob, who is going to be over here, will basically be able to include the receipt, and Bob actually has ETH; and because Bob actually has ETH, Bob can immediately use it to do things like paying transaction fees.

So when we look at what execution environments are for, they serve several different roles in eth2. One is that they describe account semantics: balances, UTXOs, receipts. They also create a storage system. And they also had to handle
cross-shard transfers; they had to support things like optimistic state; and they had to come up with special-purpose fee-payment strategies. So a full execution environment had to have all of these components, and this proposal basically says you don't need to do that anymore, because the protocol is tightly connected enough to just do this stuff natively.

To poke a bit more deeply into how fee payment could happen: you have Alice over here, and Alice sends a transaction; the transaction runs some execution, and one of the outputs that execution can produce is an opcode for fee payment. Fee payment is basically an opcode that says "take ether out of the current execution environment and pay it to the current proposer." This is basically the PAYGAS proposal: you have a top-level opcode that just says "take ether out of the execution environment and give it as a fee," and Bob, or whoever is the block proposer, does not need to understand execution environments, because Bob can just run the code and see whether he gets paid the fee; or rather, not just Bob specifically: any proposer knows whether the proposer will get paid the fee. There's more you can do on top; for example, if you want to allow transactions to be merged so they share Merkle proofs and all that nice stuff, you can do this, but that's more detail. So the big question here is: what sacrifices do we have to make to get this?
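A hedged sketch of that PAYGAS-style flow, under invented names and a toy dict-based state: the EE's execution ends by declaring a fee, the protocol moves that much ether from the EE's balance to the proposer, and the proposer only needs to run the code and observe the declared fee, never to understand the EE's internals.

```python
# Toy model of PAYGAS-style fee payment. `run_tx` stands in for opaque EE
# code and returns the fee that code declares via its fee-payment opcode.

def apply_transaction(balances, ee, proposer, run_tx, min_fee):
    """Run a transaction and move the declared fee from the EE to the proposer."""
    declared_fee = run_tx()
    if declared_fee < min_fee or balances[ee] < declared_fee:
        # The proposer would not be paid, so it simply skips the transaction.
        raise ValueError("proposer would not be paid; transaction skipped")
    balances[ee] -= declared_fee
    balances[proposer] += declared_fee
    return balances
```

The design point is that the fee check is a pure "run it and look at the result" check, so it works uniformly across all execution environments.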
If we do this without changing parameters, then the overhead is that the beacon chain, or specifically the attestation-processing overhead, is going to increase by a factor of 64. The other problem is that the number of active validators we would need per slot, in order to actually process every shard in every slot, goes up: with a committee size of 128 and 1,024 shards you would want 128 times 1,024 validators per slot, and if we still have an epoch length of 64, then we would actually need 128 times 1,024 times 64 validators in total. So there would need to be more ether staked than actually exists to be able to process everything in a single slot, or else we have to cut down the epoch length; and if we cut down the epoch length, then we compress things even more and really blow up the chain.

So the second part of this proposal is some parameter tweaking, and the tweaking I'm suggesting is: shard count goes from 1,024 down to 64, and shard block size goes up from 16 kB target / 64 kB max to 128 kB target / 512 kB max. Altogether this is divided by 16 and multiplied by 8, so we get one half the total system throughput, but I'm arguing that this is fine. Actually, we're also doubling the slot time: instead of small slot times we would have larger ones, so there's another factor of two, and we would be cutting throughput down to about a quarter of what it was. My argument is that this is fine because realistically there's not going to be a 1,000x increase in the number of users anyway, and we can just start these numbers a little conservative and ratchet them up over time. There are other optimizations we can do as well.
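The throughput arithmetic above, spelled out:

```python
# Throughput ratio implied by the parameter changes being proposed.
old_shards, new_shards = 1024, 64            # shard count: divided by 16
old_target_kb, new_target_kb = 16, 128       # per-shard target block size: times 8
slot_time_factor = 1 / 2                     # slot time roughly doubles

ratio = (new_shards / old_shards) * (new_target_kb / old_target_kb) * slot_time_factor
print(ratio)  # 0.25, i.e. about a quarter of the old total throughput
```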
First of all, just to clarify: in this architecture, even though you have a crosslink every slot, from the point of view of FFG you would still have epochs, and FFG epochs would still have a length of 64 slots; if we want, we can make it 32.

"What's the tradeoff when you reduce that?"

Okay, so let's do the overhead analysis. From an overhead point of view, there are a couple of major components. One is EC pairings; another is EC additions and state reads; and a third is the end-of-epoch balance updating. Pairings are proportional to the number of crosslinks that you have, but not to the total number of validators, so pairings are the thing this proposal might make go up by a factor of 4; but I have an idea for making them go up by only a factor of 2 instead: the idea is basically that delayed attestations just get merged, because you have to publish something extra for a delayed attestation anyway. EC additions are proportional to the number of validators; this is basically validators divided by epoch length, and the end-of-epoch cost is also validators divided by epoch length, so if the epoch length goes down, then more blocks will be these end-of-epoch blocks that take a really large amount of time to process. So we have some kind of trade-off to make here.

"Would you agree that, as things stand, end-of-epoch blocks take considerably longer to process than blocks full of attestations?"

No, I think it's actually the other way.

"I think even that cost comes mostly from the attestations. At least in the Lighthouse benchmarks that I've seen, the epoch-processing portion, even at something like 4 million validators, is a fraction of the actual block time, so
for this particular block, in which case the beacon chain now stands behind this block. That's option one. Option two is that less than two-thirds of the committee votes support this block, and then there are two sub-cases. Option 2A is that two-thirds of the votes support something else; in that case this block is gone, and the transaction should just wait until it gets included somewhere else. Option 2B is that maybe there are two blocks and neither got two-thirds: say this one got 40%, this one got 40%, and the other 20% are just offline. This happens. These could be two conflicting blocks, in which case the proposer gets slashed, or one of them could be the zero block: if the proposer published a little late, you might get a 50/50 split. The third possibility is that there actually are 70% of votes for a block, but the beacon block proposer is being a little evil and deciding not to include things. If that happens, I'll just draw it out: say you have more beacon blocks going include, include, include, but this crosslink is not getting included. Then a proposal over here would be pointing to this, and it would also have another pointer to this. So this is shard zero, slot 4; a slot 4 proposal, if it's pointing to a beacon chain where the previously included slot is less than 3, has to give an opinion about what happened in slot 3. Then, supposing without loss of generality that this block is legitimate, a later beacon block is going to include a crosslink, and that crosslink is going to carry an opinion about this and an opinion about this, and so this gets included. So there is a delayed-inclusion functionality. The only thing you lose here is that cross-shard communication is delayed one slot, because this block is not aware of that one; otherwise you just have to wait. And the final level of confirmation is when this beacon block gets finalized, and then Alice knows the transaction is finalized.

So the only forking that can happen with respect to a shard chain, in the general case, is if the actual beacon chain forks, because each shard block contains a reference to a previous beacon block; only via the beacon chain could you fork the shard chain. And in this abnormal case where you have an emergency shard chain, you do have less confirmation, but it hasn't yet communicated with any other shards, so you don't have the issue of shards being dependent on each other.

"If we assume that one of the shards is completely adversarial and puts in transactions that aren't a valid state transition, since everybody there is adversarial, will it be accepted by the beacon chain, or does the beacon chain actually do some sort of validation on the state transition?"
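The confirmation ladder just described, from the crosslink-vote step onward, can be sketched as a small classifier over committee vote tallies (a toy model with invented names; real crosslink processing is more involved):

```python
def crosslink_outcome(votes, committee_size):
    """votes: mapping from candidate shard-block id to number of committee votes.

    Returns the block the beacon chain stands behind, or None if no candidate
    reached the two-thirds threshold, in which case inclusion is delayed and a
    later crosslink must give an opinion on this slot.
    """
    threshold = 2 * committee_size / 3
    for block, count in votes.items():
        if count >= threshold:
            return block       # option 1 / 2A: some block got 2/3 support
    return None                # option 2B: e.g. a 40% / 40% / 20%-offline split
```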
Well, it depends what we mean by that, so let me list the actors. One actor is shard proposers; another is committees. If the proposer is malicious and proposes a bad block, then generally the committee will not accept it. Another possibility is that the proposer is malicious and the committee is also malicious: the committee votes for the malicious thing, and then the beacon block erroneously points to invalid shard chain data. If this happens, then first of all the security assumptions are already violated, so to recover from this kind of situation... I actually wrote a separate proposal about this. There are basically two possibilities. One is that if you have this bad block, it will eventually be discovered through fraud proofs, and when it is, everything gets reverted; you can't really revert less than everything, because in practice the dependency graphs, especially between shards, are going to be super tight. The second proposal is that you have a rollup-like scheme where, if data is unavailable, you revert, but if it's merely invalid, you don't revert the chain: you revert the affected shard state within the chain. The idea behind this is that it allows fraud-proof assumptions to be more subjective, more of a client-side choice, instead of forcing a security assumption on everyone. But these things will only happen in the extreme case where a committee gets broken by an attacker, which in general requires more than one-third of validators.

"Is it not possible for two conflicting shard chain blocks to be included, not on the same beacon chain, but separately on two different beacon chains?"

Oh, on different beacon chains. Okay, so what you're saying is that there are two beacon chains, and a shard block gets included over here, and then over here, say, this one gets included here; is that what you're saying? So in this diagram the hollow circles are beacon chain blocks and the full ones are shard chain blocks, right, and these are the hash pointers.

"So in that fork at the end, the dot at the top is one shard block and the bottom is a different shard block, and they've ended up in a fork."

Yeah, and this is fine. The shard chains just follow the beacon chains: whichever beacon chain has the majority wins, because when you attest to a shard block you're also attesting to the head of the beacon chain, so you're giving weight to the fork choice of the beacon chain, and the shard fork is resolved as a result of the beacon chain's fork choice. And this kind of consensus, where a child chain's fork choice rule is determined by a parent chain, already exists today; the shard chains don't strictly need to follow the beacon chain's fork choice, and theoretically they could even run a different one if you really wanted to.

"I want to ask a question about the previous proposal that you shared around interop time, the alternative phase 2 proposal, with removing persistent state roots from shards and executing code on beacon blocks. I think some of us liked that proposal because we felt it offered
transparency. In this radical new proposal, are we returning to persistent shards, or is there a way to combine the two?"

I actually think the two are in a similar philosophical direction, and the reasons I would give are: one, because the shards are tightly connected to each other, you could theoretically have designs where things happen on one shard and then just get moved over to another shard; the synchrony here makes that easier. And second, in the proposal where execution happens on the beacon chain, if that's what's done, then the way the system would be used is that shard chains would generally be a rollup, and this proposal is pushing in a similar direction, I think.

"In the part where a slot on the shard chain was skipped, slots 3 and 4: the state is still on the shard chain, correct?"

Yes.

"So how could you do the state move on the shards?"

In this particular case the move would be delayed. In any other case, basically every execution environment, every one of these lines, is going to be aware of all the post-state roots over here, so you could have mechanisms that remove tokens, consume tokens; you could have an EE that just swaps, that would state-move to someone else in that slot. In general, what an EE's execution is aware of is always the previous state root of its own shard, plus recent state roots of other shards, but the way you use that abstraction can be a lot of things. The fact that this proposal is biasing toward more layer-one communication does push it more toward this idea of shards as just being things that you do computation in.

"Is there any reason you want to keep the state on the shard, just to have that kind of synchronization as an option, or is there no real reason to keep it there?"

Oh, you mean over here?

"Yeah. So if you want that feature, then we need to keep the state on the shard, correct?"

Well, every shard block is going to point to its previous block. I think what Kees was asking is whether we could really just remove the state from the shard entirely. Okay, so in this proposal, one question is: why not simplify even more radically and just say that if this shard block does not get included, then it's just gone? If you do that, then you really are removing state from shards completely, because the only state that exists is state that gets passed to the beacon chain.

"Doesn't that give a lot of power to each proposer?"

Yes, basically. Way too much power, because a beacon proposer can just say "I don't like that block, I'm not going to include it, and not that block either, unless you give me some sort of profit," obviously. If we want to make things slightly better, we could say something like: there's a period of two slots within which to include these things, and that would turn the power into a one-of-two. But I see Justin nodding his head, nodding in the horizontal direction, so: yes.

"Sorry, I don't know if this is quite what we said, but in this new model, if you DoS the block proposers, this is one of the difficulties: all the shards halt, which means cross-shard communication halts. What are the thoughts around that?"

This is one of the reasons Justin is excited about secret leader election. Also, in the long term, all the optimistic stuff that we might be avoiding in the short term will, I think, still be valuable. Secret leader election protocols are not too scary, especially if we do one based on ring signatures; but then the next thing is the Tor integration:
basically it's a two-step process. The first step is pre-registration, where you send a message which is secretly attached to your identity, but people don't know it's attached to your identity yet, and this has to be done over Tor so that they can't do the linking; then, when you produce the block, you reveal your identity. So it's not that useful to do secret leader election until we have the Tor part.

"Okay, that's good rationale for delaying it until phase two or three. What do you mean by cross-shard communication halting?"

Basically, assume all block proposers go bad for a while. Then you have shard blocks, but nothing gets included, so you'd have a bunch of further shard blocks, and they would only be aware of each other within their own shard, so you would have no cross-shard communication until eventually things come back.

"Let's assume some of them work and some of them don't. If we just include, in each shard block, a list of shards it has seen, then we could do communication with those."

Yeah, you're right. I think what he might be saying is that you could also have communication graphs that are partially intertwined; for example, you could require this block to have an opinion on these two, and so on in an interdependent way. I guess the problem there is that when things actually have to come together, you would have to include these, and then based on that include these, and based on that include these, so the algorithm for catching up would be more complex.

"So my understanding is that it's possible to execute all shard blocks once a shard block is included, yes? In that case, is there any incentive to include them, because you could always just not? And how are the cross-shard transactions resolved in the next block: are they forcibly applied, or is it optional?"

In the proposal that I have, the total ether netting is forcibly done, but the receipts you have to apply yourself; they're an execution-environment-level thing.

"How do you make sure it must be successful?"

The approach in my proposal is that you require each shard block, say this is shard index one, to publish a Merkle tree that says how much total ether it is sending to each other shard, and then this receiving shard will have to include the corresponding Merkle branch here, and it will have to include that same Merkle branch here. That's my proposal; you could also make it more forgiving.

"And the follow-up question: suppose one of the shard blocks is not included in the beacon block and only gets included later; does that mean the following shard blocks will be required to include all of its cross-shard messages?"

It's not going to be forced, but, just as on eth1, there will still be a limit on load, so there is a limit on how much you can catch up within one block.

"On cross-shard transactions: do you have assumptions or expectations on what percentage of transactions will be cross-shard, and how that affects the size of the beacon chain?"
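The per-shard ether-netting commitment described a moment ago could look roughly like this: a toy binary Merkle tree over the "total ether sent to each shard" vector, whose root each shard block publishes and whose branches receiving shards include. The hashing and padding details here are invented for illustration.

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def netting_root(amounts_per_shard):
    """Merkle root over the list of total ether amounts sent to each shard."""
    nodes = [_h(a.to_bytes(8, "big")) for a in amounts_per_shard]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])        # duplicate last node to pad
        nodes = [_h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]
```

A receiving shard would then verify a Merkle branch from this root for its own index before crediting the incoming ether.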
So here I'm generally assuming that a large portion of transactions will be cross-shard. The beacon chain is not required to literally include a full blob of data for every single cross-shard transaction; most of it happens through receipt passing. Basically the only thing you need is for shard chains to be aware of state roots from all the other shard chains, and if ETH wants to move cross-shard, then you need some kind of basic ETH transfer capability between different execution environments on different shards. So, we're here to talk about eth2 execution in general, and this is highly relevant just because this reconfiguring of the architecture changes how we might think about cross-shard communication, state execution, and things like that. Were there any other questions?

"Can I ask another question? In this proposal the fee payments seem to be much simpler, so maybe the fee markets are going to be way simpler, but we still have the question of how end users actually propose their transactions, and of state providers; maybe you want to talk a bit about that."

Sure. State providers would still need to keep an eye on all 64 shards. If I'm a user, one thing you could do is say "oh, I have full state for this shard, and I can just self-package my transaction." Another thing you could do is use light-client protocols: if you don't have full state on some shard for some execution environment, you would just talk to a server that does; the server gives you the Merkle branches, and then you package and broadcast.

"Right, so it simplifies the fee market, but there's still a notion of state providers, relayers..."

I don't know about that term; I feel like it applies to all sorts of things. But yes, there's still an idea of these other, somewhat weird markets in the general sense: hash-preimage markets, fee markets.

"One concern we had, which was even bigger in the previous proposal because there were 1,024 shards the state provider would have to keep an eye on, though it's a much lower number now: I think the reason we liked the pre-interop proposal was that you could maybe have some kind of scheduling, figuring out which shards host an execution environment, so the state provider could restrict itself to a few."

I mean, here you can also have execution environments that are restricted to a few shards. So, during this conference I've come to a conclusion that is kind of funny: the issue of, in general, aggregating transactions or aggregating state isn't something the eth2 specification actually addresses. It's more that eth2 is initially mostly about achieving consensus, and state and execution are kind of abstracted away; which is sort of funny, because in eth1, all we've ever been doing is worrying about state and execution. The consensus portion of eth1 basically never breaks, so all that ever breaks in eth1 is state and execution. There are definitely both bad and good aspects to this. One good aspect is that it means that, if we want, we can just bring eth1's state and execution rules almost as they are inside of this, with some abstraction added. Another aspect, obviously, is that it means work is being done to allow lots of different kinds of things to work, but less work is being done to create one specific very good thing that works. But that's parallelizable; it just means we need more teams dedicated to creating a specific thing that works. And one benefit is
that it means that like if we do have a kind of good solid consensus layer that provides kind of enough base functionality that people want then like you can see we can see working on those other things so I do feel like we actually already have like a couple teams working with like the one the one good thing and that's like E1 right yeah I think like making E1 be a popular making E1 plus like simple cross-chart receive passing like be a popular execution environment at the beginning is totally fine yeah it seems to be kind of fairly network attached to preserving I had a conversation with some members of your team you know suggesting geth in the context of E1 it's like still like a crucial portion or some version or iteration geth is a crucial portion of this software stack it is that view into the state that view into syncing it like well there's a lot of things like E1 clients that are like outside of any specification like one thing that I mean there's obviously like there's transaction handling they're handling these things there's actually no need to specify these things because there's always better ways to do them in any kind of software and this is just like a software problem and I feel it's really beautiful that E2 took the step and actually like sort of said this is not our problem with the introduction of E1 it just means that there's someone else's problem we do actually have like a pretty good idea of like what that problem is because all we've ever been doing we've been living inside this one specific EE that is people right now so it's kind of like we already know what's going to happen Is that make the new model is based on the blockchain slots in is the iPod still being using in this new model? 
Epochs are still being used, for FFG. You still divide slots into epochs, and all validators are called upon to attest once per epoch, both to crosslink and to vote on FFG, so the cycle of FFG is still in large units of slots. The only thing this proposal does, essentially, is take that shuffling of committees and, rather than having a different set of shards per slot, keep the same shards each slot. The rest of the mechanics are highly similar, and I'm actually in the middle of an exercise right now of essentially removing the notion of a crosslink from phase 0, so that phase 0, in terms of fork choice and in terms of what the validator is doing and voting on, is all very much the same.

Cool, we have a lot of questions. What limits communication between EEs? Will one EE be able to call another EE synchronously?

My main answer to this would be that then you would have to deal with EE-level re-entrancy, which just seems way too complicated. Well, if we could do it, I guess the post-state of the execution would then be like two state roots. The idea is that right now, if EEs can't call each other, it's more like you have A, then B, then C. But here you would have A, and then you might have B, and B just decides it's going to call A inside of here, and A decides it's going to call B inside of here, then B ends, then A ends, then B ends again, then you have C, and then B again over here. There are benefits, but you can see how this is a potential security nightmare.

Even if we don't have it fully synchronous, could it maybe be done within the same shard block: if they start in the same block, do the one first and then refer to it?

One thing we can do, I think, is allow basically free receipt passing from one execution to separate ones. I think that's a really good idea.

Don't do that, then you can't parallelize.

There is actually one very good reason to allow receipt passing between separate executions of EEs, which is basically that if one updates the Merkle tree, then the other might have witnesses that are stale and would need updating. So it's generally good for the sanity of parallelization.

Can you explain that?

If you have no ability to communicate between different EEs, then you could just say one thread runs all of A, one thread runs all of B, one thread runs all of C, and you add everything up at the end. That might even be good, because that way we would be able to get blocks produced and verified more quickly.

But then you'd store your assets in a different EE.

That's true.

Which means all the EEs have to...

Right, this is true. So maybe allowing more communication is good.

Should EEs be open for anyone to deploy, or restricted?

I didn't know we were on that yet; I thought we were on "no phase 2, just phase 1". We thought about this. One of the big reasons phase 2 is nice is that we get a built-in fee market, so the alternative is phase 1 plus a minimal fee market; but if we do that, then a minimal fee market requires ETH to be owned by smart contracts.

Does this mean the current eth1 chain keeps running in parallel forever?

No, it just means the eth1 chain becomes a rollup contract.

Are you saying you still need to execute it?
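The per-EE parallel execution and receipt-passing idea described in this exchange can be sketched roughly as follows. This is purely illustrative: the balance-map state model, the `ee:` address prefix, and all function names are invented here, not part of any specification. The point is that if EEs cannot call each other, each EE's transactions form an independent thread, and cross-EE transfers become receipts consumed in a later block.

```python
# Hypothetical sketch: executing independent EEs in parallel.
# With no EE-to-EE calls there is no re-entrancy, so each EE can run
# in its own thread and cross-EE effects are deferred as receipts.
from concurrent.futures import ThreadPoolExecutor

def run_ee(ee_name, pre_state, txs):
    """Toy EE: state is a balance map, txs are (sender, receiver, amount)."""
    state = dict(pre_state)
    receipts = []
    for sender, receiver, amount in txs:
        if state.get(sender, 0) >= amount:
            state[sender] -= amount
            if receiver.startswith("ee:"):
                # Cross-EE transfer: emit a receipt instead of writing
                # directly into the other EE's state.
                receipts.append((ee_name, receiver, amount))
            else:
                state[receiver] = state.get(receiver, 0) + amount
    return ee_name, state, receipts

def execute_block(ees):
    """Run every EE in its own thread, then collect states and receipts."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda e: run_ee(*e), ees))
    post_states = {name: st for name, st, _ in results}
    # Receipts would be applied by the target EE in a later block.
    all_receipts = [r for _, _, rs in results for r in rs]
    return post_states, all_receipts

ees = [("A", {"alice": 10}, [("alice", "ee:B/bob", 4)]),
       ("B", {"bob": 1}, [])]
states, receipts = execute_block(ees)
# states["A"]["alice"] is now 6, and one receipt targets EE B.
```

Because no thread ever reads another EE's state mid-block, the threads cannot race, which is exactly the "one thread runs all of A, one thread runs all of B" property being discussed.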
Yeah, you still need to execute it. Even if you have a rollup chain, proposers need to execute it, because proposers need to know what they're supposed to include and what's actually going to give them fees. But if we go rollup-style, then crosslink committees would theoretically not need to vote on execution results, which could be interesting. You could have a setup where voting on execution results is delayed a bit.

No, that's not good, because that hurts cross-shard operation. But why do we even need shards, if we can achieve the same data availability throughput in an unsharded system with erasure-coded data availability proofs?

I guess because that kind of centralizes proposing.

Well, I think that if you do that, then you might as well say you have no execution on shards, and then you don't need shards. When you look at the data availability chain proposals, and there have been a couple of them recently, it's an interesting idea: you go full minimalism on the shards and say the whole thing is supposed to be a rollup. But the proposals that I've seen do allow different people to submit different block roots in parallel, and that is sharding. I think any system that doesn't bottleneck proposing, so that one person needs to be aware of everything, is sharding by that definition.

There's definitely a trade-off between how much you abstract versus the market complexity, and that's been one of the big things over the last six months.

Do you envision that the execution actually happens on the shard client itself, or do we need external clients to be maintaining this?
Running the code would be done by shard clients, because you have to check the state roots. User-specific activity, like maintaining state and generating witnesses, all of that is definitely terrain that things like existing clients need to continue to exist for.

What were your points earlier about the challenges with relayer markets?

I think the main challenge with relayer markets, and this is a common opinion, one I've held for the last two or three months, is this: it's good that we have optimistic rollups and ZK rollups happening on eth1, because that means there are more people thinking about and experimenting with relayer markets, and that community seems to be coming to the conclusion that rollup chains are likely to have monopoly proposers. The argument is basically that as soon as there's one party that's better at sequencing than others, they only need to be slightly better than everyone else, and this potentially carries censorship risks. In a layer 2 context you can obviously switch to a different system, but in a layer 1 context you can't, and so at layer 1 you want more security than that can provide.

So the next one, which we talked about this morning and which is contentious among some of the designers of these systems: should EEs be open for anyone to deploy, or restricted?
I'll give a quick sketch of the range. On one end, you could deploy the system such that EEs can be deployed by anyone, with a certain economic model for deploying them, probably very expensive, then let people play with them, let standards emerge, and just see what happens. On the other extreme, some participants argue that maybe we should deploy one EE, a very good EE that feels like Ethereum, gives a very clear user story, and allows people to build up and begin operating in this new sharded context, and then eventually open up EE deployment and let people experiment. The second proposal was called dictatorial this morning, but at the same time it arguably provides a bit more of a guarantee than Ethereum today, so it's still to be debated. And the second proposal does still allow EEs in a sense, because you can build them on top of this nice base.

Yes, so I guess the question is what it means to deploy a top-level EE versus deploying other EEs as contracts inside it. Presumably that top-level EE would be one of two big EEs that some big teams are going to develop, so what would be the harm in letting anyone experiment with others in the meantime?

One distinction has to do with state. One of the remaining roles of an EE is describing the user-level state structure, and if we have one anointed EE and let people build EEs inside that EE, then no matter how successful the subsequent EEs are, because they depend on the original EE, whatever the state model of the original EE is, that state is state that basically everyone will need to hold. Whereas if EEs are more parallel with each other, then if one EE calls into another, fewer people need to store that state. It might become slightly more difficult to do things, but other people don't need to be bothered with that data. So a parallel approach, at least in the short term, gives us freedom not to commit to a particular state structure, or to a particular trade-off between things like rent versus storage fees for filling persistent state.

There are also risks in how these EEs are interpreted and used, and with respect to potential bugs. Someone brought this up this morning: from a layer 1 perspective, the only things that own ETH are validators and EEs, and you as a user own ETH within the context of the account structure inside an EE. But if, for example, there was a very popular EE that exchanges recognized, and someone exploited a bug such that the account structure within the EE thought it had a bunch more ETH, even though layer 1 still thought there were only 10 ETH in that EE, then someone's account that the exchange recognizes as valid might claim more ETH. It's the kind of bug where wrapped ETH might print wrapped ETH, but there's actually not that much ETH backing it. So there are all sorts of complexities in how this concept gets rolled out to users, rolled out to exchanges, rolled out to the various participants and all the accompanying software and interfaces, and a lot of conversations still to have around what these things mean.

What if one of the EEs has consensus issues in the middle of the network, because you cannot get enough signatures? Say for one EE there are two implementations?

The EEs would be written in WebAssembly, so there is no notion of consensus failure around an EE as such; it's just the code that runs. The engine is encoded: the EE gets deployed to the beacon chain, and it's the code that's there.

There's a question right there.

So I understand the risk that if anybody can deploy EEs, some may fall into disuse, and then you have bloat of the state on the beacon chain. There was a proposal that maybe you lock up ETH to deploy one, and over time you may need to lock up more, kind of like rent; but if you actually burn it, then it's going to stay there forever. Maybe that would be worth exploring, because one of the risks you bring up is bloat on the beacon chain from allowing many EEs which may fall into disuse.

Yeah, that's possible, and probably the lock-up model is reasonable, as long as other participants can pitch in to the lock-up and there's plenty of time, because you wouldn't want a very popular EE to just disappear.

The other risk you brought up, wrapped ETH: that already exists.

Yeah, absolutely, and there are risks, like an exchange treating wrapped ETH as being as good as ETH; on their exchange they're taking on risk by doing that. I mean, if exchanges are going to create a separate token per EE, then we should expect an ETH-1, ETH-2, ETH-3, and that would be kind of terrible.

Well, yeah, that would be especially bad given this proposal, where the different EEs can't synchronously transfer to each other.

Yes, but then if there's a bug in one of the EEs, that's how it gets dealt with; maybe tokens should just be handled by very few, very simple EEs.

Are there bounds on EEs? Sorry, I'm just not too aware of this: assuming we could deploy our own, is the execution time of an EE deterministic, is there a bound?

I guess there's a gas limit at that level, so you could deploy an EE that just burns computation, but you wouldn't want to make a block on it, and there would always be a cap on the gas.

So this issue of EEs potentially accumulating on the beacon chain: it sounds like you might just want to go to the next level and have a stateless beacon chain, where the EE comes with its own witnesses. Would validators still be able to compute blocks?
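The lock-up/burn pricing idea raised in this exchange can be made concrete with a toy calculation. The price of one ether per kilobyte is purely illustrative (taken from the example figures used in this discussion), and the function name is invented:

```python
# Toy model: pricing persistent beacon-chain state with a burn-per-kilobyte
# rule, so that total state growth is bounded by the amount of ETH burned.
ETH_BURNED_PER_KILOBYTE = 1.0  # hypothetical price, not a spec value

def max_state_kilobytes(eth_burned: float) -> float:
    """Upper bound on state (in kB) that burning `eth_burned` ETH can create."""
    return eth_burned / ETH_BURNED_PER_KILOBYTE

# At this price, bloating the state by 30 GB (30 million kB) requires
# burning 30 million ETH.
gigabytes = max_state_kilobytes(30_000_000) / 1_000_000
```

The design point is that either mechanism, locking or burning, turns state growth into a capital cost, so an attacker cannot bloat the beacon chain without paying proportionally.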
Well, it's more that we've seen this problem happen before, with state: the idea is that you want these to be very expensive. Locking up bounds state size, if there's a certain amount required per unit of size, and burning also bounds it, so we don't even need to explore an expensive fee model; a fee model would still essentially have to have some bound, and everyone has to sync this component. If we set it so you have to lock or burn, say, an ether per kilobyte, and people decide to burn 30 million to bloat the state by 30 gigabytes, then the ETH maximalists are going to be really happy.

Question about the reshuffling: how often do we need to reshuffle?

These are just the crosslink committees, so they adjust every block.

Will EEs set their own gas equivalent?

That's actually more of a layer 1 question; the gas limit goes with execution.

Will EIP 1559 be in at launch?

I don't know; that's still unsettled. We have all sorts of stuff moving in parallel, and it's moving as fast as possible. We have 64 shards, and ultimately we believe this is a simplifying proposal for phase 1 that will allow us to get things out faster than previously; similarly for EEs, it's a simplifying proposal. The other reason this is a simplification: basically, a lot of the shard chain logic that we've been wrangling with becomes less needed, and a lot of proof-of-custody complexity can be reduced down further, because you would be able to reduce everything to a single kind of fraud proof, where the single fraud proof is just importing an entire shard block into the beacon chain. That's something that was not possible with crosslinks, because a crosslink is just too big to import into the beacon chain.

The last one, and this will be the last one: what will the transaction throughput be for a shard?

By these numbers, I estimated 1.3 to 2.7 megabytes per second of data. If we assume one transaction is, say, 100 bytes, then that's 1.3 million divided by 100, so very roughly 13,000 per second. It depends on the construction: it depends on whether or not you're using a rollup, it depends on what the gas costs are, it depends on what people write.

In no version of the proposal do we not open up EEs. I also just want to speak to the two ends of the range: I don't firmly sit in one camp. An EE could even be controlled by a single key at the beginning, a one-of-one multisig.

I would assume a single EE per shard?

There could be multiple EEs per shard block. When you make a block, you'd probably specify which EE it's for, or which set of EEs, and the block data would be interpreted as such.

What about cross-EE communication?

You can do this; it's likely asynchronous, on the order of a block, in this proposal, but there's definitely room to discuss intra-block communication.

Last question, on timing: what are the consequences for confirmation times? Depending on how we validate, if for a specific EE there's no proposal in a slot, say we're running two EEs, how does that affect confirmation times?

It's a free market, so in general, if say one block can have one EE, there are very high-value EEs with lots of high-value transactions that get picked up more often, while an underutilized EE might only have demand to be processed every handful of blocks. The actual mechanics are there.

But wouldn't that just mean monopolization?

Not necessarily, and that was an extreme case where we have just one EE per block; if you allow multiple EEs per block, then you'd likely fill in the gaps at the end with an underutilized EE.

An actual last question: the question was about confirmation times, so is it proportional to the amount of time it takes to execute that EE?

Not exactly: a proposer, as in eth1, gets to bundle things in as they see fit; it's not a matter of all the other validators voting. But again, if you're in a very underutilized EE, you might have to pay a higher fee to convince somebody to include you.

Some of us were hoping that in the previous proposals EEs could have some kind of cache to save on costs. I was just wondering how this new proposal affects that.

Yeah, this proposal definitely reduces the amount of cross-block caching that you can do, but you get some of the benefits back because the individual blocks get bigger. In terms of having the cache: we usually think of EEs as having a state root, 32 bytes per shard, but we've discussed the possibility that, when deploying an EE, depending on the amount of capital locked or burned, you could specify a state root per shard or maybe some larger chunk of data. The actual mechanics and economics around how these things deploy are still up for debate, but it's something we're definitely looking at.

I think we're going to call it there, we're over time. That was great, thanks everyone.
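As a closing aside, the throughput arithmetic quoted in the last answer checks out as follows. The figures (1.3 to 2.7 MB/s of shard data, ~100 bytes per simple transaction) are the ones stated in the talk; the function name is invented for illustration:

```python
# Back-of-the-envelope check of the quoted throughput estimate:
# data bandwidth divided by transaction size gives transactions per second.
def tx_per_second(bytes_per_second: float, tx_size_bytes: float) -> float:
    return bytes_per_second / tx_size_bytes

low = tx_per_second(1_300_000, 100)    # lower bound of the quoted range
high = tx_per_second(2_700_000, 100)   # upper bound of the quoted range
# 1.3 MB/s / 100 B ~= 13,000 tx/s; 2.7 MB/s / 100 B ~= 27,000 tx/s
```

As the speaker notes, this is only a data-availability ceiling; the realized number depends on the construction, on rollup use, and on gas costs.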