There are some timestamps, so if you do have to leave for something you can still come back and know what time to return, but please treat those times somewhat flexibly, and maybe come five minutes early just to be sure. First we're going to have a lengthy intro on what execution on Eth 2.0 is, and then we have some explanation of a toolkit we have called Scout. You should really stay for these first two talks, because they give you the background on what this is, and Scout explains how you can write any of these environments and how you can test them; so if you are developing, or want to develop, anything on Eth 2.0, these are the bare minimum you should have a look at. Then we're going to go very deep into a lot of the research we have done on execution environments. A big challenge with execution environments, which Casey can explain, is that they are stateless, and therefore we need to transmit to them all the data needed to perform any kind of operation. A big question there is how to efficiently encode that data, and we have a couple of different ways to do it: turboproof is one way, SSZ is another way, and jungle token is a third way. The next three talks will explain these different approaches and compare them against each other, and at the end we're going to conclude with a testnet demo, which I hope will be very interesting. So, Casey, if you want to start, it's your time to do the first talk. Please welcome him, everyone. Casey's slides, please. While we get the slides: regarding Q&A, we're probably going to have some Q&A time after each talk, so please do ask your questions.

When the uncertainty around the timeline and the architecture of Eth2 is driving you a little bit crazy, try being a poor dad. I liked something Joe Delong said; he organized the recent interop, and the other day in a session he said, basically, when you come to the interop you join the cult. Well, if we have a cult, maybe this would be the prayer, and we're praying every day that this chaos will spring forth some sanity and some serenity.

So, when is Serenity? Coming soon, right? Sharding has been coming "soon" since 2015. There's a detailed blog post from 2015 about Casper and Serenity and how it would work, and I actually have to go back that far to understand where the architecture of Eth2 comes from. The main point is that there were two betting cycles, block hash betting and state root betting; this is when we had what was called consensus-by-bet. That was game theory. Then it turned out traditional computer science is more secure and works this out better, so maybe just do something PBFT-inspired, and that became Casper FFG. But the main point is that these two processes are fundamentally two types of games: one is a consensus game, the other is an interactive verification game. They are fundamentally different, because the consensus game is not deterministic and the interactive verification game is deterministic. This realization inspired the phase 1 / phase 2 architecture: phase 1 is the consensus game and phase 2 is the verification game, and ideally the two would be decoupled. As all this was happening, the way the roadmap evolved is kind of messy, but
this June 2018 pivot was really a Hail Mary, so again, we're just praying that this thing is going to work out. As of Devcon last year, the launch plan was: in phase 1 we would launch 1000 shards, every shard would collect lots of data blobs, and probably the data blobs would be filled with zero bytes. So that's phase 1, we collect data blobs, and for phase 2 we really didn't know. In early 2019 the questions about phase 2 were all still open: how will state rent work? Will execution be immediate or delayed? Will all validators execute blocks, or will they just provide consensus on the data availability of the blocks? How will cross-shard calls work? We didn't have answers to these questions.

If you hear the phrase "phase one and done", what it refers to is an ethresear.ch post. It's not really a proposal at all; it was just some ramblings that I wrote and posted, and you probably never would have heard about it unless you are a religious follower of ethresear.ch. But Vitalik responded to it 16 days later (so thanks, Vitalik), and I think that's why it started getting traction as a meme. The philosophy and approach of phase one and done is basically that we just start answering some of these questions. How will state rent work? It won't, because there won't be any state; it's going to be stateless, and I think that's commonly accepted now. It's going to be immediate execution, just like eth1; that's simpler. Will all validators be executors? Yes. And how will cross-shard calls work? They're going to work great, okay? Don't worry.

The philosophy is really: start simple, keep execution minimal, don't try to answer every question, just get something basic working, a basic prototype. So we piggyback on the existing data structures, the beacon state, with code fields and validator accounts; okay, that's what EEs are. Then you execute the code in the shard blocks and you return the state root, which keeps everything stateless. In the post I argued that when you have minimal execution coupled to these phase 1 shard blocks, it's a 10x improvement over just having phase 1 as a data availability engine without execution. So again, start simple and easy. I like the term "phase one and done" because the execution is coupled to consensus; in the old architecture phase 2 and phase 1 were decoupled, and if the execution is in phase 1 then just say that. Anyway, you punt the hard questions, about cross-shard calls and so forth, out to the application layer, because if you're implementing this logic in wasm it's not really in the core protocol, is it? It's flexible: whatever EEs can do, you can implement, like contracts on eth1. So you put those hard questions where researchers can keep circling and whiteboarding, and in the meantime we can start actually building and prototyping; we have something concrete to work with.

Have you heard this word "composability"? It sounds like sharding is going to break composability. This sounds like news from 2015, if you were paying attention. This is from Vitalik's talk at Devcon 1 in 2015; I've just taken the text that was
on his slide and put it up here so you can read it. Okay, this is referring to asynchronous shards; that's what people mean by breaking composability, right, that cross-shard calls are going to be asynchronous. He walked through how this might work with an async call, an async log, an async callback. But then in 2018 he has this other post about how you could do synchronous cross-shard calls. So is Eth2 going to break composability? It sounds like it won't, if we have synchronous cross-shard calls. Oh, and by the way, if you heard there were going to be 1000 shards, actually it might be 64 shards instead.

So, open questions. How will cross-shard calls work? I don't like that people are afraid to say "we don't know". You can ask for directions; sometimes we don't know. The long answer is we have approaches, but it isn't figured out. How will the eth1 switchover work? I just learned a few hours ago that Vitalik posted a new post on ethresear.ch. He doesn't call it the switchover; I was hoping "the switchover" would gain traction since I proposed calling it that, but he's calling it "the transition". Maybe we should call it the rapture. But it may never happen: a few months ago Vitalik was really naming his skepticism that maybe the eth1 chain is going to have to continue on forever and everyone will have to redeploy and rewrite all their dapps. We don't know how it's going to work, but let's try to prototype it and see if we can make it work. When is it? January 3rd, maybe, right? Also, we don't know what the price of ether is going to be next week; you'd be surprised how commonly people ask that. And yeah, that's it for me; let's do some Q&A.

What do we know? Well, we know it's going to use ewasm.

You put down in the slides that for people who want to help, there's a lot of help that's needed. Is the main help that's needed basically, as you say, figuring out how cross-shard calls would work? No, that's the least of it, I think. It's really just getting more people involved, getting more people building EEs. As you'll see in the presentations today, there are a lot of proposals for cross-shard calls, and we just have to build the different ones, and different EEs, and try to get more developers involved. I think that's going to be really helpful, and that's one of the goals here.

Does anybody know what any of these phase 2 proposals are? Anybody know what the proposed architecture is? Nobody? Then I guess it will be kind of hard to continue from there. We were supposed to have some slides showing the architecture, but it was a short time frame to explain all that, and it's kind of hard to go through everything. But regarding what we need help with: Casey, you mentioned that for some of these questions there were answers, but for many of them there weren't, and the same still applies to phase 2. The only things we know right now are just how to execute stuff; some other things are still decoupled from this system. The exact thing (I'm not sure if you want to go back to the slide) is who's going to keep the state and who's going to provide the state, and that big topic is called relayer markets. That's one of the big topics we definitely need more eyes on. And another big topic
is called the fee markets. What all of this means is: how do we get the data to these stateless contracts, and how do we pay each of the parties involved? We have basically a few parties here. At the least, we have the users, who want to do transactions and use the system. We have another party, which we currently call relayers, who are basically the providers that store all of the state for you and are able to package up whatever transaction you send into something the system can actually use. And then these relayers send these packaged things to the block proposers, who are the ones actually running it on Eth 2.0. At each of these steps people have to be paid. So these are the big questions, there are many ways to do it, and we're experimenting with different ways; before we get into cross-shard communication, I would suggest these are the things we have to solve first.

To add a little bit: we thought we knew that wasm code would be run in the shard blocks and not in the beacon blocks, and then about a month ago Vitalik amended the proposal that was effectively the spec, the phase 2 "proposal 2", and added functionality: now we also run wasm code in beacon blocks. So that was like a proposal 3. And then there's the proposal to reduce the number of shards from around a thousand to 64, which I already mentioned; in the Eth Magicians session a couple of days ago it was said there will be more published on this very soon. That's the radical new proposal. So that's kind of what we know: as these proposals roll out, we know a little bit more every time there's a new one.

What's the reason for that? Each of them has its own strengths and weaknesses. The earlier proposal (call it two and a half, or three) provided some level of load balancing and some level of scheduling across the shards, and you kind of treat each of the shards as a pure computing layer. The one that came after that has been more geared towards making cross-shard calls significantly quicker, so you can make a cross-shard call in one block: if you're calling a contract on another shard, you can do that within one block. So each proposal has its tradeoffs, and what you'll realize through what we're talking about today is that one of our goals has been to test out each of these, prototype each of these, and really understand the tradeoffs and the gains on everything. And as Alex said, I think the biggest question still, to date, is who is providing state and how they are paid to do that. We needed to validate some other things first, like whether these stateless EEs could work within the time limits, and that's generally been validated now; so the next step is really beginning to validate that question regarding the state and fee markets, and after that, expanding a lot of the things around cross-shard transactions. But from an optimistic perspective, there is a lot of content, a lot of proposals, and a lot of really promising directions for all of these.

Thanks, I think that's a good segue to the next talk. I just want to reiterate a tiny bit of what you said: what really kick-started all this work was
proposal 2, which was an actual, real thing written down, so we could take a look at it and do something. Up to that point everything happened just on ethresear.ch, in different discussions. It's easy to discuss stuff based on different texts, but it's also kind of hard to know which direction actually makes sense in practice. So this was a big break with that kind of mindset: we took proposal 2 as it is, blindly implemented it, and tried to validate certain aspects, and that implementation is what came out of it. Basically, we took this proposal, implemented it, and started out with just the goal of validating the basic assumptions. The basic assumptions are: there is a limit on the time we can spend processing each shard block, and there are limits on how much data we can supply to these execution environments; so we had to write some code in wasm which takes this data, tries to do transactions, and fits into the time limit. That's what we have done. Once we got really good numbers and had solved these different problems over a couple of months, we started to look at the real-life stuff: how do these things actually get the data, and the fee stuff. We had a long session discussing this on a whiteboard and realized that it's insanely complex, and I think that's one of the reasons we cannot really look at these things in isolation; we have to look at them in a more integrated way, and that was probably one of the things these new proposals were inspired by. So I expect that as we look at more of these problems together, and not just one problem at a time, there are going to be more and more proposals and we're going to get closer to the solution.

So, Scout itself. As I said, it's an actual, tangible piece of code you can interact with; it's not just text. It implements proposal 2. We do plan to take in the changes from the newer proposals, and we also want to come up with our own proposals, but we haven't done that yet; we were really focusing on getting the core, basic things right. It black-boxes most of the stuff we don't care about, and the stuff we don't care about, at least at this point in time, is whatever is happening on the beacon chain. I do expect that, with time, as we get closer to figuring things out, Scout is going to grow a bit more complex, because it will have to take in more things from the different phases, but right now it's pretty simple. The first link is the original announcement URL, where you can read a bit more of the reasoning and how it works, and the GitHub URL is where all the code is.

There are actually a couple of different Scouts, and I'm going to go into that, but the thing we call Scout does a couple of different things under the same name: it is this tool, but it also defines the APIs based on the proposal, it has a testing format, and it has a couple of example EEs; we're going to go into those in a second. The original Scout, which I'm mostly talking about, is written in Rust and it's really small: it sits at something like 300 lines of code, and that's all that was needed to implement proposal 2 in a black-box manner. It uses wasmi, which is a wasm interpreter written in Rust; it's not a compiler engine, it's an interpreter.
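As an aside, a runner around an interpreter really can be this small. Below is a minimal sketch (my own illustration, not Scout's actual code) of driving an EE module with the wasmi crate's older 0.x API; a real Scout runner also has to resolve the eth2_* host-function imports through an import resolver, which this sketch omits, so as written it only handles a module with no imports.

```rust
// Minimal sketch of executing an EE with the wasmi interpreter (wasmi 0.x API).
// Not Scout's actual code: a real runner must also provide the eth2_* host
// functions via an ImportResolver/Externals implementation.
use wasmi::{ImportsBuilder, Module, ModuleInstance, NopExternals};

fn run_ee(wasm_bytes: &[u8]) -> Result<(), wasmi::Error> {
    let module = Module::from_buffer(wasm_bytes)?;
    let instance = ModuleInstance::new(&module, &ImportsBuilder::default())?
        .assert_no_start();
    // Proposal-2 style EEs expose a single exported entry point, conventionally "main".
    instance.invoke_export("main", &[], &mut NopExternals)?;
    Ok(())
}
```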
The way Scout is written, though, you can swap in different engines; this engine point will be important later on. The EEs themselves are wasm bytecode; we've covered that a couple of times already. There's this other Scout, scout.ts, which implements the same API but is written in TypeScript, and that of course uses Node, so it uses a fast compiler engine, and therefore you can do more work within the time budget. A quick question: did anybody see the eth 1.x session with the benchmarking yesterday? So you know from that that we want to use fast interpreters, and V8 is not something you would use, at least not any time soon. So if you use scout.ts to benchmark your EE, if you happen to be prototyping some EEs, the numbers you get there are probably not something you should take seriously; you have to use interpreters to get the actual numbers.

One other interesting point is that, with the Rust version, Scout itself is the thing that executes EEs, but there's also a library for writing EEs in Rust, and you can compile that Rust to wasm. With the TypeScript one, there's this other language called AssemblyScript, which looks like TypeScript but compiles to wasm, and in the scout.ts repo we have a hello world written in AssemblyScript, just highlighting that you can actually write contracts, write EEs, in different languages. And there's another Scout as well, scout.cpp, which is written in C++ and is going to use wabt, the optimized interpreter we have; this is the Scout version you eventually want to use to get the final numbers, which would reflect what's going on on the network.

Okay, so what are EEs? A quick recap: they are basically the contracts on the shards, and they only have limited access; they are pure functions. All they do is: on the input side they get this data blob, which contains the proof of the previous state, so you take that proof and you reconstruct the previous state and you should end up with the same state root. That state root is the other thing you get as an input; it's stored on the shard (well, that's proposal 2 anyway, it might change), so you get the state root from the shard, then you supply all the proofs to recalculate the same state root, and you really have to check that it matches, which means you have a good starting point. Then you also receive, in the same data blob, all the transactions, all the things you want to do; you apply those changes, you calculate the final state root, the post-state root, and that's what you output at the end and that's what gets stored. That's all EEs can do.

So this is the API, and the API in wasm is what's called host functions. Just to explain a bit: this "eth2" prefix here is a namespace, this is the function name, and these are the parameters. So you have a way to get the data, and a way to get the pre-state root and set the post-state root. The deposit stuff I wouldn't pay too much attention to right now, because that might change quite a bit. We also have a debugging API, because why wouldn't you want a debugging API? It's an easy way to get some values out of the EE during execution: you can print numbers, you can print memory, you can print memory in hexadecimal form, and there's one that prints memory but only the printable characters, so essentially it's usable for printing strings.
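For reference, the host interface looks roughly like this when declared from a Rust EE targeting wasm32, together with the trivial "no block data" EE that uses it. The names follow the Scout repository at the time; the exact signatures and import-namespace wiring are an approximation and may have changed, so treat this as a sketch rather than the definitive ABI.

```rust
// Approximate shape of the proposal-2 host interface as seen from a Rust EE.
// In a real build these would be bound to the "eth2" and "debug" wasm import
// namespaces; names and signatures are an approximation of the Scout API.
#[allow(non_snake_case)]
extern "C" {
    fn eth2_loadPreStateRoot(offset: *mut u8);                     // copy the 32-byte pre-state root into EE memory
    fn eth2_blockDataSize() -> u32;                                 // size of the shard block data
    fn eth2_blockDataCopy(out: *mut u8, offset: u32, length: u32);  // copy a slice of block data into EE memory
    fn eth2_savePostStateRoot(offset: *const u8);                   // hand the 32-byte post-state root back to the host
    fn debug_print32(value: u32);                                   // debugging helpers
    fn debug_printMem(offset: *const u8, length: u32);
    fn debug_printMemHex(offset: *const u8, length: u32);
}

// The simplest possible EE: no block data, so the post-state root equals the pre-state root.
#[no_mangle]
pub extern "C" fn main() {
    unsafe {
        let mut root = [0u8; 32];
        eth2_loadPreStateRoot(root.as_mut_ptr());
        assert_eq!(eth2_blockDataSize(), 0);
        eth2_savePostStateRoot(root.as_ptr());
    }
}
```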
And we have a bignum API; that was mentioned yesterday during the benchmarking. It's not fixed at all, but that is the discussion URL for what the bignum API should look like. This is one of the current proposals; it matches what is implemented in the TypeScript and Rust versions, but it may change in the future.

Regarding the EEs we have in Scout itself, we have two examples, the hello world and "bazaar"; I have some screenshots in a moment. We also have a bunch of other examples, but they're not necessarily in the same place, and all of these execution environments are going to be covered today. Here's the example in Rust, if anybody's familiar with Rust. Basically what hello world is doing is: you expect that there's no change and no inputs, so obviously the pre- and post-state roots should match. It's just an example to see how to use the APIs. And here's the same thing in AssemblyScript, and in another language that also compiles to wasm. This is a longer example, and it's the baseline for all of the work we have done, because it doesn't do any kind of compression or any sparse merkle trees; it just takes in the data, hashes it, and that's it. So this is the state, which is a list of messages, and a message consists of just a message and a timestamp. The block data has to supply the whole state every single time, because what you store is only the hash of all the messages, so obviously you need all of it on the input side again. So this is not very useful; it's useful as an example, but it wouldn't really scale, and that's why we have all the later discussions.

I mentioned that you can interact with Scout through a YAML file; this is an example of the YAML file. You specify the code you want to run, the pre-state, the post-state, and the data, and you can also run multiple scripts in the same YAML file; it doesn't have to be a single one. You're going to see these YAML files in the repos for every single EE.

The next steps: I haven't mentioned it yet, but we have worked with the Quilt team, and they have taken Scout to be integrated into Lighthouse to do some kind of testing or simulation. What we're actually thinking about is moving all of that code back, having it in Scout, and having an independent tool for rapid prototyping. We do want to add a few things: maybe multiple engines in a single Scout, of course, and making the developer experience better with better tooling around it. That's what we have on Scout. How are we doing on time? Two minutes, right? So maybe one or two questions.

Okay, go ahead; I'll repeat the question. This is not really a question about Scout but more about the EEs: pre-state roots and post-state roots sound like they are assumed to be hash roots, but is it up to the EE to use those 32 bytes however it wishes? Yes. The question was whether it's assumed to be a hash. The state root is just a hash, and it's not actually limited to 32 bytes; there's a proposal to make it longer, I think 96 bytes or something, but anyway it's definitely less than 256 bytes. What the state root is, is up to the EE; it can do whatever it wants with it. Okay, go ahead, please. Is there a C FFI for Scout? Did you mean writing EEs in C? No, no: can you use Scout from any language I want?
Yes; scout.cpp is written in C++, so that would be the best one to use from C, calling it from C. And there was one more question back there. Just a quick one: the main activity is going to be in the Rust implementation of Scout, and you mentioned the C++ one, which sounds like it's more for benchmarking and getting the most accurate numbers; the AssemblyScript one, I guess, is if you want to work with AssemblyScript; and presumably the actual Rust one is where most of your effort goes. Is that fair to say? So the question was what the motivation is for each of these different versions of Scout, and also the distinction between the languages used to write the contracts versus what Scout itself is written in, since you mentioned AssemblyScript. Regarding how you run these contracts, it depends on personal preference between the Rust and the TypeScript versions, whichever one you like more; but as more of a production system, we expect that scout.cpp, the C++ version, is the one that is going to be used, and the other two are more for prototyping. One interesting fact about the TypeScript version is that you could use it in a browser, because you wouldn't even need a separate environment. But it's really just personal preference which one you use for day-to-day prototyping; scout.cpp would be what you should use to validate that your stuff is actually going to work. Then, regarding the contract languages, AssemblyScript versus writing contracts, or EEs, in Rust or in C: it's again down to personal preference, but each of them has different ways to optimize the code to be efficient, and you'll have a better experience with one or the other depending on the use case. That's actually one of the biggest challenges: figuring out the best way to optimize these high-level languages for wasm. So you aren't necessarily encouraging people to focus on a specific one; just go ahead with what you want to work with right now? Yeah. I think we should be moving on, time-wise, right? So Sina and Guillaume from the ewasm team are talking about turboproof and turbotoken; I'm sure it will be really interesting.

Now that we've seen what EEs are and how we can test and prototype them, we're going to go into a sample token. Our constraint here is that we want a token EE that is compatible with eth1, the eth1 chain that we know and love. The outline of this talk is as follows: first we will go into SMPT, which was our initial prototype of a stateless token, implemented in Rust, as we'll see; then we'll go into a multi-proof scheme for the Merkle Patricia tree, which Guillaume will explain; and then we'll see turbotoken, which is another EE similar to SMPT, but it uses the multi-proof scheme and comes with a lot of optimizations.

So, SMPT is a stateless token compatible with eth1, and what I mean by that is: given the same accounts, it has to produce the same state root, and we also want to enable users to sign transactions with their eth1 private keys. This initial prototype was in Rust and it uses parity libraries in order to maintain user balances. We store the accounts in the leaves of a Merkle Patricia tree, which you can see in this diagram, and the accounts are very similar to eth1 accounts.
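For orientation, the eth1 account shape that SMPT mirrors in its leaves looks like the sketch below; SMPT itself reuses the parity library types rather than this struct.

```rust
// The eth1-style account stored (RLP-encoded) in each leaf of the Merkle Patricia tree.
// A sketch for orientation only; SMPT uses the parity library's account type.
struct Account {
    nonce: u64,
    balance: [u8; 32],      // 256-bit balance, big-endian
    storage_root: [u8; 32], // root of the account's storage trie
    code_hash: [u8; 32],    // keccak256 of the account's code
}
```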
Here are the leaves, and you can see branch nodes, extension nodes, and so on. But note that, as we all know, this is stateless, so this state is not stored on chain; we rely on third-party entities called relayers to store this state and provide this service to users. When users want to send a transaction, they sign the transaction and send it to a relayer, which then attaches the necessary merkle proofs for the sender and recipient to the transaction, packages multiple of these into a block, and publishes it to the network.

Looking from the outside, the EE takes this block data as well as the pre-state root, which is in the shard state, and it has to produce a post-state root. Going inside, the EE first has to decode the block data, and this block data is a list of transactions, very similar to eth1: we have the to address, value, nonce, and signature, and additionally we have the proof for the sender's account as well as the recipient's account. For each of the transactions we verify the signature, verify the merkle proofs, check the nonce and balance, and then update the leaves of the tree for those accounts. After every transaction has been processed, we can finally compute the post-state root.

To evaluate this prototype we simulated 5000 accounts in the state and 70 transactions and ran it through Scout with the wasmi interpreter, and these are the results we got: the block size is 235 kilobytes for the 70 transactions, and the runtime is five seconds, of which four seconds is signature verification. But don't panic just yet; we'll get much better results by the end of this talk.

Now let's go back to the same diagram, and say we have transactions that want to prove these two accounts. As you'll notice, the nodes I've highlighted in red will be duplicated in both proofs, which is inefficient. So if we could have a multi-proof scheme that includes all the necessary nodes, but only once, we would reduce the block size, and with a good algorithm we could also gain efficiency at runtime by batching the verification. Guillaume will now talk about such a scheme.
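As a rough sketch of the wire format and per-transaction flow just described, the block data entries look something like the following; field names and types here are illustrative, and the actual SMPT encoding is RLP-based on top of the parity trie types.

```rust
// Illustrative shape of one SMPT block-data entry: an eth1-style transaction plus
// the Merkle Patricia proof nodes the relayer attached for both touched accounts.
struct StatelessTx {
    to: [u8; 20],
    value: [u8; 32],               // 256-bit value, big-endian
    nonce: u64,
    signature: [u8; 65],           // ECDSA signature; the sender address is recovered from it
    sender_proof: Vec<Vec<u8>>,    // proof nodes for the sender's account
    recipient_proof: Vec<Vec<u8>>, // proof nodes for the recipient's account
}

// Per transaction, the EE then: (1) verifies the signature, (2) checks both proofs
// against the pre-state root, (3) checks nonce and balance, (4) updates the two
// leaves; after all transactions it re-hashes the tree to get the post-state root.
```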
Thanks, Sina; he's coming back later. So, I'd like to talk about turboproof. It used to be called multi-proof, but apparently everybody started calling their own scheme "multi-proof", so Sina came up with a new name for this one. It was invented by Alexey Akhunov, who was using it for a light-client kind of purpose, so it's really designed to make rebuilding the tree fast. There are several reasons we want to use it: it's really bound to the eth1 Merkle Patricia tree, and since there's going to be an eth1 EE integrated into eth2, we figured it would make sense to study it. There are three implementations: the first one is described by Alexey himself, and the other two are the ones Sina and I wrote, in TypeScript and Rust; I'm going to explain how the latter two work.

We start with a tree, the same tree, and imagine you want to make a proof that you know the values of those two leaves. You start by selecting the nodes: if you want those two leaves, the path you have to travel is shown in red, which means everything off to the side will be replaced by a hash. So this is roughly the tree you're going to send over, and then you just take all the nodes and put them in depth-first order. You don't actually need to store the two full nodes here, but they're shown to help understand the representation.

Then, Alexey wanted to use some kind of virtual machine to reconstruct the proof: basically, the instructions for how to rebuild the tree are encoded as a program. LEAF means you have a list of leaves and you pick the leaf at the top of the list. EXTENSION means you take whatever is on the stack and you add it as a sub-node of that extension. BRANCH actually creates the branch node and adds the top of the stack as a child. ADD says: you have a branch node on the stack and another node on the stack, so at least two nodes on the stack, and you make that other node a child of the branch node. And finally HASH: there's also a list of hashes, and it says you grab the first available hash in that list and put it on the stack.

So let me run through a program. This is the program that encodes the whole structural information. You start here; the stack right now is empty; you have the list of hashes and the list of leaves. The first instruction is a LEAF, so you put it on the stack. Another LEAF, so now you have two leaves on the stack (by the way, I should have specified that the stack grows in that direction). The next instruction is a BRANCH, so you create the branch node and take the leaf that was on the stack and make it a child. After that there's an ADD, which takes the branch-plus-child and another node and makes that node a child of the branch node as well, so that's what you get (there's a bit of a squiggle here, but that's how it's generated). Then there's an extension node here and an EXTENSION instruction, so this is what happens. And finally there's a BRANCH, so you create the branch like this; at this point the next instruction is HASH, so you take the hash from the list of hashes and put it on the stack, where it sits independently for the moment; and then you perform the last instruction, the ADD, and it adds the hash to the node (by the way, there's another mistake on the slide: it should be D and C that the hash is coming from).

So it's relatively simple. It has a lot of inefficiency that it inherits from the hexary Patricia tree: it uses RLP, it's nibble-based, and that causes a lot of unnecessary copies and a lot of unnecessary breaks in the key. There's a write-up that is almost ready; I'm collecting feedback and will publish it soon. So far, to serialize the proof we use RLP; protolambda is working on a better way to encode the same tree without the structural information, so there's some work in that direction that's really interesting and that I encourage you to look at when it's published. And with this, Sina is going to tell us what to do with turboproof.
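To make the instruction set concrete, here is a toy, self-contained sketch of the rebuild loop described above. It only reconstructs the tree structure: real implementations attach children at positions given by key nibbles and hash nodes with keccak over their RLP encoding, and the exact operand conventions differ between the write-up and the two implementations, so treat this as one simple reading rather than the actual turboproof code.

```rust
// Toy stack machine for rebuilding a partial Merkle Patricia tree from a
// depth-first list of leaves, a list of hashes, and an instruction stream.
// Structural sketch only: slot indices (nibbles) and actual hashing are elided.
use std::collections::VecDeque;

#[derive(Debug)]
enum Node {
    Leaf(Vec<u8>),        // a leaf value we are proving
    Extension(Box<Node>), // an extension node wrapping one child
    Branch(Vec<Node>),    // a branch node with its known children
    Hash([u8; 32]),       // an untouched subtree, known only by its hash
}

#[derive(Debug)]
enum Op {
    Leaf,      // push the next leaf from the leaf list
    Extension, // pop a node and wrap it in an extension node
    Branch,    // pop a node and make it the first child of a new branch node
    Add,       // pop a node and attach it to the branch node below it on the stack
    Hash,      // push the next hash from the hash list
}

fn rebuild(ops: &[Op], leaves: &mut VecDeque<Vec<u8>>, hashes: &mut VecDeque<[u8; 32]>) -> Option<Node> {
    let mut stack: Vec<Node> = Vec::new();
    for op in ops {
        match op {
            Op::Leaf => stack.push(Node::Leaf(leaves.pop_front()?)),
            Op::Hash => stack.push(Node::Hash(hashes.pop_front()?)),
            Op::Extension => {
                let child = stack.pop()?;
                stack.push(Node::Extension(Box::new(child)));
            }
            Op::Branch => {
                let child = stack.pop()?;
                stack.push(Node::Branch(vec![child]));
            }
            Op::Add => {
                let child = stack.pop()?;
                match stack.last_mut()? {
                    Node::Branch(children) => children.push(child),
                    _ => return None, // malformed program
                }
            }
        }
    }
    stack.pop() // the reconstructed partial tree; hashing its root must match the pre-state root
}
```

Re-hashing the reconstructed root and checking it against the pre-state root is what makes the proof binding; attaching children at the right nibble positions is what the real format's operands encode.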
We've seen how turboproof works, so we can go to our latest prototype, the majority of which was implemented by Casey in AssemblyScript, the language that looks like TypeScript but is totally different and compiles to wasm. Turbotoken has a bunch of differences from SMPT; I won't go into all of them, but one of the bigger changes is that the block data looks different. It's not only a list of transactions anymore: we also include some additional information for the EE, and we sort things in a way that lets the EE process the transactions without many memory copies and without many lookups. I won't go into the details, but you can see how the body looks.

To process each transaction we recover the sender's address from the signature; we get the accounts for the sender and the recipient with a simple indexed lookup from an array; we check the nonce and make sure there are enough funds; and then we update the accounts (the account objects, not the tree, just the account object itself) and RLP-encode them. You might notice that one of the differences here is that we are not doing the proof verification at this point; we do that afterwards. After every transaction has been processed, we hash all of the addresses to get the list of tree keys, and then in a single pass we do a couple of things as we rebuild the tree, similar to how Guillaume explained: we hash the existing leaves to compute the pre-state root, and at the same time, on a different stack, we hash the updated accounts to get the post-state root. Additionally, we also reconstruct the path of each of these leaves to get the keys for those leaves. We want to do this because otherwise I could send a transaction with my own address but then, in the multi-proof, include Vitalik's account, which has much more ether in it; so this is crucial. Notice again that this is all done in a single pass, and it was inspired by Paul's jungle token.

Then, as you saw, signature verification was a major bottleneck in SMPT, taking four out of the five seconds. To remedy that, Casey took websnark, which is a library of optimized wasm code for elliptic curve and snark primitives, adapted it to the secp256k1 curve which eth1 uses, and also replaced the bignum arithmetic with the bignum API, and this all resulted in a major runtime improvement. Another thing we do is optimize RLP encoding and decoding: instead of a generic encoder and decoder, we have specific encoders and decoders for every data structure. This is because, for a branch node for example, we know exactly how it's going to look, we have much more information, so we can encode or decode it with a lot fewer memory copies and less overhead.

To evaluate, we took the same test case, 5000 accounts and 70 transactions, and here are the results. In SMPT, as you remember, the block size was 235 kilobytes; in turbotoken it's 50 kilobytes. For runtime, in wasmi we had five seconds; for turbotoken we ran the EE in optimized wabt and it took 140 milliseconds, of which 105 milliseconds is signature verification, and Casey thinks that can go down another three times. Please also note that this is a lower bound, because currently turbotoken is limited to updating existing leaves; it doesn't add new accounts to the trie or remove them, and in order to do that we'd have to adjust the algorithm, which will incur some overhead. So that's what we've seen so far; I guess we can take questions.

Yes: in that example the trie was pretty small, and I'm curious about the benchmark; was that on a realistically large trie? It was with 5000 accounts in the state, but yes, this is, as Paul actually called it, a pathological example; it doesn't represent a real workload. That's one of the things we want to do as a next step, but it
was also one of the reasons we chose this test case: the constraints that we have, or think we will have, for the EE are around 50 kilobytes of data in the block, so we set everything up so that the data would roughly fit in 50 kilobytes; and for runtime it's likely going to be around 100 milliseconds. Have you considered the recent suggestion that the state, instead of being in a Patricia tree, would be in a sparse merkle tree? Yes, and in fact we'll see that very soon in the next talks; as I said, one of the major constraints for this prototype was that we wanted to remain compatible with eth1.

Okay, if there are no more questions: so far what we saw was just an eth1-compatible token, but of course we would want a fully fledged eth1 EE, and for that purpose, among other things, we will need an EVM interpreter. Yes? Do you have a GitHub link where we can find the source code of this project? Yeah, I think there is; this is the Rust implementation of turboproof. So, for the EVM interpreter in wasm, I invite Hugo to talk about his work.

I'm going to talk about this EVM interpreter execution environment: how it works, a couple of examples, and what still needs to be done. This execution environment is written in AssemblyScript and it runs on a fork of Scout where I added some more host functions in order to process some of the opcodes. The way it works: it defines an EVM stack in the wasm memory, where each element of the stack is a 256-bit element, so it is basically the same as the eth1 EVM stack; it also defines an EVM memory; and then it calls these two host functions to tell Scout in which parts of the wasm memory the EVM stack and the EVM memory live. Then it gets the EVM bytecode and the input data directly from the block data, and once we have the EVM bytecode it iterates over it and interprets each one of the opcodes. The majority of the opcodes are implemented directly in AssemblyScript, but there are some opcodes, for example this one, that instead of being implemented directly in AssemblyScript call a host function; those are basically the opcodes that deal with bignum operations.

Here are a couple of examples. This is a very simple EVM bytecode which first pushes the number one onto the EVM stack, then pushes another number one onto the EVM stack, and then executes the ADD opcode, so we have the number two on the stack; then we store this number two from the stack into the EVM memory, and at the end we return this result, which is the number two, and we write it into the post-state root. Someone asked before whether we can do whatever we want with the pre-state and post-state roots; so yes, we can take this EVM bytecode and put it in a Scout test case, where we put the bytecode and the expected result, and if we execute this test case we get this result, which confirms that what we expected is what we actually got. If, for some reason, I change the test case and say that one plus one equals three, Scout will show something like this: it will show that there's a difference between the execution and what we expected.
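For readers without the slides, the "1 + 1" program described above corresponds to bytecode along the following lines. This is my reconstruction of a minimal program with that behavior, not necessarily the exact bytes shown on the slide.

```rust
// A minimal EVM program that computes 1 + 1, stores the result in EVM memory,
// and returns it as a 32-byte word (which the EE then uses as its result).
const ADD_ONE_ONE: &[u8] = &[
    0x60, 0x01, // PUSH1 0x01
    0x60, 0x01, // PUSH1 0x01
    0x01,       // ADD            -> the stack now holds 2
    0x60, 0x00, // PUSH1 0x00     (memory offset)
    0x52,       // MSTORE         -> write the 2 into memory at offset 0
    0x60, 0x20, // PUSH1 0x20     (length = 32 bytes)
    0x60, 0x00, // PUSH1 0x00     (offset = 0)
    0xf3,       // RETURN         -> return the 32-byte word containing 2
];
```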
This is another example; in this case the EVM bytecode is generated by the Solidity compiler, so we are sending this EVM bytecode as well as the input data, and to get the expected result of this contract I also ran it in the go-ethereum EVM; the result we get there is what I expect to be the result of this execution. So we run this test case and Scout confirms that what we expected is what we are getting.

So what are the next steps, what still needs to be done? The first thing is to actually use Scout's statelessness capabilities, which means that first we need to have a pre-state root, and instead of putting the EVM bytecode directly in the block data we need to put in a list of transactions and a list of proof nodes corresponding to the accounts and contracts affected by those transactions, and we also need to add a post-state root that reflects the result of the changes made by those transactions. Another thing that needs to be done is testing with more contracts; that could be testing with the contracts in the main Ethereum test suite. And finally, an important thing that needs to be done is benchmarking. That's it.

Do you have the context opcodes, like the block timestamp; is that provided? I'm not considering that yet; we need to keep testing. This fits into the eth1-switchover idea: the plan is probably to integrate this into turbotoken, and at that point you would want some kind of compressed form of entire eth1 blocks to be transmitted (you need the proofs as well, of course), and that block data would have the timestamp in one of its fields. Exactly.

Okay, I'm going to tell you about Sheth. The name Sheth comes from "shard ether". It's a token transfer execution environment, just like you saw from Sina and Guillaume and just like you'll see from Paul after me. It's written in Rust, and it provides a few things for you: it provides an execution environment, which is the web assembly binary that you give to the beacon chain and that is executed on the shards; it gives you a CLI tool to build random transaction packages, and the proofs that go along with each transaction package so it can be executed statelessly; and it gives you some testing and debugging tools, because compiling Rust to web assembly is not the most ergonomic thing, and these help streamline that process and the process of running your execution environment through Scout.

When I started building Sheth, the design goals were: make a token transfer environment with a really small web assembly binary, with really small proof sizes to execute on, and make it really fast. It turns out we're still working on those things, because they're not as easy as just doing them. So right now I'm trying to make it a really hackable thing. What do I mean by hackable? I mean that if you're at a hackathon, you can go in, fork it, start swapping out components, and start experimenting with what an execution environment is and does; you can put your own logic in it; and it's hopefully really readable, because it should be fairly idiomatic Rust where speed is not heavily compromised. So if there's one thing to take from this, it's to try forking Sheth and giving it a go. The general architecture, what's happening,
is kind of laid out here. The base layer is Sheth, and that just provides some rails for you to build an execution environment on top of, and each of these components is something you can swap out and replace with whatever you want. The base layer includes the database that you operate on, your multi-proof, which Sina and Guillaume talked about; my implementation I'm calling "imp", because it's an in-place multi-proof, but you could swap that out for a Patricia multi-proof or any other proof format you like. All it has to do is implement a trait with these kinds of functions: you need to be able to figure out what the value of an account is and what that account's nonce is, and you need to be able to add and subtract value and increase the nonce for those accounts. Then there are a few transaction types for execution environments in 2.0: transfer, deposit, and withdraw. Deposit and withdraw are the beacon-chain-related functions, that's how you move ether from one shard to the other, and transfer is how you transfer between accounts. But if you want to fork this and hack out your own thing, this is where you can add and swap in whatever kinds of transactions you want, as long as you update them in the tx interpreter.

So what is imp? It's an in-place multi merkle proof, and it's born from the simple serialize (SSZ) algorithm for merkleizing very large lists. It uses a sparse merkle tree and it merkleizes things in the same way the SSZ spec defines merkleization to happen, and it's optimized to perform reads, writes, and re-rooting in place. In the original iteration of this execution environment I just used a hash table to back all of the nodes of my merkle tree, and I found that the execution environment was dominated by memcopies. That's not what we want; we want to treat this like an embedded environment and take advantage of that, and doing it in place turned out to be more efficient. The citation here is to protolambda, who came up with this method; this is a GitHub link to his repo where he first describes it, and we'll come back to it.

But first, let's do an SSZ review. How does everyone feel about SSZ? Good? Okay, I'm just going to go over how containers and lists work in SSZ, because that's what I really care about here. On the left side is the object we're going to talk about; on the right side is the tree it represents. In SSZ, if I want to merkleize an object that has one bytes32 value, then the merkle root of that container is exactly the value, because it's only 32 bytes, and each node in an SSZ merkle tree is 32 bytes. If we have a container with two elements that are u128s, so only 16 bytes each, we merkleize it as a tree of depth one with two children; because they're only 16 bytes each we could theoretically pack them together into one node like we had before, but for access and proving reasons it's better to split the container out based on each of its members. For lists, you can see it has the same structure even though it now has three elements; the number on the right is the maximum number of elements the list can have, and here it's the same
structure as the container with two elements of the same size, and the reason is that in that left node we're packing two values together, because they total 32 bytes, while on the right we only have one value and 16 bytes of padding. As we go to another container that's a little bit bigger, you can see that the number of leaves is equal to the next power of two of the number of elements in your container: here we have three elements, the next power of two is four, each node holds the container's values padded to 16 bytes (since they're u128s), and the last node is just a zero-padded node.

In Sheth there's the concept of an account, and this is what the account tree in Sheth looks like. Every leaf of my sparse merkle tree is the root of an account, and an account has this structure: the bottom-left two nodes are a bytes48, the BLS public key; then we've got the nonce and the value; and then, the way SSZ merkleizes things, as we saw on the last slide, we've got padding here for the fourth leaf. So if I want to prove an account's value, all I need is these three nodes: that top-left node, the value node, and the padding node (and theoretically we don't really need the padding node, since it's just zeros).

So what does the state tree for Sheth, or a general execution environment like this, look like? The route I've gone down is a sparse merkle tree. I don't know if you can read this, but it says a list of accounts, which is the structure we just saw, and there are two to the 256 of those accounts. What does that look like? It's too big for the slide, so I'll leave it as an exercise to picture in your head, but if we were to draw it, it might look something like this: at the very top of a tree of depth 256 you've got a root, and if you take the left branch 256 times you get to the account with address zero, and if you go right 256 times you get to the account root node with address two to the 256 minus one.

Back to the imp format, the format that lets us do in-place merkleization and merkle operations. It's built from two main portions. The left portion here is the offsets, and the offsets are what let us traverse this hashes object; you can think of the hashes object as a contiguous array of 32-byte values, where some of those values are internal nodes of your merkle tree and some of them are the nodes holding the actual values we care about, like the account balance, the nonce, and so on. We use the offsets to traverse the hashes. I can run through this algorithm, or if people have questions on execution environments and would rather do questions, I can do that instead; do you have a preference? Okay, let's run through it quickly and then do questions.
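A sketch of the shape just described; the field names here are mine, not necessarily the ones used in Sheth.

```rust
// In-place multiproof layout as described above: a flat array of 32-byte nodes
// plus offsets that let you navigate it without pointers.
struct Imp {
    // For each subtree encountered while walking the proof, the number of nodes
    // contained in its left subtree; this is what lets lookups skip directly to
    // the right place in `hashes`.
    offsets: Vec<u64>,
    // All provided nodes (internal hashes and value leaves) laid out contiguously,
    // so reads, writes, and re-hashing can happen in place.
    hashes: Vec<[u8; 32]>,
}
```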
So let's generate the offsets. Right here we've got the offsets to fill in; we already know the hashes, because we've got the merkle tree, and we need to figure out the offsets so that we can later traverse it. Think of a merkle tree of depth three, and think of these numbers as the generalized index, which defines which node we're talking about: you start with the top node, that's one, and then two, three, and so on, in order, all the way down the tree. Say we've got this proof right here, where we're trying to prove node 12, and the nodes we need to provide in the stateless multi-proof to prove 12 are 12, 13, and 7. What we do is bring those down, and this is the order in which those hashes will be stored in our contiguous array of hashes.

Now let's generate the offsets. We start at the very top, and it's kind of a recursive thing: we ask how many nodes are in the left subtree of the entire tree we're looking at, and in this case it's four, because we can just count the number of nodes we're providing in this proof: two, three, four. So that's the first number in the offsets. We continue traversing down to the left, and we ask again how many nodes are in the left subtree from here; we're not providing anything below, we're only providing the one node we need to prove 12, so in that case it's one. We come down to node 2, still traversing left first, and there are no nodes in the left subtree of 2, so we can skip over to 3 and ask how many nodes are in 3's subtree: it's just 12 and 13, so we put two in our offsets. We come down to 6, same question, how many nodes in the left subtree: one. We go down to 12: there are no nodes, it's a leaf. And now we come to 7: we do include 7 in the proof, but it has no children, so it can be inferred just from the offsets we've already come up with. So this is what an imp proof is: the offsets we generated, four, two, four, one, two, one, and the branch we need to provide to prove the node at index 12. (The branch is not actually the branch indices, it's the hashes, the 32-byte values; it's just easier to reason about if I show the branch indices.)

How are we doing on time? Two minutes? Okay. So right now, this is the size of the proofs: I told you there's a tool to generate these whole proof packages, and this is the size I'm getting right now. You can read the numbers, but I would say they aren't super accurate until we do these optimizations. The first optimization is that the offsets are represented by u64s, and I'm never going to provide enough hashes to need all those bits of significance, so I could probably get away with u16 or something. The more important one is that right now I'm including zero hashes, and if you're familiar with the structure of a sparse merkle tree, there are a lot of zero hashes, especially at the lower levels of the tree. So until these optimizations are done, these numbers are kind of just waving around in the air.

I told you how to create the offsets because these are the algorithms you need to look up values and do the merkleization. If you're interested in understanding those algorithms, you can read protolambda's write-up, which he published on GitHub, and I've got two implementations: one in Rust and one in Python, and the Python one is way more readable than the Rust one, as most Python things are.
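As a small aside on the generalized indices used in the walkthrough above: for a single leaf considered in isolation, the nodes needed to authenticate it are simply the siblings along its path to the root, which is easy to compute. This is a sketch of my own, not Sheth code; a multiproof like the one in the walkthrough shares and deduplicates these nodes across several leaves, which is exactly what the offsets encode.

```rust
// Generalized indices: the root is 1 and the children of node i are 2i and 2i + 1.
// For a single leaf, the authentication branch is the sibling at every level.
fn branch_indices(mut index: u64) -> Vec<u64> {
    let mut branch = Vec::new();
    while index > 1 {
        branch.push(index ^ 1); // sibling of the current node
        index /= 2;             // move up to the parent
    }
    branch
}

fn main() {
    // In a depth-3 tree, a lone proof of the leaf at generalized index 12 needs nodes 13, 7 and 2.
    assert_eq!(branch_indices(12), vec![13, 7, 2]);
}
```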
If you want to learn more about Sheth, if you want to fork it and try hacking on it at the next hackathon or whatever, the repository is github.com/lightclient/sheth, and I would check out the hacking.md file; it's a pretty good guide on where to get started if you want to play around with Sheth. Feel free to ping me on Twitter, or just find me at the conference; I'm happy to talk more about Sheth. Thank you. Does anybody have any questions? Okay, I have one or two.

Is it possible, from here, to do more than transfers? I feel like a lot of the framework is there; I personally just don't have time to build more exciting ones, but I've been thinking about it. Really, you just have to go back to the very beginning, this slide (I don't know if I can get there quickly), and it's really just about replacing these concepts, transfer, deposit, withdraw. If I wanted to do, say, a bounties-contract EE, it would just be "create bounty" instead of "transfer", interpreted in some way, and then I would change these, which are like the variables in your Solidity contract; that's roughly the analogy. So it's possible; you can fork it from here and do it, you just need to spend some time thinking about it.

And the second question: you expect that the zero hashes can be fixed and optimized out, something like adding one bit in some sense; are there any expected savings, any early numbers? Not really, but similar to the way turboproof does it with the opcodes, I've come up with a way of compressing the offsets, and I think that can provide a good way of removing the zero hashes. The basic idea is that when you're traversing those offsets, in those diagrams you traverse in the same direction for many different levels, and you should be able to compress that, and by compressing that you would know which ones are zero hashes. But I don't have any numbers on that yet.

So next up is Paul Dworzanski; he's going to talk about his stateless binary merkle tree token, which is a little bit more efficient than what I have so far.

It's a lot of work (it's a one-slide presentation, by the way, for me), a lot of work to write EEs, but I think the question is how to make them usable, and I think this is the base, because it's the stateless model, and I'll explain that in a moment: everything is stateless in eth2. So, how many developers do we have in the room? I'm curious. A few; this is sort of who this talk is targeted at, though there are fewer than I expected. But maybe everyone is interested in how we are going to do stateless, what the throughput is, and how this is even going to work. If we're stateless, we're passing so many hashes in with the call data that the hashes dominate the call data, and is it going to be like one transaction per second? Does this even make sense to begin with? That's what I thought about, and I didn't think it was going to work; I have a little bit more hope now that it might work, but we have to be very careful. For the developers in the room, this is sort of the basis: these merkle trees for persisting state. You want to persist state, but you have to hash it up somehow. We know that there are other accumulators, there are RSA accumulators, maybe some zero-knowledge stuff, and I think those have big use cases, but I think that the
Merkle trees also have big use cases. But we need trees that can hold millions of accounts, not just thousands — millions. That's the big problem, the big challenge: we need breakthroughs, we need efficient Merkle trees, to make the Eth 2 stateless idea even work to begin with. I thought about this for a long time. The two big things are call data size and runtime, and there are trade-offs between them — size-versus-speed trade-offs.

I wrote everything in C, by the way — show of hands, C fans, C programmers? Okay, great. Maybe people aren't interested in writing dapps in C, but I think C is a reasonable language, and I think we have to be efficient when we're writing EEs: everything has to be micromanaged, like embedded systems — limited memory, limited CPU cycles — so we should listen to the embedded-systems people and do everything as efficiently as possible; there are many years of accumulated knowledge in that area.

So, call data size: four million — 2^22 — accounts in the state, 40 accounts in the witness. What's the best we can possibly do? We want to be dominated by hashes and signatures — signatures for the transactions; sorry if your dapp needs more than that — but the goal is to be dominated by hashes. For the tree structure, the theoretical limit is zero percent of your call data, and I can get it to less than one percent, which is the great thing: there's hope. So the best we can possibly do is be dominated by hashes, and this is for 25 kilobytes of call data, which is close to the Eth 1 numbers, though you could double it. It gets a little better because there are two tricks first, then two more tricks being worked on, and then a few more things — we need a couple of orders of magnitude of improvement, one big trick after another, to make this even feasible, but I think it might be possible.

The binary tree is the first thing. Why is a binary tree more efficient than a hexary one? You saw the tree with 16 children per node, and these trees have two children per node — why is binary better than hexary? [From the audience: because with hexary you need all the other siblings on each level.] Yes — sorry to interrupt you — for four levels here you need one, two, three, four hashes; for hexary you need 15 hashes. Four versus 15: a 4x improvement in call data size from using binary. So I think binary trees are a good option — I think that's the best we can do.
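As a quick back-of-the-envelope check on that four-versus-fifteen point — these are my numbers, assuming a 2^24 key space and 32-byte hashes purely for illustration:

```python
# Sibling hashes needed to prove a single leaf, binary vs hexary tree.
DEPTH_BITS = 24                          # illustrative key space of 2**24 accounts
HASH_BYTES = 32

binary_hashes = DEPTH_BITS               # 1 sibling hash per binary level
hexary_hashes = (DEPTH_BITS // 4) * 15   # each hexary level covers 4 bits, 15 siblings

print(binary_hashes * HASH_BYTES)        # 768 bytes of hashes
print(hexary_hashes * HASH_BYTES)        # 2880 bytes -- roughly 4x more call data
```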
So what does the tree look like? There's a paper from 1990 by Katajainen and Mäkinen, and they define the children pattern sequence: on each node there's a label — 1 1 means it has a left and a right child; 1 0 means it has only a left child and no right child. I interpret "no right child" as that child being a hash: each of those hashes stands for a whole subtree, down to depth 22 or 30 or whatever, which we don't need in the witness. This is what the witness looks like: the 0 1 means the left child is a hash and there is a right child. The first bit is a 0 if the left child is a hash, a 1 if it's not; the right bit is a 1 if the right child is not a hash, a 0 if it is. So this node has both a left and a right child, so it's 1 1 — it makes sense: 1 means there's something there, 0 means there's a hash there. Those node labels are the tree structure that encodes it, and that's tiny — less than one percent.

Then we don't need edge labels where they're already implied. This builds down to the address of this account at the leaf: the path so far is 0 0 1 0 0 … 1, and then we have an edge label with the rest of the address (there's an ellipsis that didn't render on the slide). So the call data we have to pass is: node labels, which are tiny; edge labels, which are still small; account data, which can be arbitrary — balance, nonce, whatever CryptoKitty you own; and then hashes — h1, h2, all of these — which dominate, something like 80 percent; and if you do happen to have signatures for transactions, that's kind of big too. For the account data you might need balances or nonces. The good news is the tree structure can be kept under one percent, and there are interactions between all of these.

Another thing is deduplication. Binary is the big win; deduplication I found gives about a 20 percent saving. We don't have to pass certain hashes: if we had just one account you would have to pass all the hashes down, but we can recompute hashes — we can compute this hash or that hash instead of passing them — so with deduplication we save about 20 percent for significant witness sizes; that's what I found in my benchmarks. So that's a big improvement, and binary is a big improvement.

Now the runtime — same setting, 4 million accounts, 40 accounts in the witness: about 20 milliseconds, and it's mostly hashing, which we can improve, so I think we can improve this runtime. And that's just the merkleization, not counting signature verification, which is the other bottleneck — we're going to improve that too. So I guess the goal might be 100 milliseconds. The dapp developers can have hope that this tree overhead will be marginal, almost trivial — one percent, approaching zero. That's the important thing: we're dominated by things that can be improved. Work is being done on hashing speed, and for the transaction signatures it's either ecrecover on the secp curve, or ed25519 is another option with some advantages.

Okay, let me explain this part: what's written here is that I do the pre-order and post-order work together, in one pass, so I merge the traversals, and it's built that way because the call data is passed in depth-first pre-order — that's how we traverse the tree. The reason it's so fast at runtime is that there are interactions between call data and runtime, but we've reached a sort of optimal point: this is a great configuration for both.
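To see why the hashes dominate and the tree structure stays under one percent, here's a rough estimate in the spirit of the numbers above (2^22 accounts, 40 accounts in the witness, roughly 20% deduplication); the exact figures depend on how the 40 paths overlap, so treat this as illustrative only:

```python
DEPTH = 22          # tree depth for 2**22 accounts
ACCOUNTS = 40       # accounts in the witness
HASH_BYTES = 32

worst_case_hashes = ACCOUNTS * DEPTH               # every path pays full price
deduped_hashes = int(worst_case_hashes * 0.8)      # ~20% saved by deduplication
hash_bytes = deduped_hashes * HASH_BYTES           # ~22 KB -- the dominant term

label_bytes = (ACCOUNTS * DEPTH * 2) // 8          # ~2 bits of node label per node
print(hash_bytes, label_bytes, label_bytes / hash_bytes)   # 22528, 220, just under 1%
```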
The node labels work like opcodes — it's similar to the opcode model, but I call them node labels rather than opcodes. The traversal is just a recursive call here, and later on, once we return the hash to here, we do a recursive call there; or, if this was a hash, we would just grab the hash. We know what to do here because we have the 1 1, so we know to recurse left and recurse right; if it was a 1 0 we would recurse left, and when that returns we just grab the hash and hash up. There are a bunch of other options. I'm using C, so I'm just using pointers; I build a stack as I traverse, so the hash returned here lands exactly where I'm going to hash here, and the hash returned there lands exactly where I need it to hash there — memcpys are minimized. All these little things the microcontroller people, the embedded-systems people, know — these tricks for minimizing memcpys — matter now with Eth 2, because we're desperate for runtime and we're desperate for call data, so everything has to be perfect for the dapp developers. So: call data in traversal order, merkleization in the same pass, avoid memcpys — I mentioned the stack — and there are a few other things. I'm only giving you a high-level overview; this is the easy part, but there are details I spent a lot of time on.
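Here's a minimal Python sketch of that recursive hash-up — purely my own simplification: it assumes the witness has been split into a pre-order stream of node labels, a stream of pruned-subtree hashes, and a stream of leaf values, and it uses a made-up LEAF marker instead of the edge labels and account encoding the real scheme uses:

```python
from hashlib import sha256

LEAF = "leaf"   # hypothetical marker; the real scheme uses edge labels instead

def hash_pair(left, right):
    return sha256(left + right).digest()

def compute_root(labels, hashes, leaves):
    """Consume labels in depth-first pre-order: bit 1 means that child is itself
    described by the following labels, bit 0 means its hash is supplied."""
    label = labels.pop(0)
    if label == LEAF:
        return leaves.pop(0)                    # 32-byte hash of the account data
    left_bit, right_bit = label
    left = compute_root(labels, hashes, leaves) if left_bit else hashes.pop(0)
    right = compute_root(labels, hashes, leaves) if right_bit else hashes.pop(0)
    return hash_pair(left, right)

# e.g. a depth-2 witness for the leftmost leaf: labels = [(1, 0), (1, 0), LEAF],
# plus two sibling hashes consumed as the recursion unwinds.
```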
The next slide is adaptive hash length. Why 256 bits — is that a requirement for everybody? No good 160-bit hash has been broken, and even if one is — say a 160-bit hash gets broken — what if we have adaptive hashes, where we re-merkleize with, say, a 176-bit hash? If you submit a proof of a collision, we can re-merkleize on chain and then go for another ten years. That would improve things a lot: 32 bytes versus 20 bytes is something like a 33 percent improvement in the call data. All of these things — this 30 percent, this 20 percent — add up. Another thing is caching: Alexey Akhunov had some blog posts proposing that if you cache the recently used hashes, you can reuse them, and that saves as well. There'll be a worst case, but if the cache holds a lot of hashes and they're reused a lot, then things change: there can be many more signatures, meaning more transactions.

What else did I want to say — the insert and delete are written and ready; the witness generation isn't ready yet. So how do insert and remove happen? We find the neighbors. Say I want to insert a neighbor to this node, whose address is 0 0 1 0 0 — the 1 1 0, the 1 1 0, the 1 0 — it's cut off on the slide, but that's the whole thing. We instantiate a tree, with pointers to the children, as we pass this call data — this part is all done already, I just need to add the generation. We instantiate a node here, with a left pointer to the hash and a right pointer to this, and then we insert by taking the right pointers — this tree structure has pointers to the left and right — we do some bit twiddling to change the node label, and we insert the node. Then when we merkleize, we merkleize the subtree that's actually instantiated, where we set up pointers to children. You can insert a few nodes — for Ethereum it's about eight accounts inserted per block on average, sometimes more, sometimes less — so I think it's going to be trivial: just a bit of update-parent, update-left-child, insert-internal-node, that kind of business-logic fast path. It's still going to be dominated by hashing, I believe. Then speed up the hashing bottleneck, then speed up the transactions — so I think there's hope; that's what I want to say, there's hope for this model.

The other big thing, a huge thing, is this 25 kilobytes: the new proposal I saw for 64 shards talks about increasing call data by a lot — 128 or even 512 kilobytes, half a megabyte. That's huge, and then this deduplication is going to help us a lot more — it won't be only 20 percent, because there will be so many more accounts. So that helps. Every cycle is important, every byte of call data is important — and, you know, C programming is great; I'm writing these contracts in C. So that's it — questions? Or should I ask questions?

Yes: I was a little confused about what you're deduplicating. — So if you have one account in the witness, you'd have to pass all the sibling hashes — neighbor, neighbor, neighbor, all the way down — but you don't always have to, because sometimes you can recompute them; they'll be recomputed on chain anyway. Since you have to recompute up that side anyway, you don't pass certain hashes very close to the root, because they're going to be computed regardless. — So you're essentially constructing a multiproof? — Yes, exactly; I should just use the word multiproof, but maybe some people don't like that word. — Another question: since you're calling it call data size, you're imagining these proofs come attached to every transaction, and then when they get bunched together in a block you do this deduplication, this multiproof construction, over them to get the final one. Great question: who does this merging of the multiproofs into one? — Yes, that could be done; we need a hero — a dapp writer who will write this merging and these best practices. Maybe it'll look like this, maybe it won't, but I don't know how much better you can get when we're dominated by hashes either way. Hopefully some hero invents an EE with all this merging and the perfect tree and the perfect everything. So yes, I don't know.
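For what it's worth, the core of that merging job is small if you think of a multiproof as just the set of generalized indices whose values are supplied — a toy sketch (my own, not anyone's actual relayer code):

```python
def merge_proof_indices(a, b):
    """Union the two index sets, then drop any node that has become redundant
    because both of its children are now supplied (it can be recomputed)."""
    merged = set(a) | set(b)
    return {i for i in merged if not (2 * i in merged and 2 * i + 1 in merged)}

# In a depth-3 tree with leaves 8..15, proofs for leaves 8 and 10 merge as:
# merge_proof_indices({8, 9, 5, 3}, {10, 11, 4, 3}) == {8, 9, 10, 11, 3}
```

The hard part the speakers are pointing at is everything around this: deduplicating the actual hashes, re-encoding the result, and deciding who in the fee and relay market does it.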
But an alternative is that you let the block producers handle it — there will be some block producer, some relay, some fee-market infrastructure, to handle all this stuff. It's a big question, but it's unopinionated: the great thing about Eth 2 is that CryptoKitties or whoever can launch, and some hero will invent the new relay or whatever. It's not going to be "you have to do it this way because the Ethereum Foundation said so"; it's going to be "do it however you want, and please teach us", because I think dapp developers are much better at doing things than we are. — Maybe there was another question? — Actually, I've probably lost some context: what's the definition of call data here? Is it basically the data that carries the proof — a proof demonstrating, say, the Merkle proof of one account — is that its purpose? — Yes. In Eth 1, call data right now is just a string of bytes that you sign and pass along, and it's the data used to process the transaction and update state. So currently the call data doesn't need a proof; it's just "I want to send my transaction to this address" or "I want to send my CryptoKitty to this person". But in Eth 2, with statelessness, you also have to pass in your call data this whole tree structure: the edge labels, the accounts, the hashes — so the call data is going to be much, much bigger. — And in the execution environment? — Yes, but the EEs — the smart contracts — will already exist on chain. If you want some extra bytecode for some custom way of sending your CryptoKitty, you'd have an interpreter on chain or something, and then yes, you can pass whatever you want: you do whatever you want with call data, you just pass it, and invent something amazing — some CryptoKitties 2.0 — and there'll be another bubble, everyone will get rich, and I'll retire. — Do we have time for another? — We'll have Q&A time after the last talks; if it's specific to this talk we'll take it now. — All right, you have another ten minutes or so. — Okay, let's talk now then.

Hello, my name is Jared Wasinger. I'll preface this by saying I'm not an expert on zero-knowledge proofs or rollup, so if I say anything extremely inaccurate, feel free to correct me. That being said — yes, there's a clicker, cool. So, what is rollup? It leverages zk-SNARK proofs to batch transactions off chain and provide succinct proofs on chain. Like many quote-unquote L2 solutions, there are operators and verifiers: operators take transactions off chain and batch them into proofs, which are then submitted on chain and verified. Zero-knowledge proofs are constant sized and verify in constant time for a given circuit, and for rollup this means proofs can be verified in the same time regardless of the number of transactions batched. So scalability is mainly constrained on the proving side. I think the existing rollup systems have been advertised to provide somewhere on the order of one to two orders of magnitude more transaction throughput than the EVM today.
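The on-chain half of that is small, which is the whole appeal — a hedged sketch (hypothetical names throughout; verify_snark is a placeholder standing in for the actual pairing check):

```python
from hashlib import sha256

def verify_snark(proof, public_inputs):
    raise NotImplementedError   # placeholder for the constant-time pairing check

def submit_batch(rollup_state, proof, new_state_root, tx_data):
    """One constant-size proof covers the whole batch, however many transactions
    it contains; the public inputs bind the old root, the new root, and the
    batch's transaction data."""
    assert verify_snark(proof, public_inputs=(rollup_state.root,
                                              new_state_root,
                                              sha256(tx_data).digest()))
    rollup_state.root = new_state_root   # accept the batch's state transition
```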
So why is rollup an interesting application for Eth 2? Well, the data availability requirements are somewhat lower than other on-chain approaches: if we're talking about verifying multiproofs on chain, that's similar to what happens with rollup, but the data availability requirements are lower with rollup proofs. There are no exit games or lockup periods — basically, the operator can only spend your funds if they have your private keys.

We've been using websnark — which, Jordi, I think is your tool, thank you — so, handwritten wasm: wasm generated from a handwritten text format. If we look at some benchmarks — this is for a pairing check — Rust native is actually very comparable to an interpreter, assuming we shim in the big-number operations natively as host functions. Going from left to right: Rust native, 4.2 milliseconds; wabt, 5.7 milliseconds; v8 TurboFan, 7.5 milliseconds; v8 Liftoff, 12 milliseconds — those are compilers. Then, looking at the interpreters, wabt shoots up to 236 milliseconds and v8 is at 733. The point of this talk — I don't have the numbers right now because we weren't able to generate the charts in time, so take my word for it — is that Rust compiled to wasm is, in general, a lot slower than these optimized handwritten wasm binaries. It illustrates that EE development may end up specialized: the skill set for dapp developers versus EE developers will be akin to protocol developers versus non-protocol developers, perhaps requiring cryptography and an understanding of WebAssembly to squeeze as much juice out of these execution environments as possible and get the performance we need for Eth 2. So yeah, that's my talk, thanks for coming, and I'll hand it off to Will.

Alex is just going to talk about some of these sessions while I'm setting up, and then I'll dive into a quick ten-minute demo and explain what we're trying to do by simulating shard behavior, which circles back to the intro talk. — Real quick, I just wanted to mention these two sessions tomorrow, because you're all experts on EEs now: in the morning, at 9:10 in B8, there's a two-hour session on Eth 2.0 phase 1 and 2 developer experience — I think it's going to be a really good session, so if you're interested in EEs and how this is going to work, you should show up — and there's another one at 12 in B8 as well, a one-hour session on minimal execution. Basically it's going to be a fairly flexible panel about the work showcased here, and it will also include Eth 2.0 client developers and some researchers, so it's a really good place to ask questions and discuss this in more depth. So, Will, please go ahead. — Cool, yeah, I'll give a quick ten-minute overview of EEs and this as well. Okay, let me start. Just to summarize everything we've seen tied into this: what we're seeing, from an early perspective, is that we can build these EEs in a stateless way that's performant enough for Eth 2 — that's pretty cool. And as we talked about earlier, now that we've begun to validate that, we have new things
to validate: different models around cross-shard transactions, and also different models around the fee market and the relay market — someone has to be responsible for setting up these multiproofs and refreshing them every block. So we want test grounds where we can show all of that; that's the goal of this work. In general, the purpose is to simulate this system end to end, to begin building simulations around the fee market and the relay network, and to let you write an EE and communicate across shards. The system should also have, as core tenets, that the shards are forkable and that this is configurable, so you can deal with reorgs and respond accordingly.

Validators are simulated using local keys, so there's no entry into a testnet — you're not going to run your own node that joins it. It's more of a testnet that is set up and optimized for you to write an EE against, one that mimics multiple shards and shows a beacon chain running and interacting with those shard chains. What would also be really cool, and what I'd like to have by January, is real-time deployment of EEs: having this running, and as a developer being able to write an EE, deploy it with a click, and see how you can interact with it while the simulation is available. As for what it isn't: we're not dealing with networking, so we're not trying to benchmark any of the networking side — it's all simulated, so you can't start a node and connect to this network — and we're not trying to build any kind of production client; that's not our goal at all. Again, we just want to validate these things and let people have an early foray into this world.

The system should be configurable: you should be able to say how many shards are running, set the forkability parameters, and so on. One of the things is that Vitalik has given three different proposals over the last couple of months, and I think some of his new proposals are really awesome — they all have trade-offs, and there are some really cool things — so if we can test them quickly and make changes, that becomes really valuable. That's the goal of what we're building, and we have a basic functioning system right now, so I'll talk about that, show it to you real quick, and then talk about the vision behind it. Again, it provides endpoints to interact with the block producer and to get historical data — basically what you'd have as normal RPC endpoints. One thing is that this is currently a fork of Lighthouse: we used the beacon chain Lighthouse has, and I wrote a shard chain, and the shard chain interacts with the beacon chain. Right now it's running one shard chain; I think after Devcon I'll expand it to run four or five, and it's fairly trivial — I just didn't want to
break a bunch of stuff right before coming here. So that will be working, and that's pretty cool. I'll just show you — I showed this demo briefly in my talk on the first day. First I'll clear the screen, and then I'm going to start the demo. The first thing we do is fast-forward the beacon chain and bring it to the phase 1 fork epoch: the beacon chain runs alone, without the shards, until you reach the fork epoch, and then the shards start. So now we see the shards running — we're simulating three-second shard blocks and six-second beacon blocks — and in a second you'll see crosslinks being submitted to the beacon chain, finality being established, and that ultimately prunes the fork choice for the shard chain as well.

On this end I'm about to start a client — it's part of what Matt was talking about, Sheth — and this is not only an EE, it's also a binary that builds the proofs we were talking about, so it could be considered an early relayer, or an early state provider. It keeps its own view of state locally, and it generates the multiproofs needed for transactions — to make transfers and check balances. So here we go: if I do a basic transfer, this is actually creating a multiproof for the transaction, which has just been submitted to the block producer on the shard chain. The transfer happened, and you'll see that a new state root is now available for the execution environment that operated there — I'll show a little bit of code in a moment — and we can look at the balance and see that it was updated. So that's really cool. The goal now is that all these EEs being built can start plugging into this, and we want to change it so it's no longer just a fork of Lighthouse: we want to pull it into Scout and have it be an additional tool in Scout that lets you mimic the whole system — and since we don't need the networking side, we can simplify a decent amount.

Here you'll see the code — there's a lot of work left, lots of cleanup needed, lots of things to add — but again, the system works end to end. What's interesting is that the hardest part was not writing the state transitions, and it wasn't including Scout, the runtime; the hardest part of all of this was getting everything — the fork choice rule, the persistent store, all of that — to function in tandem. Anyway, this should ultimately be cleaned up and pulled into Scout, or initially maybe into the Lighthouse repo. Actually plugging Scout and the runtime into the node was surprisingly simple, which is really good news for the client developers. In this case, all we really have is: in the state transition, in process_shard_block_body, we just instantiate a runtime. Right now we're passing in the block body — again, this is an early prototype — and this would really just access the beacon state and the shard state.
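To give a feel for how little glue that is, here's a spec-style Python sketch — every name here (Runtime, ee_code, exec_state_root) is hypothetical, not the actual Lighthouse or Scout API:

```python
class Runtime:
    """Stand-in for the Scout/wasm runtime: runs the EE over a block body
    against a pre-state root and returns the post-state root."""
    def __init__(self, code, pre_state_root):
        self.code, self.pre_state_root = code, pre_state_root

    def execute(self, block_body):
        raise NotImplementedError   # the wasm EE would run here

def process_shard_block_body(beacon_state, shard_state, block):
    # instantiate the EE, hand it the block body (transactions + multiproof),
    # and record the post-state root it reports
    ee = Runtime(code=shard_state.ee_code,
                 pre_state_root=shard_state.exec_state_root)
    shard_state.exec_state_root = ee.execute(block.body)
```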
Then you have functions within your EE that can interact with this. It's fairly simple; again, it will need to be expanded so you can do interactive deployments of new EEs, and some of the logic here also needs to be moved back up to the beacon chain, but it's pretty cool — an early system — and I'm pretty excited about running EEs in a much more realistic, real-world environment. — Fantastic. Two questions. First, what about the open economics — the incentives for users, and also for state providers and so on? — Gotcha. That's something I'm going to start on after Devcon, actually. Quilt has grown, we have a really awesome team and really awesome engineers, so I'll probably migrate myself toward building simulations that plug into this for the state provider network and the relay network; I'll start doing some early research on that, since we really need to validate it. As for the token economics and paying the block producer: again, it depends on which proposal we go with. The proposal that essentially enshrines an EE into the shards — if we go in that direction, it makes my work a lot easier, and I'm actually fairly excited about that one, but each proposal has pros and trade-offs, so it's ultimately about what developers in this ecosystem want, and I think we'll dive into the meat of that tomorrow in these sessions. I don't know if that answered your question, but in the system where the EE is more enshrined, it becomes more of a system of just state providers — how do you pay someone to provide state? If the EE is less enshrined, you need a whole system; it's a bit more complex, because the block producers need some level of trust with the relayers, and there's complexity there, whereas in the new model he's proposed, they don't. That's the high-level overview without spending ten minutes on it. — A follow-up: these token incentives also change a lot of behaviors — state providers and the corresponding validators — so can the simulator help explain those behaviors and interactions? — Yeah, that's the goal, and then to back it with something concrete, but right now it's not well defined yet. I think we need a little time to figure out which proposal we're going with, and some of the research direction that I'll dive into — and that John Adler is going to be diving into as well; John, raise your hand — can also help fuel that. Is anyone interested in relay markets and fee markets and getting involved in that area? If you are, feel free to put your hand up. Okay, awesome — let's talk. Any other questions? Yes. — I guess I'm the odd one out here, but is it feasible to open up an area in memory and say this area is going to be there for the sake of this entire block of transactions'
execution? So that when you do stateless things, maybe there are multiple transactions touching stuff in the same EE, and each tries to construct a tree — but the previous transaction has already constructed part of the tree, and the following one constructs another part, and since they touch parts of the same tree, conflicting transactions in the same block could still go together: the following one sees that someone already constructed that data and just uses it instead of reconstructing the proof. — So, something like a shared or cached partial state? — Yeah. Do you have a caching API under consideration? I think you did some tests with it. — We have a caching API, and I think we did some tests with it, but it's not conclusive. Any other questions? No more questions — bye, thanks all.