Okay, okay, for real this time, final final announcement: coffee will be here at 2:30. All right, everybody listen up. Alexey is doing his presentation now.

Okay, so thank you very much, Hudson. So I'm going to first go through the old and the new material, so that in the later iterations we're not just repeating what we said before, but showing it with the modifications. I want to introduce the recent state rent proposal, version two, which is an iteration on the first one. It's based on various things: on the first version, on some discussions with lots of people, on a proof of concept which has been done by Adrian from the PegaSys team, and on some earlier proposals and things like that. So the main differences: if you saw the first proposal, it had this thing called linear cross-contract storage. That has now been removed, because we currently believe we can emulate it by using the new CREATE2 opcode, which is supposed to happen in Constantinople, and I also published an example of a smart contract which implements that approach for ERC-20 tokens. And then the priority queue for eviction has been removed. The main reason it was there in the first place is that my idea was not to give miners any extra powers beyond what they have at the moment.
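Since the talk leans on CREATE2 producing predictable addresses, here is a minimal sketch of the EIP-1014 address derivation. One caveat: Ethereum uses Keccak-256, and `hashlib.sha3_256` (standardized SHA3) is used here only as a stand-in so the sketch runs without third-party dependencies — the real addresses differ.

```python
import hashlib

def keccak256_standin(data: bytes) -> bytes:
    # Stand-in for Keccak-256 (padding differs from standard SHA3-256).
    return hashlib.sha3_256(data).digest()

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    """CREATE2 derivation per EIP-1014: keccak(0xff ++ deployer ++ salt ++
    keccak(init_code)), keeping the low 20 bytes as the address. This is what
    makes the contract's address a pure function of its inputs, allowing an
    evicted 'cross-contract storage' contract to be recreated in place."""
    assert len(deployer) == 20 and len(salt) == 32
    preimage = b"\xff" + deployer + salt + keccak256_standin(init_code)
    return keccak256_standin(preimage)[12:]  # low 20 bytes

addr = create2_address(b"\x11" * 20, b"\x00" * 32, b"\x60\x00")
```

The key property for the proposal is determinism: the same deployer, salt, and init code always yield the same address.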
So one of the powers we do give them here is control over eviction. But I got myself convinced that it's okay to give them that power, because of the sort of censorship resistance of Ethereum, which was discovered in 2016 after the attempted soft fork following the DAO attack. Then another change is that we are introducing the calculation of the storage size before, and not during, the introduction of rent. And then there are lockups, which I will look into further. And then we only discuss temporal replay protection here. And the rent price is fixed in this proposal, not floating as in the one before, but we can introduce the floating one if it's really necessary.

This proposal is also organized differently. The previous one had six steps, but this one is more of a dependency graph, and this diagram is essentially outlining what the changes could be, from A to M, with the dependencies explained here. The solid lines show what has to happen in distinct hard forks, and the dashed lines are things which could happen in one hard fork. We just discussed in a breakout session what the potential division into forks could be — like, how quickly can we do this? If we really, really push it, then in the first fork you've got the two easy changes, obviously, and then you've got up to the rent on the accounts and the eviction of dust. So after the second hard fork we would see some improvement in the state size by getting the dust accounts removed.
We also introduce the state lockups — you see at the bottom the change E, which means we start penalizing the expansion of storage — and then in a third hard fork we introduce rent on contract storage and also eviction and recovery of contracts. I present the eviction and recovery of contracts together, because there were a lot of questions about what happens to contracts which were removed by mistake, so here it comes as a single package. And then K, L, and M are like the little leaves — auxiliary functions to smooth out some rough edges of this whole thing.

So just to quickly remind you why we need replay protection: one of the changes is essentially the eviction of dust accounts, and by dust account we mean a non-contract account which has zero balance. The problem is that if we remove those, then when the account gets recreated by sending some ether to it, the nonce gets reset to zero, and previously valid transactions become valid again. This change has been proposed by Martin from the Go Ethereum team. It basically means we're adding an optional field into the transaction, called "valid until", so the users will be in charge of protecting themselves: the transaction will only last for something like a few minutes, or whatever the units end up being. In the first change, change A, it's optional — that gives the ecosystem time to get used to the change being implemented, and things like that. In change B it becomes mandatory, so that everybody has to start thinking about what they're going to put in there. They can put an infinite amount if they want the old behavior, but essentially they will be forced to choose a value, and if they choose a value such that the validity ends before the eviction, then they will be protected from replay.
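The validity check described above is simple enough to sketch; the field name and `None`-for-optional convention are my own illustration, not the proposal's exact encoding.

```python
def transaction_is_valid(current_time: int, valid_until) -> bool:
    """Hypothetical check for the 'valid until' field. In change A the field
    may be omitted (None), preserving old behavior; once change B makes it
    mandatory, a user who sets it earlier than the eviction time can never
    have this transaction replayed against a recreated account whose nonce
    was reset to zero."""
    if valid_until is None:          # change A: field is optional
        return True
    return current_time <= valid_until
```

Setting `valid_until` to a huge value reproduces the old "valid forever" behavior, which is exactly why choosing a value before the eviction horizon is what buys the replay protection.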
There were other proposals which were based on changing the nonce after the account is recreated. There are some advantages and disadvantages to that, but for concreteness I just left this proposal in here.

Change C is what I call the net contract size accounting. For things like lockups and rent, we require access to the accurate number of storage items which exist within a contract, which we don't have now in the protocol. The introduction of such accounting needs to be done in two stages, because the blockchain keeps moving — we cannot simply introduce it from a certain block. So first we introduce the net accounting, where each SSTORE now increases the counter when an item is allocated and decreases the counter when it is deallocated. And here we introduce something like a huge number. The reason we need this huge number: first of all, we don't want the storage size to be a signed integer — we want it unsigned. And the second reason is that later on we want to distinguish between contracts where the storage size hasn't even been introduced, because the contract hasn't been modified, and the cases where it has been modified and the accurate contract size has been introduced.
I'm not going to go into much detail here; I'll just show you what change D means, where we go from net to gross accounting of the size. Essentially, we split it into two changes. In block C we start net accounting, so we know that everything after block C is accounted for. The only thing we need to do to get the accurate count is to take the sizes at block C, and we can do that because we can compute them offline: we include in every client implementation what the size of each contract was at time C, and then by adding the two together — the net count and the gross count at time C — we get an accurate count, dynamically. And the huge number here is used, as I said, to avoid signed integers and also to distinguish the different cases.

The important bit here is the notion of the observable storage size, which will be used later on. We try not to introduce any more transaction churn here — we do not increase the number of modifications of the state simply for the sake of accounting. The storage size is only introduced or changed when there are other reasons to modify the account. That's why we need this notion of the observed value: if there is a contract which never changed after block C, we still want to see the correct size for it. So yes, these are the observability rules, which also involve this huge number.

In change E we introduce the storage lockups. The idea of the lockups is: we wouldn't introduce them if we were starting from scratch, if we didn't have any existing contracts. If we simply had nothing, like in Ethereum 2.0, we would probably just introduce rent. But because we have some contracts in the state that some people might want to keep using, without necessarily rewriting them completely...
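One plausible reading of the observability rules can be sketched as follows; the `HUGE` offset value and the field layout are my own guesses for illustration, not the proposal's actual constants.

```python
HUGE = 2 ** 63  # placeholder offset: keeps the counter unsigned and marks
                # accounts whose size field has actually been introduced

def observed_storage_size(counter, size_at_block_c: int) -> int:
    """Illustrative observed-size rule: `counter` is the on-account field
    (None if the account was never modified after block C, otherwise HUGE
    plus the net change since block C), and `size_at_block_c` is the gross
    size computed offline and shipped inside every client."""
    if counter is None:                        # never modified after block C
        return size_at_block_c
    return (counter - HUGE) + size_at_block_c  # net since C plus gross at C
```

The offset means a counter below `HUGE` after deallocations still stays non-negative as an unsigned value, and the untouched-account case remains distinguishable without ever writing to the state just for accounting's sake.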
...we give them that option. Although I personally think they would still rewrite them anyway — and this actually also applies to recovery. I think these three features are only needed to give people the option, but given the cost they will probably decide to start from scratch anyway.

So what do the lockups do, essentially? I know that Andre doesn't like this analogy, but imagine that each contract is kind of like a glass, where one storage item is like one centimeter of height. The height of the glass is the storage size, and you can increase or decrease that size by allocating storage or freeing it. But you also have to pour some water into it: increasing the size of the glass requires pouring in a little bit of water to fill it up. For new contracts it means that whenever you add to the contract's storage, you always have to fill it up — you basically keep the glass always full. And when you reduce the size of the glass, the excess water comes back to you, so you can release those funds. Imagine that the liquid is actually the funds. But what about the accounts which existed before? Let's say after we introduce the lockups we have some accounts which are basically like empty glasses. What are you going to do with those?
Well, we introduce rules where, when somebody changes a value inside — even without changing the size — they still have to contribute a little bit of liquid. So if the contract is used a lot, it will eventually fill up, and after that it behaves just like the new contracts. But if nobody cares about that contract, and nobody fills it up, it will be eaten by the rent and reclaimed very quickly.

If anybody read EIP-1283 — which was supposed to be included in Constantinople, but wasn't — the semantics of this change are actually very similar to what I'm going to describe. Here the semantics of SSTORE depend on three values: original, current, and new value. "Original" is the value of the storage item before the transaction happened; "current" is the value of the storage item before this SSTORE operation happened; and "value" is what we're trying to set. Without loss of generality we can think only about zero, one, and two, where one and two stand for any values which are distinct from each other. In this case there are only 27 possible combinations, and you can describe the semantics for all 27 of them and be sure you've described everything completely. But actually it's even easier than that. When we describe this state transition, we were just thinking about four different states: something I call the ground state; the green one, where we allocated new storage, going from zero to one or two; the red one, where a storage item was removed, going from one or two to zero; and the orange one, where something just changed value. So each state transition is essentially an arrow that goes from one of these circles to another circle.
For example, we go from the ground state to the green state, which means we allocated a new value. And how do we decide where the arrow starts and where it ends? There's a very simple table: the place where the arrow starts depends on only two things, original and current, and using the table you can figure it out. Using the same exact table, but looking at the two values of original and new value, you can figure out where the arrow points to. Here are three examples of original, current, and value, and what the state transitions are. What I'm trying to say is that it's very easy to describe, and you don't actually need the 27 possible variations, but far fewer than that — I think 18 or something.

And here, as I just described with the glass-and-water analogy: you either have a glass which is already full, which is case number two, or the case where the glass is half empty, which is case number one. The reason this is important is that for the half-empty glass, where the contract existed before the lockups...
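The start/end classification above can be sketched with one small function; this is a plausible reconstruction of the table the speaker describes (start depends on original and current, end on original and new), not the proposal's authoritative rules.

```python
def endpoint(original: int, x: int) -> str:
    """Classify one endpoint of an SSTORE transition arrow. The arrow starts
    at endpoint(original, current) and points to endpoint(original, new)."""
    if x == original:
        return "ground"     # effectively unchanged relative to the tx start
    if original == 0:
        return "green"      # a fresh item was allocated
    if x == 0:
        return "red"        # an existing item was freed
    return "orange"         # a non-zero value was overwritten

def transition(original: int, current: int, new: int):
    """The full arrow for one SSTORE: (start circle, end circle)."""
    return endpoint(original, current), endpoint(original, new)
```

Since each endpoint has only four possible states, the number of meaningful transitions is far below the raw 27 (original, current, value) combinations, which is exactly the speaker's point.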
...it's still being filled up. The current users have to pay for the previous users, who did not need to do the lockups. Essentially that's the cost you pay: if the contract is really important and the current users are prepared to do that, then it's fine, we just let it go. And then if the contract is new, or it's already been filled up by the users, it behaves this way: whenever you free up an item it releases some of the ether to the tx.origin — that's where you go to the red spot — and where you go to the green spot, where you allocate an item, you have to lock something up. But in the case of the half-full glass the semantics are a bit different. When you free up an item, for example, it doesn't give you back the ether — it just keeps it, so it's kind of more greedy. And when you change a value, or write a new value, it will take ether from you anyway, even if you're basically changing somebody else's value. This particular thing removes the possibility of dust token attacks, for example, because after lockups are introduced there's no point in adding storage to anybody else's contract — I'll explain why later on.

Here's an example: let's say we are in the half-full glass case, and original equals one, current equals two, value equals one. Looking at the rules from before, we can figure out what the semantics should be in this case. And what I want to say here is that you might have noticed that I didn't try to piggyback on the gas mechanism here, because that's probably, intuitively, one of the first things you'd want to do — like, "well, can't you just use the gas mechanisms for that?"
Well, the problem is that gas behaves differently from lockups. When a transaction reverts, the gas still gets spent — it doesn't get refunded. But the lockups and the releases need to be reverted when the transaction is reverted, right? For example, you don't want to release ether if the transaction is reverted, because you didn't actually free the storage; and in the same way, if you created new storage and then the transaction reverted, you shouldn't really take the ether from the person. But this introduces a safety problem, I think, because now the amount of ether that can be deducted from the tx.origin is only limited by the potential number of SSTOREs a transaction can do, and the origin's balance. So we might need to introduce a new field in the transaction saying "this is the maximum lockup I am prepared to pay". I haven't put it in the proposal yet, because this was a bit of a late thought.

Then change F is a fixed rent on accounts. In the previous version of the state rent proposal we were only introducing it on non-contract accounts, but here I'm introducing it for both contract and non-contract accounts. The important bit is: no eviction here, just rent. I separated eviction from rent in all cases, because what the rent does is simply reduce the balance — it doesn't decide whether the account will be removed or not. To support this we need two constants: the account rent is how much you charge for one account per block, and the code rent is how much you charge per byte of code per block. These would be different values, because code is more stable and probably doesn't cause as many performance issues as the accounts themselves. And obviously precompiles are exempt, for obvious reasons. So this is how it works.
So whenever any account gets modified, it recalculates the rent balance and the rent block, and potentially also reduces the balance. But this particular operation does not evict; it only potentially reduces the balance or the rent balance. Another important piece here is that the rent balance can become negative as a result of this. If there's no more balance left, you basically just start accumulating a negative rent balance — but because this change doesn't include eviction, it's just going to be a negative rent balance.

Then, during the proof-of-concept implementation, Adrian figured out that something needed to be clarified: where exactly does the calculation of rent happen? I looked at the Yellow Paper, and it states that during block finalization there are four things happening, and logically I concluded that the rent calculation should happen between three and four. After four it's too late, because that's where we recalculate the state root, and by then we need to have finished all the modifications; and before three it's too early, because the block rewards can change the rent calculations. So basically you have to do it after three but before four.

Then, after we're done with rent, we can introduce the eviction. In this particular change we only evict non-contracts — and again, the proof of concept asked to clarify what we mean by non-contract accounts, and here I'm clarifying.
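A minimal sketch of the change F settlement, with placeholder constants and an invented account layout; the ordering (ordinary balance pays first, then a negative rent balance accumulates) follows the talk's description, not a finalized spec.

```python
def settle_rent(account: dict, current_block: int,
                account_rent: int = 1, code_rent: int = 1) -> int:
    """Whenever the account is modified, charge rent for the blocks elapsed
    since the last settlement: a per-account component plus a per-byte-of-code
    component. No eviction here — any shortfall simply accumulates as a
    negative rent balance, since eviction is deliberately a separate change."""
    due = (current_block - account["rent_block"]) * (
        account_rent + code_rent * account["code_size"])
    account["rent_block"] = current_block
    paid = min(due, account["balance"])
    account["balance"] -= paid
    account["rent_balance"] -= due - paid     # may go negative
    return due
```

Note that nothing happens between modifications: rent is only materialized lazily, when the account is touched for some other reason, which is why the elapsed-blocks term does all the work.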
It's a specific code hash: if the code hash equals the hash of empty code and the balance equals zero, then the account is deemed to be a dust account and will be evicted as a result of this change. And this is how it happens: the eviction check is performed at the end of the transaction for all the accounts that were touched during the transaction — not necessarily modified, but touched, where touching means that you read their balance or you try to send zero ether to them. At the end of the transaction you have this loop which goes through all touched accounts and figures out whether they need to be evicted. If an account is not going to be evicted, then it's not modified — that's why, in this diagram, we don't modify anything... actually, I need to fix this, because I think this implies that the rent balance and rent block are modified. That's a good point. Essentially the thing is that it does not introduce any change unless the account gets evicted.

Then change H is where we start charging rent for the storage, and now you can see why I needed lockups for the existing accounts: the rent is actually charged not on the entire storage size, but on the difference between the storage size and the lockups. That means if the lockups equal the storage size, there's no rent on storage — the only rent charged in those cases is for the actual account and the code. So any new contract created after lockups will pay a constant rent. But all the empty contracts which existed before, which nobody cares about, will pay rent on the full storage size and will be evicted pretty quickly. It is still possible, if somebody really cares about their contracts, to fill them up with ether and prevent them from decaying very quickly. And again, we need to introduce a third constant here, for the storage.
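The end-of-transaction dust sweep described above fits in a few lines; the state layout is invented for illustration, but the empty-code hash is the well-known keccak256 of empty input.

```python
# keccak256("") — the canonical empty-code hash used to recognize
# non-contract accounts.
EMPTY_CODE_HASH = "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"

def evict_dust(touched: list, state: dict) -> list:
    """Sketch of the change G sweep: at the end of the transaction, walk the
    accounts that were merely *touched* (balance read, or zero ether sent to
    them) and remove those that are dust — empty code hash and zero balance.
    Accounts that survive the check are left entirely unmodified."""
    evicted = []
    for addr in touched:
        acct = state.get(addr)
        if acct and acct["code_hash"] == EMPTY_CODE_HASH and acct["balance"] == 0:
            del state[addr]
            evicted.append(addr)
    return evicted
```

Keeping the check read-only for survivors matters: an account that isn't evicted must not have its rent fields rewritten just because it was touched, which is exactly the diagram correction the speaker flags.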
storage. So yes, it changes the formula for how to calculate the rent due, and this also came from the proof of concept: we need to specify that the values of storage size, lockups, and code size need to be taken at the beginning of the current block, because otherwise you will be over- or under-charging. The calculation of rent only happens when the account is modified, so it could be that for the last hundred blocks it has not been modified; now it's finally modified and you need to calculate the rent. You shouldn't be calculating it on what is currently there — you need to take what the state was at the beginning of the block, and that determines the charge for the last hundred blocks. This is where I refer to the notion of observability, which is described in change D.

Now we come to the eviction and recovery of contracts. This was basically copy-pasted from the previous proposal; it doesn't really change a lot. The only thing which came up in the proof of concept is that we need to clearly define how to distinguish hash stubs from the contracts themselves. It's possible to do, but I haven't clarified it yet. And this is the graphical representation of how the restoration works. The main idea is: when you plan to restore your contract — for example your multisig that accidentally got evicted — you need to figure out what its storage was like at the time of eviction and recreate exactly the same storage in a new contract. You can code up the new contract so that it simply accepts storage items from a certain account, so you recreate the storage items. Then you create a second contract which contains exactly the same code as the one you need. So essentially you need two new contracts.
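The change H rent-due formula, with its beginning-of-block inputs, can be sketched like this (placeholder constants; parameter names are mine):

```python
def storage_rent_due(last_block: int, current_block: int,
                     size_at_block_start: int, lockups: int, code_size: int,
                     account_rent: int = 1, code_rent: int = 1,
                     storage_rent: int = 1) -> int:
    """Rent accrues on (storage_size - lockups), so a fully locked-up contract
    pays only the account and code components. The size, lockups, and code
    size are the *observed values at the beginning of the current block*, so
    a settlement covering many idle blocks is neither over- nor under-charged
    by this block's own modifications."""
    per_block = (account_rent
                 + code_rent * code_size
                 + storage_rent * max(size_at_block_start - lockups, 0))
    return (current_block - last_block) * per_block
```

With `lockups == size` the storage term vanishes, which is the promised constant rent for post-lockup contracts; an old, never-filled contract pays on its full size and decays.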
One of them will contain exactly the same storage your evicted contract had; the second will have exactly the same code your evicted contract had. Then you call this thing called RESTORETO, and it basically merges them together and restores your contract at the same address it used to be at. The reason it has to be this way is that your evicted contract could be quite large — it wouldn't be possible to push all the storage in one transaction. So you need to be able to do it over the course of many transactions, and then you can do the restoration itself in one go. There are some edge cases here — for example, this particular opcode would require the calculation of the storage root within a transaction, which we haven't done before.

Then there's the call-fee thing, which is also copy-pasted from the previous proposal. This is for potential library contracts: if nobody is looking after them, and they want to charge for their own existence, they can introduce this call fee. If it's a popular library that everybody is calling, it might be able to, kind of, immortalize itself by charging a small amount of ether to each caller. The important bit is that this charge doesn't go to the balance, but to the rent balance, so it cannot be recovered.
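A very rough sketch of the RESTORETO merge; the helper names, the state layout, and the toy root function are all invented for illustration — in particular `toy_storage_root` stands in for the real Merkle storage-root calculation that the opcode would need to perform inside a transaction.

```python
import hashlib

def toy_storage_root(storage: dict) -> str:
    """Stand-in for the real Merkle storage root: a hash over sorted items."""
    payload = repr(sorted(storage.items())).encode()
    return hashlib.sha256(payload).hexdigest()

def restore_to(storage_contract: dict, code_contract: dict,
               evicted_address: str, state: dict) -> None:
    """Hypothetical RESTORETO flow: one helper contract has been filled,
    possibly over many transactions, with the evicted contract's storage;
    another carries its exact code. The opcode merges them at the old
    address, provided the rebuilt storage matches the hash stub left behind
    at eviction."""
    stub = state[evicted_address]["storage_root_stub"]
    if toy_storage_root(storage_contract["storage"]) != stub:
        raise ValueError("rebuilt storage does not match the evicted contract")
    state[evicted_address] = {
        "code": code_contract["code"],
        "storage": dict(storage_contract["storage"]),
    }
```

The stub comparison is what makes mistaken or malicious restorations fail: only a byte-for-byte reconstruction of the evicted storage passes.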
It can only be used to prolong the life of the contract. So eventually such a contract can collect enough rent to last for a hundred years or something like that, and then everybody can be sure it's going to be there practically forever. Then the second-to-last one: if you have a contract which doesn't have any way of accepting ether, but you still want to keep it around, you can add ether directly to its rent balance, rather than to the balance, by using a new opcode. And the last one: after we've introduced the lockups, the lockups give a much more straightforward and better reward for clearing the storage, so we can basically remove the storage refunds altogether. And if you decide to go even further and also remove the refund for SELFDESTRUCT, then we can completely remove the concept of refunds, which simplifies the protocol. And that's it — that's the end of it. Thank you very much.

All right, do we have another group that wants to go next? All right, go ahead. If you need help getting set up, just holler.

Hey. Just want to — hey, okay. Can you guys see me? So, just a quick rundown of what we're working on. The code name is Mustekala. We're building a light client, primarily for browsers and environments with reduced networking capabilities and resources in general. It's built around this concept of slices, which are sub-tries of the Merkle state trie — and of the storage tries as well. Then we have what we call Kitsunet, which is a network of light clients that seed that state. Traditional light clients, right now at least, connect to a node and download what they need, but our idea is to actually seed that state across the light clients themselves. It relies on full nodes to provide access to the state, so there are going to be some RPC calls to retrieve state.

So what are slices, again?
They're Merkle sub-tries. They consist of some parts — I can show it in a minute — but basically it's the trie nodes: the branch nodes and the leaves, which are the accounts; there's the EVM code in there as well, and the stem. And the cool thing is that we can identify them by what we call the stem path and the depth, which is basically the first labels of a key in the Merkle Patricia trie; the depth identifies how large that chunk is going to be. We can also use the state root or the storage root to uniquely identify a slice. So we can use just the stem path and the depth — the first labels of the key — and the depth could be anything, 10 for example.

So we can identify a slice by the stem path and the depth, which basically allows us to group them, and then if you have something like pubsub, or multicast, or something like that, you can use those as identifiers to create subscriptions, and then you can, near real time, propagate changes to some subset of those slices.

The Kitsunet network is the light-client network itself. It's again peer-to-peer nodes; they seed the state across these clients. It's built with libp2p — I'm sure people know what that is, but it basically allows you to run peer-to-peer, or some semblance of peer-to-peer, in a browser. Again, we're using pubsub for near-real-time data propagation. Right now there's floodsub, which is what's available on libp2p, but work is being done on creating something a little bit more performant. Yeah — I kind of touched on that already: data propagation is being built on top of pubsub. Slices can be identified and can be grouped.
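The stem-path-plus-depth naming scheme above can be sketched in a few lines; the exact topic format here is an invented illustration, not Kitsunet's actual wire format.

```python
def slice_id(key_nibbles: str, depth: int) -> str:
    """A slice is named by the stem path (the first `depth` nibbles of a key
    in the Merkle Patricia trie) plus the depth; the resulting string can
    serve directly as a pubsub topic for subscriptions."""
    return f"{key_nibbles[:depth]}/{depth:02d}"

def keys_in_slice(keys, stem: str, depth: int):
    """All keys whose first `depth` nibbles match the stem fall into the
    slice — this is what makes a slice a contiguous sub-trie."""
    return [k for k in keys if k[:depth] == stem]
```

A deeper `depth` means a narrower stem and a smaller chunk, so clients can trade subscription granularity against bandwidth.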
They can be created as topics Note subscribe to those topics and they get updates as soon as the new block is generated There's a slice being extracted and propagated for the network a Client is only interested on us upset of this of the slices which are they can be based on user account the Some of the daps and the tokens that are being interacted with and then we can also take advantage of a large amount of this This clients in the network and basically say well, you know if a client can dedicate 20 30 50 megabytes To store a portion of the of the miracle state Then if we have a million clients then we can begin provide, you know several fold redundancy in the network so And we basically arbitrarily assign slices to clients based on their on their ID and some notion of distance so security is Basically, it's based around I was the proof of work. We get a header. We get can verify the seal we can As long as we have we can base it around some some notion of checkpoint and if the For example, we can hack through the checkpoint in the client itself and distribute it update clients every every now and then and updated Check when the basically we we can Using this checkpoint we can we can guarantee that the slices where we're extracting are are correct because they're based on the state or the storage route So they're basically timestamp And yeah, so it's basically Slices Which are sub-trees the kids in that network peer-to-peer notes that Currently is pops up as a protocol to distribute the slices and full notes that are being that are being added that the methods in order to Recreate this and that's pretty much it so We're very interested in knowing what are the changes That are coming to to state management pruning etc etc because obviously it's gonna it's gonna affect us any questions Thank you so much Anyone want to go next presentations? That's a good idea. Is that Cooley wasm team if we do like a Q&A? 
Yeah, so we have a couple of microphones that I'll grab. We'll start — you'll be the first question. — I actually have a presentation to go with it, so I hope you don't mind if we use it to get some visual material, because some of my questions are based on that. We can just keep the slide on; I'm just going to use this one, but I wanted to have something on the screen. So my questions will be along these four lines — I'm going to read them out for you. When I was trying to describe the Ewasm group to other people, and obviously I was a bit short of time, I just put up what I called the unresolved questions, right? And there are four of them: gas metering, memory allocation, interaction with the rest of the EVM state, and interpreter/compiler guarantees. What I would like to do is just go through the four of them and ask what your current opinions are and what the potential ways of doing it are.

So let's start with gas metering. I'm not going to read out the slide, but I'll just ask you to comment on it and maybe describe what you're thinking — you've got the microphone. Okay. Maybe the slide — just tell me if this is incorrect or anything like that. Right, yeah. So, to explain to the audience: as far as I understand from last time, there were two different methods of doing gas metering in Wasm. The first is injection, and the second is upper-bound estimation. Injection means that you're adding some extra register in the code, and it gets incremented at jumps and calls, and then there's an out-of-gas check. And the automatic upper bound is where you're doing some static analysis. Is that the correct description? — Yes, but there should be a third one. — What is the third one? — Not an upper bound, but exact calculation.
So no matter what the input is, the gas calculation is a function of the input and the current state. That might be intractable for a lot of contracts, but it might be reasonable for precompiles. But yes, the whole question of metering is a very interesting one. Obviously, precompiles and Wasm contracts have to be metered — you have to pay gas for them. Currently it's benchmark-based. And yes, the injection way: so at one level, currently the EVM executes opcodes and each opcode uses some gas. We have an optimization — I don't know if it's the best one — where at each basic block, meaning a block of opcodes that are guaranteed to execute in sequence (there are no branches into or out of the block), we count the gas used by that block and inject some code into the WebAssembly to use that amount of gas at the beginning of the block. That's what injection means.

And the automatic upper bound — my understanding of it, and maybe my interpretation is a little different from yours — means you give us an Ewasm contract and we give you an upper bound on the gas use. Vyper has something like this already, but Vyper is not Turing-complete, and in general this is sort of a halting problem: if you give me an Ewasm contract, I have no way to tell you whether it's even going to halt. — Yeah, that's why I said, in the cons, that only a subset of contracts will admit this estimation. — Yes, we would have to restrict things, and the other approach would be even worse, because the number of paths through a given contract can branch exponentially. So only for certain contracts can we do it.
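The basic-block injection idea can be shown with a toy: rather than charging every instruction, sum the static costs of each straight-line block and emit a single charge at its head. Opcode names and costs here are made-up placeholders, not a real schedule.

```python
COSTS = {"add": 1, "mul": 2, "load": 3}  # placeholder per-opcode costs

def inject_metering(blocks):
    """`blocks` is a list of basic blocks, each a list of opcode names with
    no branges into or out of the middle. For each block, prepend one
    ('use_gas', total) instruction charging the block's whole static cost,
    mirroring the injection pass the speakers describe."""
    metered = []
    for block in blocks:
        charge = sum(COSTS[op] for op in block)
        metered.append([("use_gas", charge)] + [(op, None) for op in block])
    return metered
```

Charging once per block is safe precisely because a basic block either runs in full or not at all, so the single up-front debit never over-counts a partially executed path.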
We can do it, but it's hopeless in the generic case. We would like to do the upper-bound estimation for precompiles, but there's a problem: the input of hash functions, for example, can be arbitrarily long — well, obviously limited by gas — so it's not a single upper bound. We don't want to charge the maximum every time; we want to make it a function of the input size. — Is there a specific question, or are you just asking? — No, just asking what your current preferred path is — what do you think you're going to be doing for precompiles in phase one? — The current prototype is not what we want to have in the end; we're still working on it. The current approach is to just do what the regular precompiles do: you get charged a fixed amount, and then depending on the input size you add something, but it's fairly arbitrary. — Okay, so essentially doing the same sort of work that was done for the precompiles in Byzantium. — Yeah. And I'd add a pro for option two: with upper-bound estimation, the gas rule is simple enough that clients can implement the precompiles natively. With the first option, the gas rule is very complex and would basically make it impossible for clients to implement the precompile natively — they would have to import a WebAssembly engine. But if we can extract an upper bound for the gas rules, it gives clients the option to implement them natively, like the existing precompiles. — Does anybody else have a question about this from the audience? Because I've got another three questions. — I thought I would share an observation.
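The "fixed charge plus something depending on input size" gas rule mentioned above for Byzantium-style precompiles has a concrete shape in the Ethereum yellow paper; for example, the SHA-256 precompile charges a base of 60 gas plus 12 gas per 32-byte word of input. A minimal sketch:

```python
# Existing-precompile-style gas rule: fixed base cost plus a per-word
# amount, using the yellow paper's SHA-256 precompile constants
# (base 60, 12 per 32-byte word) as the example.

def sha256_precompile_gas(input_len: int) -> int:
    words = (input_len + 31) // 32   # ceil division into 32-byte words
    return 60 + 12 * words

assert sha256_precompile_gas(0) == 60    # empty input: base only
assert sha256_precompile_gas(32) == 72   # one word
assert sha256_precompile_gas(33) == 84   # spills into a second word
```

This is the kind of simple, input-size-dependent rule that lets a client implement the precompile natively without carrying a wasm engine for metering.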
I heard these guys are the ewasm experts; I just have a lot of background in processors, so I thought I would join that group today, and I learned a useful point — maybe it's obvious to everybody here, but I'll say it in case it isn't. When they say precompiles in phase one, I thought there might be some implication that the wasm system would be required to be embedded in client flows as a result of being part of precompiles. But the way to think about phase one, I believe, is this: right now, people specify the algorithm in a precompile in a fairly generic way, with no requirement from an EIP standpoint of how you specify what is inside that precompile contract. Phase one is not requiring Geth or Parity or anybody else to actually implement any given wasm compiler. Phase one says that any new precompiles must include, as part of their EIP process, a full wasm implementation, and that full wasm implementation will allow a gas mechanism to measure the computational cost, so that it's calculated automatically by the community rather than chosen arbitrarily. And it allows the idea that people creating clients don't have to use any wasm whatsoever: from first principles they can look at the wasm code like a formal description of an algorithm and re-code it in Go or Rust or Java or whatever, or they can use some wasm system to help them with the compiling effort. So maybe that's helpful in this gas-metering discussion: this first phase is really about specifying an algorithm that people can understand, with the gas picked automatically. Was that a
helpful comment? I won't make more like it if it wasn't. — Anybody else have a question? — We can return to that; I just wanted to go through the second one: memory allocation. When I read some docs about wasm — not the actual specification — I noticed that wasm has a linear memory; at the moment, in the first version, there's only one linear memory, and it can be grown on demand, but it's basically consecutive items in memory. So my question is: how are we going to handle this in the ewasm engine included in the EVM? Is it going to be allocated every time we call the engine and then discarded after the call, or is it going to persist between calls somehow? What is your current thinking? And if it's allocated and torn down each time, how is this going to be more performant than the EVM — could it actually be more performant? Because this is exactly what the EVM does at the moment on a call. — Torn down — well, there's a third option: you can zero it out, because there's garbage left over from the previous run, and maybe parts of a previous run didn't end up being used, so you'd have to save that old stuff anyway. I think it makes sense to either zero out or give a fresh chunk of memory. That's what we're doing currently in the testnet: it's all fresh each time. — And do you think this might be a bottleneck in the future? Once you've optimized the compilation and everything, could this be the remaining bottleneck of the ewasm engine? — Yeah, of course this is a bottleneck of any engine; everyone's going to have the same problem.
I think it has to get zeroed out. Otherwise you run the risk of Spectre-like attacks: you overflow or do something, and all of a sudden you might have access to code executing on somebody else's account, and if everyone's running the same system it might cause problems that way. — Anybody want to ask a question or comment? I'm going to shut up after I've asked my questions, so don't worry. Let me just ask a question here. Assuming that there are memory instances and they get zeroed out, used-up memory would go back into a pool, get zeroed, and then get dynamically allocated on another ewasm call — so it's not statically assigned in any way. That's not a very high performance overhead at all; that's how almost any network process works. — Yeah, and the operating system will do most of the pooling anyway. My other comment was that depending on how often the precompile gets called, it may not be a bottleneck — how many precompiles get called right one after the other right now? Probably very few. — Right; actually, we're discussing memory allocation in precompiles specifically. — I just wanted to make a quick point regarding the Spectre attacks: any kind of cache-timing attack, or side-channel attack in general, depends heavily on things like high-precision real-time clocks being available, and that's not going to be part of the ewasm semantics, so this should not be a concern for us. And another point about the performance implications: we're still navigating the performance differences between WebAssembly interpreters and compiler engines.
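The pool-and-zero scheme just described can be sketched simply. This is an illustrative model only — the class name, API, and page size are assumptions, not the actual ewasm engine:

```python
# Sketch of the "zero on reuse" strategy discussed above: a pool hands out
# fixed-size linear-memory instances and zeroes them before reuse, so no
# data leaks between calls. Sizes and API are illustrative.

class LinearMemoryPool:
    def __init__(self, page_size=65536):  # one wasm page = 64 KiB
        self.page_size = page_size
        self.free = []

    def acquire(self):
        if self.free:
            mem = self.free.pop()
            mem[:] = bytes(len(mem))      # zero out leftover data
            return mem
        return bytearray(self.page_size)  # fresh memory is already zeroed

    def release(self, mem):
        self.free.append(mem)

pool = LinearMemoryPool()
m = pool.acquire()
m[0] = 0xFF            # a call writes something
pool.release(m)
m2 = pool.acquire()    # same buffer object comes back, but zeroed
```

As noted above, the OS does most of the real pooling; the point of the sketch is just that "reuse" and "don't leak" are compatible at the cost of a zeroing pass.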
Until we make that leap to dealing with compiler engines — but this is really question number four. So, question number three: interaction with the EVM state. I read some really old ewasm proposals, and there, in order to access some of the Ethereum state while running inside the ewasm code, you would have some kind of external functions declared. This isn't the actual function that was declared — I was too lazy to look it up — but imagine a function which lets you do some sort of SLOAD: it pulls something out of the Ethereum state and delivers it into the ewasm context. The alternative approach would be to not allow any state access from within the ewasm code, but simply provide it as arguments: anything you want to tell the ewasm code about the state, you push in as input, and anything you want to modify afterwards, you take out of the output and put into storage yourself. But this of course makes it difficult or impossible to use ewasm for maintaining large persistent structures. If you want to implement, say, a red-black tree with ewasm, you have a problem, because you'd first have to decide what to push in as input — and by that time you've already written most of the red-black tree algorithm. So what is your current thinking on that? — I read the WebAssembly spec, I implemented it — I'm implementing it again now — and I can tell you that exchanging everything as inputs and outputs would be difficult. The outputs would be difficult because the output is limited to one value. There's a proposal to the WebAssembly spec to allow an arbitrary number of return values, but they're still on their first version.
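The two approaches contrasted above can be sketched side by side. The names `storage_load`/`storage_store` are illustrative stand-ins, not the real ewasm host interface:

```python
# Sketch of approach (a) from the discussion above: the guest calls
# imported host functions to reach Ethereum state, instead of having all
# state passed in as input. STATE and the function names are stand-ins.

STATE = {}  # stand-in for Ethereum storage

def storage_load(key):           # host function imported by the guest
    return STATE.get(key, 0)

def storage_store(key, value):   # host function imported by the guest
    STATE[key] = value

def guest_increment(key):
    """'Guest' code using host imports: read, modify, write.
    A persistent structure (e.g. a red-black tree) can live entirely in
    STATE, with the guest touching only the nodes it needs."""
    storage_store(key, storage_load(key) + 1)

guest_increment("counter")
guest_increment("counter")
```

Under approach (b), `guest_increment` would instead take the current value as an argument and return the new one, and the caller would do the storage writes — which is exactly what makes large persistent structures awkward, as noted above.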
They just want to make it as simple as possible for now, so the output side will hinder it. Certainly we can fork the spec and return arbitrarily many values, and hopefully the WebAssembly spec will catch up to us — this is one of many questions of whether we should fork the spec or not, and for now we're not forking it. As for the input side, an issue is that at compile time — at deploy time — you have to know exactly how many arguments there are going to be, so this would only work for a fixed number of arguments. And if you have a 256-bit value, you would pass it as four 64-bit integers, so you'd have to do it like that. Right now we're not doing anything like that: when you call into a contract, it's all just call data; we're just mimicking the EVM. But certainly it's possible — Gio and I have talked about this a number of times — that instead of contract calls we would have WebAssembly function calls with arguments, and that might be faster than having to get the call data size, get the call data, move things into memory, bring them onto the stack, and so on. So this is a bottleneck, and one of the micro-optimizations I think we're putting off a little bit, but it's a very good point. — So your current thinking is doing it the way where an external function can reach into the state, right? And that's how the testnets work. — Sorry, just an extra comment: one of the interesting parts of the whole wasm model is the way you import dependencies, how you import modules. One of the things you can do is either share an entire memory with all your modules, or import memory from other modules.
So one of the things we're looking at — though it would also depend on how storage happens — would be to do some sort of memory mapping. — Incidentally, this was one of my inspirations when I was proposing linear cross-contract storage: I was hoping to get it more closely integrated with ewasm, but we got rid of that. So, my last question for now. This one is about the interpreter/compiler guarantees. I first really learned about it when I talked to some of you in Prague. You've probably heard a lot of discussion about just-in-time compilers essentially not being suitable for the things that we want, but I want to get your current thinking. I remember we discussed that in phase one we potentially want to implement what I'd call a straightforward interpreter — sorry, not a dumb one — which basically implements the specification one-to-one, and then we put it into Ethereum, and then we have a motivation to work on more sophisticated interpreters and compilers. So, your current thoughts on that? — Yeah, you're exactly correct — — Could I just add a comment? It's not JITs, it's not just-in-time compilers — it's optimizing compilers that are the problem. It's not that it's doing it just in time; it's that it's trying to apply some optimization algorithm that is attackable. There are actually a number of JITs that are not optimizing compilers — they just linearly scan across the code — and that might work pretty well. — Yeah, that's exactly correct. Firefox has a linear-pass compiler that passes through the code one time and compiles it.
It's usually two times slower than the optimized code. You're right that there could be what we call a JIT bomb: you give it a WebAssembly module and it takes a long time — perhaps quadratic — to compile. V8 used to do this: the intermediate representation was some sort of directed graph that had cycles for loops, and if you had a lot of nested loops and a lot of nested control flow, you got this sort of exponential growth in the compile time. So there were pathological examples that took a very long time to compile in V8. Thankfully V8 has shifted away from this, and now they have two compilers: their baseline, which is a single pass, and their optimizing one. First they let the baseline compile it so they can start running, the optimizing one works in the background, and then they swap it in eventually. Yes, it's very important, as you're saying, not to have these sorts of JIT bombs or compiler bombs. Ahead-of-time compilers usually have linear passes, and the optimization levels take a bounded number of passes, but certainly some code compiles faster than other code. — So part of my question is this: I remember somebody mentioned to me in Prague that essentially, at that point, there were no existing compilers that would give you the guarantees we want — first, guarantees on the compilation time, and second, guarantees that the product of the compilation will also not be bomb-able, will not blow up. And that's when I suggested the idea that if there are no good compilers around, we eventually have to write the compilers ourselves. But in order to find the motivation to do this work, we have to make it more certain that this compiler work will be useful. So that's one of the reasons
I really wanted ewasm to be part of this kind of initiative — I see it as a meta-feature, as opposed to a point feature. Even if we put in some really slow interpreter or compiler and it doesn't actually give you any benefits, that event motivates the work on the compilers, which could be performed by a whole range of people. And they will know that their work will be rewarded — not necessarily monetarily, but in the sense that the result of their work will be in Ethereum. — Yeah, this goes into the question of auditing and verification. We have a hundred-and-fifty-page WebAssembly specification. It's written down; I submitted some fixes to it, so there are still some typos and some things being worked out in the WebAssembly spec, but it's good from what I've seen. And certainly, when I was writing my implementation of the spec, maybe there was a bug, maybe I interpreted something wrong — and maybe one of these compilers interpreted something wrong, and someone knows of that edge case. One person who knows one bug in one engine can fork the network. This is a big risk, and when they were writing those engines, the Firefox engineer maybe wasn't concerned about consensus.
They knew they could eventually push some sort of fix in the next version — but we are concerned with consensus. So I think it would be reasonable — when people submit crypto algorithms to NIST, they have reference implementations, usually in C, and some of them are verified — to at the very least audit the implementations, and it would be better if we can verify them: maybe have a computer check our proofs that this spec was in fact implemented in this code. And a C expert knows that overflow of signed integers is undefined behavior in C, so we need a flag for certain things — everything is language-specific. But auditing, definitely, I think we need; verification would be great. Another option is redundancy: one node would run multiple implementations, so you would have, say, these three compilers on one machine executing the code, and then you take a best N of M — if N of them agree, you pass some threshold; or, if there's disagreement, you don't include that transaction. These kinds of ideas. Yes, I think to start, you're right: let's motivate people — we need audited, verified compilers. Do we trust the existing ones? Maybe. But I think this is very important — thank you for selling this; I've been trying to convince my colleagues of this point.
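The N-of-M redundancy idea described above can be sketched in a few lines. The "engines" here are trivial stand-ins for real wasm implementations:

```python
# Sketch of N-of-M redundancy: run the same code through several engines
# and accept the result only if at least `threshold` of them agree;
# on disagreement, return None (e.g. don't include the transaction).

from collections import Counter

def n_of_m(results, threshold):
    value, count = Counter(results).most_common(1)[0]
    return value if count >= threshold else None

# Three stand-in "engines"; the third has a deliberate consensus bug.
engines = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]
results = [e(7) for e in engines]

assert n_of_m(results, threshold=2) == 49    # 2-of-3 agree: accept
assert n_of_m(results, threshold=3) is None  # unanimity required: reject
```

The design choice is the threshold: 2-of-3 masks a single buggy engine, while requiring unanimity turns any single-engine bug into a rejected transaction rather than a chain split.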
So thank you very much, Alexey. — Okay, that's the end of my questions. Any more? — I have one other comment I wanted to send your way, Alexey, because this was a good point we didn't discuss at all in our breakout session, and I kind of wish we had: this looks to me like it interacts pretty heavily with the gas-metering question. From the perspective of whether it's a JIT, or even more in phase two, with the ahead-of-time compiler that Alexey mentioned: doing any compilation step is an investment of time now, on the assumption that you're going to run that code enough times to recoup that cost and more over time. So I think there should be a consideration in the gas cost model: if you're going to do a compilation step, there's a cost associated with it. Maybe the assumption is that you do it every single time, for simplicity — but if you knew it was throw-away code for one-time use and you didn't want to do the compilation, you could just run it in interpreter mode. And the other comment, on JITs: somebody over here made the point that it's optimizing compilers that can get into really hairy situations, spending a lot of time crunching on optimization. The goal with ewasm, I'm hoping, is that we're trying to make a very simple mapping from it to ISAs, so there should not be much in the way of optimizing steps. If we actually talk about the cost of compilation, that's a metric for the time the compiler is assumed to spend generating that machine-level code. — So my question is this: this slide right now suggests that in JIT compilers, optimizing is a fundamentally more dangerous problem than in ahead-of-time compilation — but isn't it more or less symmetric?
It suggests that you cannot have a secure ahead-of-time compiler, but if you use the same linear passes, then an ahead-of-time compiler would work just as well as a JIT compiler, right? — Let me just make sure we're all talking about the same terminology here. "Ahead of time" — I don't take that to mean ahead of submitting a dynamic transaction. Where do we believe the ahead-of-time compilation actually happens? — Yeah, so if we're just talking about precompiles, then we don't have to worry about the compilation time — they're precompiled, so there'd be no attacks on the compilation time. We were more worried about this last year, particularly with the JIT bombs, and we did fuzz-test V8 and found a JIT bomb in two versions of V8. Since some time last year, there's now a V8 compiler called Liftoff, which they claim is explicitly linear-time, and there are WebAssembly compiler engines around Firefox that are supposed to be linear-time. We haven't fuzz-tested those to verify that we can't find any compiler bombs in them, but that's on the to-do list. I think the bigger issue with compiler engines isn't so much the security — whether security against consensus bugs or against DoS attacks — it's having compiler-engine implementations available in the languages the clients are written in. For Parity, there's a lot of good stuff coming out of Firefox that's written in Rust, which Parity could adopt and incorporate into the client. But for the Geth client, for go-ethereum,
there are really no serious efforts at compiler engines — WebAssembly engines — that we can use, so I think that's the biggest blocker right now. — My second question was: if I look at the way the EVM is currently used, I would imagine that ahead-of-time compilation makes a lot of sense, because a contract is deployed once and then used many times. Even if you account for people wanting to grief this by deploying lots of single-use contracts, you could still maintain a simple counter that says: hey, the tenth time this contract is called, we ahead-of-time compile it and store the architecture-specific instructions instead of the EVM code. Obviously for precompiles this makes a lot of sense, but even for an ewasm chain, do you think it will be mostly ahead-of-time compiled rather than just interpreted? — Yeah, we'll see. One of the ideas was actually to compile one of the WebAssembly compilers to WebAssembly and then run that in a WebAssembly interpreter. We already have a WebAssembly interpreter written in Go that perhaps could run the ahead-of-time compiler at deploy time and spit out some machine code that Geth could use more directly. You're right that sometimes it might be better to compile things ahead of time. I think that's up to the implementation.
We're only here for writing down a spec, and you can execute things however you want — that's the point. But yes, I agree with you that it might be wise to ahead-of-time compile everything, or most things. — The only caveat I would add applies if you were trying to include the cost of compiling in the gas cost explicitly. Because no matter what, you can't get away from it: compiling has some fixed upfront cost and then a lower runtime cost over more iterations, whereas interpretation is zero upfront and then a much steeper curve, and the two cross after some number of calls. I don't know that we'd necessarily want to mandate ahead-of-time, and this is a phase-two question, so it's kind of far off and we'll have much better data by then. And I don't know that it's really JIT versus AOT — I don't think there's going to be Java-style JIT here, where you have really big code and you're compiling parts of it. It seems like you're going to compile a contract in one lump, one time, and then use the compiled machine-level version after that — or before that, potentially, use the interpreted one. So, as somebody was saying here, you could explicitly do it at instant zero — runtime zero, meaning the first call triggers it — or you could set a counter. It could even be something the contract writer specifies: they have advance knowledge that it's only going to be a one-time-use contract, so don't bother; or they could set a threshold or something.
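The interpret-versus-compile trade-off just described can be written down as a tiny cost model. All constants here are made up for illustration; only the shape of the curves comes from the discussion: compiling pays a fixed upfront cost `C` for a cheaper per-call cost, interpreting pays nothing upfront but more per call, and they cross at `n* = C / (c_i - c_c)`.

```python
# Illustrative cost model for the crossover described above.
# c_i: per-call cost when interpreting; C: one-time compile cost;
# c_c: per-call cost when running compiled code. Constants are assumed.

def total_interpreted(n, c_i=100):
    return n * c_i

def total_compiled(n, C=1000, c_c=10):
    return C + n * c_c

def break_even(C=1000, c_i=100, c_c=10):
    """Call count at which compiling stops being more expensive."""
    return C / (c_i - c_c)

assert total_interpreted(5) < total_compiled(5)    # few calls: interpret
assert total_interpreted(20) > total_compiled(20)  # many calls: compile
# break_even() here is ~11.1 calls
```

A counter-triggered AOT scheme like the one proposed above is just an online guess at whether a contract will pass its break-even point.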
So to me, for phase two, it seems like a bit of an open question. Just my opinion. — Yeah, I agree. — Sure, please. — Okay — I wouldn't trust the person to actually tell you the truth when it comes to declaring it will work only once, "you don't need to compile it ahead of time because it will only be a single-use contract" — because then they can just lie and say it will be a single-use contract. So what I wanted to say is that you can indeed try to learn from the usage and then compile. — I'll just add a comment on that. Ahead-of-time here means when the client is executing the contract for the first time, essentially. But before that, when you make the deployment of the contract, it's rational to run an optimizer and deploy optimized wasm code which has already gone through constant propagation and the vast majority of tricks you can do at the intermediate-representation level. So the only optimizations a wasm engine really has to do are at the machine-code-translation level — register allocation and things like that. A lot of the heavy lifting, all the control flow, has already been done, so your returns on optimizing are not necessarily that great either, whether you choose a JIT or ahead-of-time — ahead-of-time being execution time. — Has it been ten minutes yet, or do we still have time for one more question? Is there anyone? Okay, cool. Thank y'all. Are we doing any more presentations? — I've got a quick one; I made it five minutes ago. And by the way, we're not going to be using the podium mic anymore; we'll use the handheld mics, because the live stream doesn't pick up the podium mic very well for some reason. — No problem. Normally I'm good with this kind of stuff, though.
Oh well, my laptop needs to be on, that's one thing. I did that thing where I woke it up and then hit the power button, so it went back to sleep. This will be a second — stand by. I think it's fixed now; my computer was asleep. All right, everyone judging me for using Windows, say so. All right, there you go. Where's the start? Oh, there it is. Okay, so this is about talking and writing and communicating your ideas to the Ethereum community. It's three or four slides long. I'm Hudson, hello. So here's the problem: the Ethereum 1x stuff is complicated, and some of us are more builders than communicators. So whenever we try to bring things to the community, we're like: let's dump this 30-page PDF document, and then everyone give me feedback. No offense, Alexey — just saying it happens. Then there are some people who give feedback, which is great, but it's usually fewer than ten of them, and it's usually not the people who are using the Ethereum network — who feel like they have a say in things, and who do have a say in things. Things that are obvious to us are not always obvious to others, and whenever we come up with these ideas, we're not always thinking about how they'll look from the outside. Words like "state rent" sound really scary; some people have suggested "storage maintenance fee" instead. Earlier we were joking that we were going to call it tipping, so it could be a little easier to swallow — hey, you know how you tip your Uber driver? You have to tip the blockchain. But we're probably not going to do that. Also, there are a lot of trolls — trolls from different blockchains, trolls from Ethereum, Ethereum Classic, all kinds — and they're going to try to mess with your plans as well. So what do we do? First thing — I put "write a simple blog," but really it can be any kind of write-up, on Medium, or a gist, or something.
Something that's accessible to normal people — people who aren't very technical, I should say; not that we're weird or anything. And when I say simple, I mean don't go into the technical details in that one; link to your technical document from it. Just say: I want to do X. I think it's very important. I have run tests; the numbers are in the document. I think we need to do it by this time, and I understand the other side of the argument, but I'll be addressing that in the technical document. If a blog post covers just that much, it makes people feel really comfortable with the idea; it makes people feel they can connect to it and understand it. Then: get feedback. Definitely listen immediately, wherever you post it, to the people saying "this is confusing" — and to the people who aren't giving any feedback: if you don't get much feedback, that might mean your idea is still too complicated, or your presentation of it is. Spread it on Medium, Reddit, Twitter, the Ethereum Magicians forum, Gitter — everywhere you can. I'm going to bring up Alexey again: he does a great job with his rent presentations of always going to Twitter. I see it on Twitter.
I see it on Reddit, I see it on the Ethereum Magicians forum and on Gitter whenever he comes out with a new rent proposal document. Spreading it out like that makes sure it reaches the widest audience. We don't have as many signals, and we don't have as many platforms, as we really should right now — I just listed the top five and can't think of any more; maybe Telegram, if there are some really good Telegram groups. The next one — and this one's overlooked a lot of the time — is: get support. Endorsements is the better word — from core developers and from people in the community that people really trust. If you have most of the people from the core dev meetings commenting on something, it's going to get through, even if it would normally be controversial: if enough people who trust the core developers hear that the core developers trust the idea, it becomes much easier to get through cleanly. Some ideas are really simple and get through easily — like when we switched to Snappy compression between Geth and Parity. There was a little bit of discussion amongst people about that, but nothing major, and the community didn't care at all, because it was too technical and didn't really affect them except behind the scenes. But a lot of the stuff we're talking about, like state rent and some of the pruning initiatives, does affect the community, so they need to know how they're affected.
They need to know what's going to happen if we go through with this change, and the reasons we're doing it — and those have to be really solid reasons. So getting endorsements from really smart and admired people in the community is super important. Iterate: if it doesn't work the first time, try delivering the message in a new way, a different way, with more endorsements, with more people talking about it. And then finally: win. Which could mean that your idea works, everybody wants to do it, and we implement it — or that the community doesn't want to implement it. If there's one thing you take away from this, it's that we can't do anything without community support. Otherwise we're a cabal, and we don't want to be a cabal of developers. We want to actually listen to the community, no matter how non-technical they are, because they're the ones we're catering to — the users, the stakeholders in the ecosystem. They're the ones who matter in all this. So if they say: rent's a bad idea, I don't want to pay for it, and I will trade that off for the network being slow and unusable — then that's the trade-off they're choosing. Getting that support, and getting the signals to gauge that support, is one of the hardest unsolved problems, but I think there are ways to accomplish it. I'll stop here while we get some comments from the audience. — I just wanted to add a quick comment — maybe less relevant for people here, but relevant for the community at large. When proposals are brought to the table, you always have to keep in mind that all the client dev teams are extremely busy and just don't have a lot of time. So if there's a proposal that takes a lot of work from a client, we might love that proposal and really want to do it, but on the core dev calls it'll be: yeah, that sounds great, but no, sorry. Getting an idea of what it takes to implement your proposal, and putting in the work to show that you know
what you're talking about and how to get it done, even if you don't do all the code yourself, goes a really long way toward actually getting it done. Even if that just means you interact with the client developers and can guide them on how to implement it.

Thanks, that's a very good point. Anybody else have comments or questions? Anybody? Well, cool. My last slide has another picture of a doge in a doge costume. Thank you, everybody. Does anyone else have any presentations? Wonderful. Oh, that's awesome. Do we have enough time for two, Alexey, or whoever? There was a two-hour breakout period that doesn't really need to be two hours, I think, so we can have presentations go a little bit over into the breakout period. Do at least one, that's a good idea. I'm curious about what you guys are up to. Oh, they're clearing out the food, and coffee has arrived. The food might already be gone, but someone's grabbing it. There are cookies. Brooke has confirmed: there are cookies. Yeah, a five-minute break and then we'll come back. Sorry, we're going to come back about five minutes from now. We really need coffee.

Check, check. All right, let's head back in and take a seat, or a stand, or whatever you want to do, because we're about to start. Do you know how to do the extend-screen thing on a Mac? Wonderful. This will be the last presentation; then we're going to go over the objectives again, and then we're going to have some breakouts, is my understanding. Sounds good. Leave it open, I'd say, and I'll definitely use this handheld. All right, we're going to get started here. We have Zach, who's going to talk about Whiteblock stuff. Can we get a chant going? Zach! Zach! Zach! Okay, good. Thank you. I'm getting you a cookie cake.

All right. Hey, I'm Zach Cole, CTO of Whiteblock. I want to talk about simulation and testing.
I think this is going to be applicable to a lot of you folks, so I'm going to kind of breeze through it, because I think giving you a practical demonstration is better than reading off of slides. Hold on, let me... okay, cool.

All right. So first I want to clarify the difference between a simulation and an emulation. A simulation is a mathematical model, and it's really only as good as the data sets you provide, so it's hard to account for things you can't predict or don't know about. If you have a good enough data set, you can run a lot of simulations, and it's fast and efficient. An emulation is more of a functional model that can actually replace systems. With a simulation you're just mathematically modeling different processes; within an emulation you're actually replicating those processes and performing them. It's practical; the system is actually functioning. That's good for acquiring large data sets that can be highly accurate, depending on the setup, and it's more indicative of real-world performance, though it can take more time.

So, the world is a big place and there's a lot going on. What I'm presenting for acquiring these data sets is similar to what we're already doing with ethstats: everybody provides their own nodes, we're collecting data, it's cool. But we don't have granular views of this data, so we don't know how accurate or valid it is, because we don't control those nodes and we're not aware of all the environmental conditions that produced that data. For instance, one thing we were talking about was: how do we calculate block propagation time? Does anybody know? Does anyone have a practical answer for that? Okay, all right.
I was just making sure. So what I'm proposing is that we set up nodes in different regions, and we control them globally. We can deploy them on the cloud or wherever; they're pretty much just light clients that passively receive data and write to a shared database, something like Kafka. That allows us to understand, at a granular level, exactly what's going on within mainnet, so we can acquire the relevant data sets. Does that make sense to you guys so far? That's for simulation purposes, because ethstats is only going to take you so far when we don't really know the conditions. So that will let us run better simulations.

Then on the emulation side: at Whiteblock we've developed a testing platform that lets us provision multiple nodes running whichever client you choose and configure the network links between them. Each node exists within its own VLAN and is assigned an IP address, which provides logical separation between the nodes, each running independently of one another. We can then configure the network links between those nodes with packet loss, latency, or bandwidth constraints. So we can actually replicate a live, functioning network that is highly accurate, because we can observe how the client performs and how it responds to different environmental conditions. And we automate processes: we're generating transactions, and they're real transactions because the network is actually running the Geth client, or whatever client we want. We're automating all of that, so we can test things like TPS and what effects network latency, packet loss, or different bandwidth constraints have on TPS.
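As a hedged sketch of how data from those passive regional observers could be turned into a propagation metric (the function and data here are illustrative, not part of any existing tool): each observer records which block it saw and when, and the spread between the first and last observer gives a crude lower bound on network-wide propagation time for that block.

```python
from collections import defaultdict
from statistics import median

def propagation_spreads(observations):
    """observations: iterable of (region, block_hash, arrival_ts_seconds).

    Returns {block_hash: spread_seconds}, where spread is the time between
    the first and the last observer seeing the block. With only a handful
    of observers this is a lower bound on true propagation time.
    """
    arrivals = defaultdict(list)
    for region, block_hash, ts in observations:
        arrivals[block_hash].append(ts)
    return {h: max(ts) - min(ts) for h, ts in arrivals.items() if len(ts) > 1}

# Hypothetical sample: three observers in different regions see two blocks.
obs = [
    ("us-east", "0xaa", 100.00),
    ("eu-west", "0xaa", 100.35),
    ("ap-east", "0xaa", 101.10),
    ("us-east", "0xbb", 130.00),
    ("eu-west", "0xbb", 130.20),
]
spreads = propagation_spreads(obs)
print(round(spreads["0xaa"], 2))
print(round(median(spreads.values()), 2))
```

In practice the observations would stream out of the shared log (Kafka, in the proposal above) rather than sit in a list, but the aggregation step is the same.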
That's a pretty high-level example. We can also implement forking conditions, so we can test consensus and observe what happens within the network when the blockchain is segmented and forked. And we can do a bunch of other stuff. It's an actual full mesh network, so it's real.

I wanted to go over some tests we ran earlier. First we started with some basic stuff: what effect does latency have on miner profitability? We set up a test network of twelve nodes, divided into two groups. (You aren't seeing the whole thing; sorry, I'm just going to show you like this, if you don't mind.) In the control group, each node had equal computational resources and no latency was applied. In the test group, we applied incremental latency to each node. We observed the wallet balance of each node after a period of mining, and that's what we used to validate our hypothesis. The result at the end of the test was that the control group had, on average, a 25 percent higher balance than the test group.

What are the implications? Latency increases block propagation time; higher block propagation time reduces transactional throughput because of an increased uncle rate; and that weakens the absolute security of the network, because not all of the hashing power actually goes toward securing the network; some of it goes toward mining uncle blocks.

Then, at EDCON last year I think, Sean Douglas from Amberdata was talking about how, when the uncle rate is some value X, you don't actually need 51 percent of the network's hash power to execute a successful 51 percent attack. I sent him an email and said, cool, but you didn't really provide any data; it was literally just a bullet point.
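As a back-of-envelope illustration of why the high-latency group earns less (my own simplified model, not Whiteblock's methodology, and all the numbers are made up): if a miner's blocks take d seconds to reach the network and block arrivals are Poisson with mean interval T, a competing block appears during propagation with probability roughly 1 - exp(-d/T), and the miner's expected reward is discounted by about that factor.

```python
import math

def expected_reward_share(hashrate_share, delay_s, block_interval_s):
    """Fraction of total rewards a miner keeps after latency-induced orphaning.

    Approximation: a block taking delay_s to propagate is orphaned with
    probability 1 - exp(-delay_s / block_interval_s), i.e. the chance a
    competing block is found during propagation (Poisson arrivals).
    """
    orphan_p = 1.0 - math.exp(-delay_s / block_interval_s)
    return hashrate_share * (1.0 - orphan_p)

# Hypothetical numbers: two miners with equal hashrate, 15-second block time.
fast = expected_reward_share(0.5, delay_s=0.5, block_interval_s=15.0)
slow = expected_reward_share(0.5, delay_s=4.0, block_interval_s=15.0)
print(f"fast miner keeps {fast:.3f}, slow miner keeps {slow:.3f}")
print(f"fast/slow reward ratio: {fast / slow:.2f}")
```

With these illustrative numbers the low-latency miner earns about 26 percent more than the high-latency one, the same order of magnitude as the 25 percent balance gap observed in the test.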
So I asked him for the data and he never responded, so I figured we'd just run the tests on our own. That's where our next test comes in; we built it out ourselves. We had four nodes, we forced a really high uncle rate through latency, and one node was a super node that controlled approximately 46 percent of the network's hashing power. We used a low block time because we really just wanted to observe these effects. We measured a 57 percent uncle rate as a result, and we ran the test for a thousand blocks. These charts are kind of wacky, so let's skip to the end: the node with the lowest latency and the highest hashing power controlled most of the blocks that were produced. Pretty interesting.

So what do we do with this information? For one, we could raise the block time and the gas limits: higher gas limits mean bigger blocks, and bigger blocks mean more transactions. Luckily, we didn't really need to worry about that, because we started working with Ubiq. They hired us to do some tests. Ubiq was an early Ethereum fork, pretty much just a Geth fork, and they implemented their own custom difficulty algorithm targeting an 88-second block time, which means they have larger blocks. They wanted to see if they could raise the default gas limit from 4 million to 30 million, make sure it wouldn't negatively impact performance, and benchmark that against Ethereum as well. So we set it all up. Hold on.

Anyway, at the end I want to show you guys a demo, because I'm not really good at doing these presentations, I'm so sorry. I'm very good face-to-face, though.
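One hedged way to see why 46 percent of the hash power can dominate under a 57 percent uncle rate (a deliberate simplification of my own, not the exact model from the talk or the test): if the honest majority wastes a fraction u of its work on uncles while the well-connected attacker wastes almost none, the attacker's share of canonical blocks becomes a / (a + (1 - a)(1 - u)).

```python
def canonical_share(attacker_hashrate, honest_uncle_rate):
    """Attacker's fraction of canonical (main-chain) blocks, assuming the
    attacker's blocks are never orphaned while honest blocks are orphaned
    at honest_uncle_rate. A best-case-for-the-attacker simplification.
    """
    a = attacker_hashrate
    honest_effective = (1.0 - a) * (1.0 - honest_uncle_rate)
    return a / (a + honest_effective)

# Numbers from the test described above: 46% hash power, 57% uncle rate.
share = canonical_share(0.46, 0.57)
print(f"attacker's canonical-chain share: {share:.2%}")
```

Under this simplification the super node ends up producing roughly two-thirds of the canonical chain despite holding under half the hash power, which is consistent with the observation that it controlled most of the blocks produced.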
So you guys should come talk to me. Anyway: throughput was higher for Ubiq under the same conditions, and the difficulty algorithm gave higher stability and more consistent block times. The uncle rate was 2.7 percent for Ubiq and 27.6 percent for Ethereum under the same conditions. This just goes to show there's still a lot we need to test.

Does anyone have any questions so far? Is this making sense to you guys? Okay, cool. So we're all here, let's see. I figured I could show you a demo of what we're doing. This isn't it; this is just my terminal. Do you guys want to see a demo right now? Should I do that right now, up here? Okay, cool. All right, so I already had a blockchain built out here, so we do... all right, I guess I messed something up, so I'll do it later. I don't want to waste any more time, but do any of you have any questions so far?

I was going to say that we could get you to sit in the corner for the breakout session, and people can come talk to you directly. Okay. Yeah, if anybody wants to ask a question now, I don't want to shut anybody up. Right. Cool. All right, thanks.

All right, we're going to do a breakout session for an hour, and then the objectives are at 4:30. We're stopping the stream for today. See you tomorrow.