So now I'm delighted to pass you over to our next speaker, who will begin his presentation. Here you are again.

Yeah, so I want to talk about Turbo ewasm, that's a working name. It's actually a reference to TurboGeth, which some of you may have heard of: a project by Alexey to improve the speed of Geth. A colleague of mine from the ewasm team, Paul Dworzanski, and I are working on leveraging ewasm to try to make Geth, or at least the Ethereum 1.0 clients, a bit faster.

As a quick reminder, what is ewasm? Maybe we should start with WebAssembly: what is wasm? It's a binary format that you can run in your browser. You can write a program in whatever language you want, compile it to the wasm format, and then your browser should be able to run it. It's no longer used only as a binary format for the browser, even if that was the initial intent; it has a lot of other applications, and you have people running it, for example, in what's called ring zero. That used to be done in the 90s with the JVM, and now people do it with a wasm VM. And you also have blockchain projects that use it, for example EOS, and of course Parity uses it for, actually, is it for Polkadot or for Substrate, or both? Both. Okay, thank you.

Anyway, so how does ewasm compare to wasm? Well, it's basically regular wasm, except that you import functions that correspond to what up until now were specific instructions in the EVM. Functions like getting the coinbase or getting the code of the caller, things that used to be instructions, are now just function calls that you import.

So how is ewasm going to be deployed in Ethereum? You might have heard of Ethereum 1.x, which has been discussed for the last month and a half. It's basically a set of improvements to Ethereum 1.0 to make it more scalable while waiting for Ethereum 2.0, which is now nicknamed Serenity. Until Serenity arrives we still need to make 1.0 scale, and that's what we're working on at the moment. It was proposed in Prague during Devcon, and it will be decided next year, that we use wasm as the language for precompiles. Precompiles are special contracts that are called often; they are going to be rewritten in wasm, and the advantage is that you write the contract once and for all, so you don't have to re-implement it every time there is a new client. When it comes to Ethereum 2.0, wasm is the prime candidate to be the execution engine for each shard, but that's further down the road, so there's no promise of that yet.

And yes, even though I'm part of the ewasm team, just because we're working on wasm doesn't mean the old EVM is going to disappear. It's going to stay, first because we have all those contracts that need to remain, and, as I'll explain in a couple of minutes, wasm is still a work in progress, so there are plenty of reasons to keep investing in Solidity, in the EVM, and in that whole environment.

So yeah, like I was saying, the EVM is going to stay. When you compare wasm to the current state of things, there are some challenges, some trade-offs you have to overcome. The first is binary size. Solidity compiles to EVM code that is very streamlined: you start executing it, there's no real transformation, it's pretty simple, pretty bare metal.
When it comes to compiling to wasm, you can take whatever language, but those compilers don't know you're targeting a blockchain environment, so they will produce a binary that contains a runtime: a lot of cruft that is basically there to prepare for anything that can happen, and all that cruft takes a lot of space. Some languages fare better than others: C is clearly the best, Rust is slightly worse, and Go is absolutely horrible because you've got the garbage collector. Each language has its specificities, but some are adapted to this particular application and some are not.

Like I was saying, the wasm spec is still evolving. It's going to remain more or less what it looks like right now, but we could have some surprises, so making sure that every single client does the exact same thing is a bit of a challenge.

And the biggest trade-off with wasm is that, as I said, with an EVM binary you start at the first instruction and you execute all the way until you stop. Wasm requires a little more work, actually a lot more work: you need to transform and validate the binary, go through the entire program and check that everything is fine, that all the jumps and loops are well formed, and so on. It's been designed for applications that run for a long time. Take Gmail: you have Gmail in your browser, and it could be written in wasm. Clearly you're going to spend ten seconds waiting for Gmail to load, but once it's done, it's going to keep running for hours at a time. In the case of a smart contract, that loading overhead is a bit of a problem, because the smart contract executes and then you have to load another module which corresponds to another contract. So it's not as simple and it's not as fast. That's one of the trade-offs we have to deal with.

Until now I described precompiles as libraries, contracts that you call. You could optimize it that way with wasm too: you load the module once and for all, it stays there, and you keep calling it again and again. But at that point it starts looking more like a service, and that's really what the core of this proposal is about: start thinking about precompiles more like services than libraries, and keep them permanently running. And because in a regular operating system, which is the world that Paul and I come from, services tend to have better access rights, we want to explore the idea of giving those services a bit more control over what the client does than regular contracts have. For example, we would like to give them access to the transaction pool, or the ability to map memory.

So that's what I'm going to talk about: the main domains where we could actually improve things, or at least experiment. Scalability, of course, because that's kind of what wasm is here for. Storage: whoever has implemented a client knows that storage is a problem; if anybody has done a full sync, which is what I'm doing right now at home, it's taken three weeks and I'm not even halfway done. Parity. That's right, yeah, Parity. I'm going to the Parity office on Monday, so I'm not saying anything. And yes, consensus.
So that last one is the one that's a bit more science-fiction-y. I don't quite know how to present it, but I just want to throw out a couple of ideas.

So let's start with scalability, with a quick reminder about parallel execution. Right now, when you receive a block, you have transactions, and all of those transactions get executed before the mining starts, and it's very important that this is sequential; I'm going to explain why. What I mean is that if you look at the second CPU, it's idling until the mining starts. It's not a lot of wasted time, because mining clearly dwarfs everything else, but because you have things like uncle rates that go up from time to time, being first to the block is quite interesting. So what we would like is the second case, where all the transactions are spread between the two CPUs, so you get to the mining faster. And that's all the more important because when you're actually syncing with the network, there is no mining, so executing transactions is the only thing preventing you from being up to date.

So why can't you just run transactions in parallel? Ooh, okay, interesting. Yes, but at least the picture is still here, so that's good. To try to explain why parallelism is difficult, I propose three transactions. The squares on the left represent the state, and each transaction reads from or writes to some of the squares: transaction one reads and writes the red square, transaction two the green square, and so on. And now, yes, the next slide still exists. You can see, for example, that if you perform transaction three before transaction two, you get a different result than if you run transaction two first and transaction three afterwards. In the case of transactions three and one it's not a problem, because they don't write to the same area. But you see the problem: the exact order of the transactions needs to be reproducible. If it isn't, you just dump all the transactions in your block, someone receives the block, executes the transactions in parallel again with a different number of CPUs or something like that, the order turns out to be inverted, and therefore you have no consensus and you get a fork.

So the difficulty in partitioning is really determining which non-conflicting transactions can be run together, concurrently. If two transactions touch the same area, it's better if they run sequentially, and if they don't touch the same area at all, you can separate them and run them in parallel. One way to see it is that partitioning is really a generalization of sharding: with sharding you have a big global state and you want to run a parallel process on each shard; what we want is to generalize that, so that inside a single shard you can create partitions and write to those separate partitions.

There's an EIP that already exists for this, EIP 648, created by Vitalik. The idea is a little tweak to the current model: each transaction has to declare upfront which addresses it's going to touch. So in the first case, you have the transaction pool on the left and each transaction says, I'm going to touch those addresses. If, for example, transactions one and two overlap, the scheduler will say: clearly those overlap, so they should be executed sequentially by CPU one; three and four overlap too, so they're going to be executed sequentially by CPU two. Once that's done, you just wait for completion and then you start mining.
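Just to make that idea concrete, here is a rough Go sketch of such a scheduler. Nothing here comes from EIP 648 or from Geth; all the types and names are invented for illustration. It groups transactions so that any two that declare a common address end up in the same batch; batches can then run in parallel, one per CPU, with transactions inside a batch running sequentially.

```go
package main

import "fmt"

type Address [20]byte

type Tx struct {
	Hash     string
	Declared []Address // addresses the sender promises to read or write
}

// conflicts reports whether two transactions declare at least one common address.
func conflicts(a, b Tx) bool {
	seen := make(map[Address]bool, len(a.Declared))
	for _, addr := range a.Declared {
		seen[addr] = true
	}
	for _, addr := range b.Declared {
		if seen[addr] {
			return true
		}
	}
	return false
}

// schedule builds the batches. If a transaction conflicts with several
// existing batches, those batches are merged so that every transaction
// touching a common address stays sequential.
func schedule(txs []Tx) [][]Tx {
	var batches [][]Tx
	for _, tx := range txs {
		var hits []int // indices of batches this transaction conflicts with
		for i, batch := range batches {
			for _, other := range batch {
				if conflicts(tx, other) {
					hits = append(hits, i)
					break
				}
			}
		}
		if len(hits) == 0 {
			// No conflicts: this transaction can run on a fresh CPU.
			batches = append(batches, []Tx{tx})
			continue
		}
		// Merge all conflicting batches into the first one, back to front so
		// lower indices stay valid, then append the transaction to it.
		first := hits[0]
		for j := len(hits) - 1; j >= 1; j-- {
			i := hits[j]
			batches[first] = append(batches[first], batches[i]...)
			batches = append(batches[:i], batches[i+1:]...)
		}
		batches[first] = append(batches[first], tx)
	}
	return batches
}

func main() {
	a, b := Address{1}, Address{2}
	txs := []Tx{
		{Hash: "t1", Declared: []Address{a}},
		{Hash: "t2", Declared: []Address{b}},
		{Hash: "t3", Declared: []Address{a, b}}, // conflicts with both, so the batches merge
	}
	for i, batch := range schedule(txs) {
		fmt.Println("batch", i, "size", len(batch))
	}
}
```

The merge step matters: a transaction that overlaps two otherwise independent batches forces them onto the same CPU, which is exactly the sequential fallback the talk describes.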
One of the things Paul and I are working on is live partitioning. The idea is to do more or less the same thing, but without actually changing the interface: you simply run everything in parallel as if it were always possible, and if you detect a conflict, you drop the second writer and put its transaction afterwards, or in a different block. So the way that would happen: you have two transactions running on two different CPUs, and the global state on the far right. Transaction one writes to two locations, transaction two to one location. In the next step, transaction two tries to write to a second location, but it's a location that has already been written to by transaction one, so what we do is simply idle. It feels a bit wasteful, but what you have to realize is that, compared to the current state of things, you wouldn't be using CPU two anyway. It's like trying to cut in line: if you don't get caught, good; if you get caught, well, too bad, but you don't go to jail for cutting in line, so that's fine.

Another execution model we're looking into is the classic MapReduce. A lot of people are familiar with it because of Hadoop, I assume. The idea is to go even deeper inside the state and realize that not all transactions address the same area of the state. That means you could technically run several transactions in parallel, calling the same contract but not touching the same area. You would need some special kind of scheduling service, which I'll get back to. This is fantasy code, of course, but the idea is that you have an array that contains all your tokens, so this is some kind of ERC-20 token. You have a first function that the scheduling contract, or scheduling service, would call to ask you: I have two transactions, are they conflicting? In this very simple example, you just check that the to and from addresses are all different, and if they are, you return true. That way the scheduling service knows it's possible to schedule them together, and it will then call the mapper function for each transaction on a different CPU; because those transactions are not conflicting, that's safe.

Going back to the diagram view, you have four CPUs and four transactions, but two of those transactions write to the same location, so at most three transactions can be executed at the same time. You assign three CPUs to them, the fourth is idling or doing regular contract management, and the remaining transaction is pushed back to another block or later down the list.
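The fantasy code from the slide isn't reproduced here, but a rough Go sketch of the same idea, with invented names and nothing taken from any actual client or contract, could look like this: the balances of an ERC-20-like token live in one array, the scheduling service first asks whether two transfers are independent, and only then maps them onto different CPUs.

```go
package main

import (
	"fmt"
	"sync"
)

type Transfer struct {
	From, To int // indices into the balances array
	Amount   uint64
}

type Token struct {
	balances []uint64
}

// Independent is what the scheduling service would call: two transfers can be
// scheduled together only if the four accounts involved are all distinct, so
// the transfers never read or write the same slot of the array.
func (t *Token) Independent(a, b Transfer) bool {
	return a.From != b.From && a.From != b.To &&
		a.To != b.From && a.To != b.To
}

// Map applies one transfer. Because the scheduler only maps independent
// transfers concurrently, each call touches its own slots of the array.
func (t *Token) Map(tr Transfer) {
	if t.balances[tr.From] < tr.Amount {
		return // insufficient funds, transfer is simply dropped in this sketch
	}
	t.balances[tr.From] -= tr.Amount
	t.balances[tr.To] += tr.Amount
}

func main() {
	tok := &Token{balances: []uint64{100, 100, 100, 100}}
	t1 := Transfer{From: 0, To: 1, Amount: 10}
	t2 := Transfer{From: 2, To: 3, Amount: 5}

	if tok.Independent(t1, t2) {
		// The scheduling service can safely run the two mappers in parallel.
		var wg sync.WaitGroup
		for _, tr := range []Transfer{t1, t2} {
			wg.Add(1)
			go func(tr Transfer) {
				defer wg.Done()
				tok.Map(tr)
			}(tr)
		}
		wg.Wait()
	} else {
		// Conflicting transfers fall back to sequential execution.
		tok.Map(t1)
		tok.Map(t2)
	}
	fmt.Println(tok.balances) // [90 110 95 105]
}
```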
So that was it for execution scalability. Now I want to talk about storage a bit. The first technique, one we use a lot in operating systems, is caching: you just cache a bit of the state. I was talking about Ethereum 1.x, or 1.5, before; there's a proposal, again by Alexey, the TurboGeth guy, to have some linear state space. Apparently it's not making it into the final draft, but I still find it an interesting idea: map some areas of the state linearly and cache them. Geth spends a lot of time waiting for IO, so what we want is to reduce that: keep the state in memory and only write it out every so often, so we spend less time waiting for the disk to respond.

Once you do that for the state, you can do the same for the code. I have a bit of an overcomplicated diagram to explain it, but you still have the transaction pool here, and you have a cache of contracts. You can see that transactions one and three correspond to the same address, so they want to call the same contract, and it's been cached. The scheduler is going to say: okay, they clearly access the same contract, let's put them in sequence on the first CPU. Then we see that transaction two also corresponds to a cached contract, so we ask CPU two to execute it first, because right now the cache still contains that contract and we should benefit from it. Transaction four is not in the cache, so you execute it afterwards and load it from disk, which is a bit of a pain. One thing you can do, however, and that's a diagram that was shown at Devcon, is that while you're loading that contract, while you're waiting for the disk, the thread it runs on will just block and give way to another thread, so more work can get done. That's roughly what we're working on.
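Purely as an illustration, with made-up types rather than anything from Geth or the slides, a cache-aware ordering pass along those lines might look like this: transactions are grouped by the contract they call, groups whose code is already in the in-memory cache are scheduled first, and cold contracts go last so their code can be fetched from disk while other work proceeds.

```go
package main

import (
	"fmt"
	"sort"
)

type Address [20]byte

type Tx struct {
	Hash string
	To   Address // contract being called
}

// CodeCache stands in for the client's in-memory cache of contract bytecode.
type CodeCache struct {
	code map[Address][]byte
}

func (c *CodeCache) Has(a Address) bool {
	_, ok := c.code[a]
	return ok
}

// order groups transactions by target contract and moves the groups whose
// contract is already cached to the front of the schedule.
func order(txs []Tx, cache *CodeCache) [][]Tx {
	byContract := make(map[Address][]Tx)
	var addrs []Address
	for _, tx := range txs {
		if _, seen := byContract[tx.To]; !seen {
			addrs = append(addrs, tx.To)
		}
		byContract[tx.To] = append(byContract[tx.To], tx)
	}
	// Warm contracts first, cold ones last.
	sort.SliceStable(addrs, func(i, j int) bool {
		return cache.Has(addrs[i]) && !cache.Has(addrs[j])
	})
	groups := make([][]Tx, 0, len(addrs))
	for _, a := range addrs {
		groups = append(groups, byContract[a])
	}
	return groups
}

func main() {
	warm, cold := Address{1}, Address{2}
	cache := &CodeCache{code: map[Address][]byte{warm: {0x60}}}
	txs := []Tx{{"t4", cold}, {"t1", warm}, {"t3", warm}}
	for i, g := range order(txs, cache) {
		fmt.Println("slot", i, "transactions", len(g), "cached:", cache.Has(g[0].To))
	}
}
```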
And now, like I was saying, something that is a bit more science fiction, but I think it's a pretty exciting idea, so I'm talking about it anyway: giving those services, those contracts, a part in consensus. I was talking about the MapReduce execution model before, and about the other one I didn't give a name to; let's call it the commutative transaction model, I don't know. Which of those you want could depend on the type of traffic: if you're at a time when you have a lot of CryptoKitties-like traffic, or an ICO, with a lot of transactions that look the same, you don't want to use the same service as when you're executing a regular contract, just an ERC-20 token for example. So it would be interesting to be able to send a transaction to some service, to explicitly address that service, and have the service schedule those transactions for you. From there, miners can decide whether or not to run a given service, and that's pretty okay, because in Ethereum you can have successive blocks that were each generated using a different execution model. Of course, the only thing that matters is that if a service is not available, or if the miner doesn't want to run it, you should always be able to fall back to the standard execution model.

And now I start using the big words: governance. There's always a debate, governance is a big debate. The reason I find this idea interesting, in spite of it still being a bit undefined, is that you don't have to vote or get everybody to agree on a fork: you can just deploy your own service. If miners don't agree with what your contract or your service does, they will refuse your transaction, and you can go to a different miner and propose it there. The only thing that matters with that model is that even if you refuse to run a given contract, if you end up with a block where a transaction did use it, you need to be able to execute it. So the hope is that this results in fewer forks, or at least reduces the need for forks, but that's still up for debate and I'm really interested in discussing it with people.

So, as a conclusion, the reason we want to work on this is that we believe there is an advantage for miners, simply because they get to generating the block faster, and hopefully that also translates into an advantage for the user: if it costs less to generate a block, hopefully the gas costs will also go down, and ultimately, when you send a transaction, you wait less, which is always the goal. And on that note, that's the end of my presentation, thank you. Happy to take any questions.

Do we have some questions? It's way too technical for me, but I'm sure we'll have some. Thank you, over there.

So the second part, the storage part: those ideas look kind of generic, in the sense that they could be implemented by any client. Is there any relation to ewasm?

Right, so indeed some of those ideas are already implemented in Geth. For example there's some caching; in fact Péter is currently doing a lot of work on that, actually, is it Péter or Felix, I don't remember, but one of the two is doing something there. So how is it related? It's not, it doesn't have to be wasm. The thing is, I was only mentioning it because the caching could be used as an indicator of which transaction should be scheduled next, so it's connected to that idea, but clearly caching is not a novel idea, and yes, it's already in use.

Did you measure how much of the current traffic could be parallelized?

Yes and no, in the sense that yes, I started measuring, but for that I need my node to sync the entire blockchain, and unfortunately I haven't gotten there. I have some preliminary diagrams at home, but where I am at the moment is right after the DAO hack, so you have a lot of conflicts going on, people calling the same contract over and over, so it's not giving me a real overview. But yes, I'm looking into it, and I intend to publish the results, hopefully before the end of the year. This thing takes a lot of time; I might not have the best computer or the best network connection, but yeah, it's been taking me three weeks so far.

Have you thought about using a sneakernet, just walking up to someone with a node?

Yes, I have looked into that, except that for the people who offer it, it's an $800-a-month cost. I asked a couple of people, but they were not really happy to have me run scripts that take forever on their machines. But if you have a node that has already synced, please talk to me, I would love that. You do? Yes! Except all my code is for Geth, but I'll be talking to you soon.

Do we have some more questions? Then I would say thank you very much. Let's go.