I spend my time working for Ethereum on the Virtual Machine: improving its performance, and working on designs for possible successors to what we have, in order to solve some of the performance problems we're running into. If this works, there will be another slide. How about that?

The first problem you run into in any sort of optimization work, according to my old friend Jerry Schwartz, is that all benchmarks are bogus. You'll never have a set of benchmarks that actually represents the real world. But if you don't have benchmarks, you'll just go in circles and never make progress. So the benchmarks I'm working with are a few algorithm kernels relevant to what we're doing. RC5 is an old and useless cipher, but it's a good example of a cipher that uses a lot of 32- and 64-bit arithmetic and a lot of complex logic. BLAKE2b is still an important hash function; it's also a lot of 64-bit logic. Blum Blum Shub is a cryptographic random number generator, I think one of the slowest in the world. It operates on big numbers, so it can use the 256-bit registers of the VM effectively, and ECMUL can also use big registers effectively. And then I have a few tests of individual EVM operations: small EVM assembly programs that try to isolate individual operations.

And this is a graph of what I got with the whole thing. It's a bit complex, but what you can see along the bottom is the different benchmarks, and the first three entries are major clients: Geth is our Go client, Parity is a Rust client, and cpp-ethereum is the C++ client. The rest aren't clients. The EVM-to-WASM transpiler is part of the WASM research: a program that takes EVM code and translates it into WASM code, which I then fed to Google's V8 engine to generate assembly code. EVMJIT, which Pavel was just talking about, generates assembly code directly from EVM code.
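Blum Blum Shub, mentioned above, does nothing but square a state modulo a product of two primes, which is exactly the kind of wide modular arithmetic the EVM's 256-bit registers are good at. A minimal sketch, with toy parameters chosen for illustration only (real use needs large primes and a seed coprime to the modulus):

```python
# Blum Blum Shub: x_{n+1} = x_n^2 mod M, where M = p*q for primes
# p, q congruent to 3 mod 4. Toy parameters -- not cryptographic sizes.
def bbs_bits(seed, p=11, q=23, n=8):
    m = p * q                     # M = 253 here
    x = seed % m
    bits = []
    for _ in range(n):
        x = (x * x) % m           # one squaring step
        bits.append(x & 1)        # emit the least-significant bit
    return bits

print(bbs_bits(3))                # -> [1, 1, 0, 0, 1, 0, 1, 0]
```

Each output bit costs a full-width modular multiplication, which is why it is slow on ordinary hardware but maps nicely onto a VM with wide registers.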
And then for native C++, I rewrote the benchmark programs in C++ instead of Solidity or EVM assembly and compiled those to assembly. So pretty clearly the C++ wins the race, and pretty clearly exponentiation is hard for everybody, which isn't surprising, but it's a little concerning. There might actually be possible exploits: write contracts that do exponentiation, get charged only a few gas, and take a whole lot of time. And RC5 looks hard because RC5 depends on dynamic shifting, and EVM does not yet have a shift operator, so shifts get imitated with exponentiation. In between, things are relatively regular, and you could actually predict the speed from the language the client is written in.

Where did that come from? Next slide. I can't see the screen with these glasses on. To simplify it, this is looking at one angle: it's a harmonic mean of the performance of each client, and it shows pretty much the same thing. So clearly the interpreters are not as fast as going straight to machine code by any route, and clearly some interpreters are better than others, but they've all been good enough so far for our purposes.

And of course I love car races as examples. Last year somebody in the audience shouted out that instead of using classic cars burning tons of gasoline, I should be using a Tesla, and Teslas are nice. Is George Hallam here? Are you here, George? George would agree that rather than a Tesla, this would be much cooler. This is a '68 Mustang fastback. Under the hood are some powerful electric motors, and the trunk is full of lithium-ion batteries. And if this works... where's the button? We will see how this does against a Tesla. Is there any sound? No such luck. But there goes the Mustang. The Tesla doesn't have a chance. A digit's missing there, but after a quarter mile it had gotten to 140 miles an hour. That's a lot.

So what keeps those interpreters from reaching native speed?
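The shift-via-exponentiation point above is easy to see concretely. With no shift opcode, a compiler has to emit `x << n` as a multiplication by `2**n`, meaning an EXP followed by a MUL, instead of a single cheap hardware shift. A sketch of the two equivalent computations on 256-bit EVM words:

```python
# The EVM (at the time of this talk) has no shift opcodes, so compilers
# emit the shift x << n as x * 2**n: an EXP plus a MUL, far more
# expensive than one hardware shift instruction.
MOD = 2 ** 256  # EVM words are 256-bit; arithmetic wraps mod 2**256

def shl_via_exp(x, n):
    return (x * pow(2, n)) % MOD   # what the EVM actually executes

def shl_native(x, n):
    return (x << n) % MOD          # what hardware could do directly

assert shl_via_exp(0xDEADBEEF, 13) == shl_native(0xDEADBEEF, 13)
```

Same answer either way; the difference is that EXP's cost (in both gas and wall-clock time) scales with the exponent, while a native shift is constant time.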
The first answer is that they're interpreters, so they've got that overhead. You can work hard and reduce the overhead, but generally you can't do better than about three or four to one compared to native code. And for our particular interpreter, the 256-bit registers slow us down, because real hardware has 32- and 64-bit registers. And the unconstrained control flow hurts us a lot; I'll get to that.

The 256 bits: if you remember grade school math, adding or multiplying two-digit numbers is pretty easy, you can do it in your head. With four digits, suddenly it gets a lot harder. And it's quadratic, so it gets worse and worse. 256 squared is a lot.

Control flow. The jump operator in EVM... I mean, gotos are considered harmful, but at least if you say goto label, it will go to one and exactly one label. In EVM, you say go to whatever's on the stack. So there's often no way of knowing statically where it's going to go. So you can have a nice little program like this: F calls G and H and returns, G calls I and returns, et cetera. Nice clean structure, no trouble: no trouble to understand, no trouble for static analysis, no trouble for anything. What does it actually look like to the EVM? That's what it looks like to the EVM. So if you're trying to do formal analysis, if you're trying to write a compiler, if you're trying to do anything with it, again, the number of paths goes up quadratically and you're in trouble. Because pretty much, if you can't do it in linear time, or at least n log n time, on the blockchain at deployment time or at run time, you can't do it at all.

So how do we do better? Well, EVMJIT is already doing better. I won't back up, but if you look at the slide, EVMJIT is actually pretty close to native speed. It does very well on the wider arithmetic and not so well on narrow arithmetic and complex logic. But it's a very good JIT. I've told Pavel he gets to be the electric fox.
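The quadratic cost of wide arithmetic shows up directly when an interpreter implements 256-bit multiplication in software. On a 64-bit machine a 256-bit value is four limbs, and schoolbook multiplication needs every limb-by-limb product. A sketch of the idea (the limb layout here is illustrative, not any particular client's implementation):

```python
# Schoolbook multiplication of 256-bit values as four 64-bit limbs.
# 4 limbs x 4 limbs = 16 partial products: cost grows quadratically
# with word width, which is why 256-bit interpreters lag native code.
LIMB = 2 ** 64

def to_limbs(x):
    # little-endian 64-bit limbs, least significant first
    return [(x >> (64 * i)) & (LIMB - 1) for i in range(4)]

def mul256(a, b):
    al, bl = to_limbs(a), to_limbs(b)
    acc = [0] * 8                      # double-width accumulator
    for i in range(4):
        for j in range(4):             # all 16 partial products
            acc[i + j] += al[i] * bl[j]
    result = 0
    for k, v in enumerate(acc):
        result += v << (64 * k)
    return result % (2 ** 256)         # EVM MUL wraps mod 2**256

assert mul256(3, 5) == 15
```

One hardware MUL on a 64-bit register versus sixteen of them plus carry handling: that ratio is the quadratic penalty in miniature.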
He's tired of these little three- and four-letter names that don't mean anything. This is a racing team out of Latvia. That's a completely electric dragster. And here it is, setting the European world record. There it goes. Drag races are fast. It's not impressive, is it? It's over. Hello? 275 miles an hour in just a few seconds. Love these things.

So we've got two research programs going on for how to improve things. They've been nicknamed EVM 1.5 and EVM 2.0, which doesn't really mean anything; those are just nicknames. 1.5 is a suggestion to extend the current EVM by adding new opcodes and requirements. So we forbid those unconstrained jumps: we simply will not allow you to do that. And we then have to provide a way to do the things you would otherwise do with those jumps, so there are opcodes for subroutines. And then we've got to get away from having nothing but 256-bit registers, so we've got opcodes for native scalars, and opcodes for SIMD, because real hardware has all this silicon devoted to SIMD registers. If you go to Google and type "SIMD crypto", you get a lot of results, so it would be useful to make that hardware available. And at deployment time, there's a validation phase that goes through the code and makes sure it actually follows the rules. What a concept.

And then 2.0, well, gee, it provides opcodes for structured control flow. It's stricter than 1.5; it actually looks like a high-level language, with if-else and such. And it provides opcodes for native scalars. The SIMD is coming later, but there's a SIMD proposal. And it also has a validation phase, where it validates control flow, stack discipline, and type safety. So they're very similar at that level.

So we've got some technically very, very similar proposals. They both provide for very fast compilation to native code. It could be done as a JIT, but we've come to realize Ethereum cannot do JITs. They are actually exploitable.
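The deploy-time validation phase both proposals call for works because constrained jumps can be checked in a single linear pass. Here is a deliberately simplified sketch of that idea: a validator that only accepts jumps whose target is a constant pushed immediately before them. The opcode byte values are the real EVM ones, but the acceptance rule is a toy version of the proposals, not either spec:

```python
# One-pass, deploy-time validation in the spirit of EVM 1.5: a jump is
# legal only if its target is a constant, marked destination, so the
# whole check is linear in code size. Simplified rule for illustration.
JUMPDEST, PUSH1, JUMP, STOP = 0x5B, 0x60, 0x56, 0x00

def validate(code):
    # Pass 1: collect the legal jump destinations.
    dests, pc = set(), 0
    while pc < len(code):
        op = code[pc]
        if op == JUMPDEST:
            dests.add(pc)
        pc += 2 if op == PUSH1 else 1     # skip PUSH immediates
    # Pass 2: every JUMP must follow a PUSH of a valid destination.
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == JUMP:
            if pc < 2 or code[pc - 2] != PUSH1 or code[pc - 1] not in dests:
                return False              # dynamic or bad target: reject
        pc += 2 if op == PUSH1 else 1
    return True

# PUSH1 4, JUMP, STOP, JUMPDEST -> constant jump to offset 4: accepted.
assert validate(bytes([PUSH1, 4, JUMP, STOP, JUMPDEST]))
```

A bare JUMP with its target coming from arbitrary stack computation is exactly what this rejects, and rejecting it is what makes compilers and analyzers tractable.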
That is, if you find or write a contract which takes a long time to compile with a JIT but requires very little gas to run, and you start hammering those contracts, you can do a really nice DoS attack. So if you're going to do any compiling, you've got to do it up front, at deployment time.

And Martin's notion of transpilers, I think, is very important here. 1.5 could be transpiled to 2.0; 2.0 can be transpiled to 1.5; 1.5 or 2.0 can be transpiled to 1.0. Either of them can be transpiled to the JVM. You can make up new ones. Pretty much, you can compile any VM you want to some other VM. And he also has a notion of gas injection: you put little pieces of code into the right places to count the gas. What this means to me is that it doesn't matter what execution engine a client chooses, because on the blockchain you can put a contract that translates into that execution engine. So these can be completely independent choices. We could actually choose to support a number of VMs if we wanted to, and independent parties could decide to support a different VM, put a transpiler on the blockchain, and away they go.

Gee, I'm almost out of slides. And there's no sound; that's really too bad. Yeah. So what is the big deal about native performance? What's the big deal about being lean and mean and close to the metal? Well, we saw the electric dragsters. Here's a real top fuel dragster. They run on a mixture of diesel fuel and nitromethane. Just about a month and a half ago, not too far from Ming's place in Michigan, this guy got the world record, and it's over: 338 miles per hour in a quarter mile. These guys pull about five or six Gs, which is about the same as an astronaut taking off in the space shuttle. But there's a problem with going fully native. In the C world, we call it undefined behavior. No, that's the last one. Come here. Okay. Play, play. There it goes. What's going on with my screen? We'll try it again.
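The gas-injection idea above can be sketched very simply: split the code into basic blocks and prepend one charging instruction per block, so metering is carried by the injected code itself rather than by the execution engine. The instruction names and costs here are made up for illustration, not real EVM gas:

```python
# Gas injection: prepend a single CHARGE instruction to each basic
# block, covering the total cost of the block. The engine then needs
# no built-in metering. Costs here are illustrative, not real gas.
COST = {"ADD": 3, "MUL": 5, "JUMPDEST": 1, "JUMP": 8}

def inject_gas(instrs):
    out, block = [], []
    def flush():
        if block:
            out.append(("CHARGE", sum(COST[op] for op in block)))
            out.extend((op,) for op in block)
            block.clear()
    for op in instrs:
        if op == "JUMPDEST":      # a label starts a new basic block
            flush()
        block.append(op)
        if op == "JUMP":          # a jump ends the current block
            flush()
    flush()
    return out

print(inject_gas(["ADD", "JUMP", "JUMPDEST", "MUL"]))
# -> [('CHARGE', 11), ('ADD',), ('JUMP',), ('CHARGE', 6), ('JUMPDEST',), ('MUL',)]
```

Because control can only enter a block at its head, one charge per block meters the program exactly, and the same trick works no matter which target VM the transpiler emits.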
The driver actually walked away unscathed. I love this guy. We'll give you one more. Boom. It turns out I'm done. I knew it: audiences love explosions.