Hello, my name is Martin Holst Swende and I am the security lead at the Ethereum Foundation. Today I'm going to talk a little about EVM 4N6 — about managing attacks on the network and how we have been dealing with them. I have been security lead for about a year now; I started right around DevCon 2 in Shanghai last year. It began with the Shanghai attacks, roughly a day after I started in my new role, and they kept going for about a month. Over the past year we have also done three hard forks. We have had an unintentional consensus split. There were DoS attacks, aimed especially at the Geth client. Millions of ether have been stolen in more or less sophisticated attacks. We have had the testnet brought completely to its knees and resurrected again. And then there is the standard IT stuff: leaked databases, people taking over phone numbers, attacking GitHub accounts and so on.

So we should all be very clear about where we are. This is cryptoland, and we are all in cryptoland. It's like Australia, where everything with a heartbeat is trying to kill you, and if you make a mistake, it probably will. And for attackers, they have never had it better. They no longer need to hack point-of-sale computers and trade carding details over shady forums. They can just hack a computer, or somehow get hold of some cryptocurrency, and immediately turn it into value. So it's like the Wild West in Australia right now.

These are the Shanghai attacks. I'm not going to talk that much about them. The first Shanghai attack is that little blip down there, and then it just kept on going for a month. It was a lot of different attacks, mostly targeted at Geth. But when the dust has settled after an incident, that's when you can actually do something about it and think: how can we be better prepared the next time something similar happens, and how can we prevent it? In other words, how can we improve our readiness and our resiliency?

For readiness, it's about detecting attacks and performing analysis quickly. So we started improving that with some monitoring, setting up monitoring nodes running in the cloud and adding some graphs. It turned out there were some inherent issues which hadn't been noticed before — transaction propagation inefficiencies — and over the course of a few months, from January to March, we managed to bring down the overall network traffic by about an order of magnitude, just by removing invalid transaction propagation from the clients.

On these monitoring nodes we also added an interface so that we can extract very detailed information about the canonical blocks in the chain. If we see a consensus split, we can get very detailed information about the receipts and the differences between them, and quickly point out which transaction caused the consensus issue. So here you see a geth master node, a geth develop node and a Parity node, and in this image they differ on two fields, marked in red there — that's because Parity's RPC interface exposes a few fields that differ from Geth's.
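To give a feel for what that kind of cross-client check can look like, here is a minimal sketch — not our actual monitoring setup — that only uses the standard eth_getBlockByNumber call against two placeholder node URLs and walks a block range looking for the first height where the clients disagree on the canonical hash:

```python
# Minimal sketch (not the actual monitoring setup): compare the canonical
# chain of two clients over the standard JSON-RPC API and report the first
# height where they disagree. Node URLs are placeholders.
import requests

NODES = {
    "geth":   "http://localhost:8545",
    "parity": "http://localhost:8546",
}

def rpc(url, method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(url, json=payload).json()["result"]

def block_hash(url, number):
    block = rpc(url, "eth_getBlockByNumber", [hex(number), False])
    return block["hash"] if block else None

def find_split(start, end):
    # Naive scan: a None (not-yet-synced) block also shows up as a mismatch,
    # so in practice you would only compare heights both nodes already have.
    for n in range(start, end + 1):
        hashes = {name: block_hash(url, n) for name, url in NODES.items()}
        if len(set(hashes.values())) > 1:
            print("possible consensus split at block %d: %s" % (n, hashes))
            return n
    return None

if __name__ == "__main__":
    find_split(4000000, 4000100)
```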
Now, as we go into analysis, I'm going to say a few words about the EVM, because there might be a misconception that any minor difference in the implementation of the EVM will automatically result in a consensus failure. That's not quite true, because some parts of the EVM are ephemeral — such as the memory and the stack — and do not necessarily trigger consensus issues. But they are very interesting, because they can be used to trigger consensus errors. In order to really measure EVMs side by side and detect implementation differences, we need a kind of op-by-op view of the internal state. So we pushed pretty hard to get a common output format for EVMs, so that after each instruction the EVM outputs a JSON blob with its internal state, as you can see on the left — and also the capability to use an arbitrary pre-state and genesis configuration with the standalone EVMs.

One problem that can arise is this: if we're hit by an attack which blows the node out of the water, how can we analyze it? Our node just died, right? How can I analyze a transaction if that transaction crashes my node? Well, if we have a standalone EVM, we can just fetch the pre-state for the sender and the receiver — just those two accounts — execute the transaction locally in our EVM, and then analyze the trace to see whether we missed anything: were there any external references we should have included? We fetch those and start over. And if our standalone EVM crashes, then we have successfully reproduced the transaction. For this we only need the standard Web3 API, without any debug specialties.

So I'm going to quickly demonstrate how we can do analysis of the jumpdest attack, which we were hit by. I'm running this little reproducer here: I pipe in the hash of the transaction used in the attack, telling it to use my local EVM rather than going through Docker. It basically sets the right fork rules for that particular block, executes it, and produces some intermediary traces here that we can take a look at. Let's go directly to the final trace. I'm showing this in what I call the op viewer — or "retromix", if you like — a Remix-like debug viewer for the JSON output format that I showed earlier. This is a good starting point for analyzing what's happening in the transaction.

So you can see that this particular transaction does an EXTCODECOPY, and the EXTCODECOPY fills the memory with 0x5b. It does this repeatedly, and as you can see, the memory keeps growing. It keeps doing this for about 600 steps — I'm going to go a bit faster here — until it has filled up the memory with half a megabyte of 0x5b, which happens to be the JUMPDEST opcode. Then it puts some more code in there, and this looks like actual EVM code: 6035565b..., which I'm sure all of you recognize as PUSH1, JUMP, JUMPDEST and STOP. It then executes a CREATE with that code, and as you can see, the size of the CREATE input is the full half megabyte.

OK, so now we know that the attacker is doing creates, and he keeps doing them repeatedly. One part of the memory changes between each invocation — there's a little counter down there — and I'll skip forward. It's all just creates, and on the final one it goes out of gas. So by this point you have a pretty good idea of what is going on: it's doing creates lots of times with a large memory segment completely filled with JUMPDESTs, and it changes one little byte each time, obviously to bypass any caching mechanisms.

By reproducing it and viewing the trace in this fashion, we can do a very quick analysis of what happened, and we can also benchmark it right away. It's running at 300 milliseconds. And if I compare that — so this is the Geth EVM with the patch that was applied after this attack — I can try it against the EVM without the jumpdest-analysis patch. And as you can see, that took nine seconds.
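To make the "standard Web3 API only" point concrete, here is a rough sketch of that reproduction loop. It is not the actual evmlab reproducer; the evm command-line flags and the pre-state file format are assumptions that may differ between tool versions — the point is simply that the pre-state can be assembled from plain Web3 calls and fed to a standalone EVM:

```python
# Rough sketch of the reproduction loop: assemble a pre-state for the accounts
# a transaction touches using only standard JSON-RPC, then hand it to a
# standalone EVM binary. The 'evm' flag names and the pre-state file format
# are assumptions and may differ between tool versions.
import json
import subprocess
import requests

RPC_URL = "http://localhost:8545"   # any node exposing the standard Web3 API

def rpc(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC_URL, json=payload).json()["result"]

def account_state(addr, block):
    # Plain Web3 calls only -- no debug endpoints needed.
    return {
        "balance": rpc("eth_getBalance", [addr, block]),
        "nonce":   rpc("eth_getTransactionCount", [addr, block]),
        "code":    rpc("eth_getCode", [addr, block]),
        "storage": {},  # storage slots are added lazily when the trace needs them
    }

def reproduce(tx_hash):
    tx = rpc("eth_getTransactionByHash", [tx_hash])
    block = hex(int(tx["blockNumber"], 16) - 1)   # state *before* the block
    accounts = {a: account_state(a, block) for a in (tx["from"], tx["to"]) if a}
    with open("prestate.json", "w") as f:
        json.dump(accounts, f)
    # Execute the transaction locally in a standalone EVM, capturing the JSON
    # trace. If the trace references accounts we did not fetch, add them to
    # the pre-state and run again; if the EVM crashes, the issue has been
    # reproduced locally.
    result = subprocess.run(
        ["evm", "--json", "--prestate", "prestate.json",
         "--receiver", tx["to"] or "", "--input", tx["input"], "run"],
        capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    print(reproduce("0x..."))  # hash of the transaction to reproduce
```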
So this tooling makes it possible for us to do a quick analysis and then to check: does this patch work? I can share it with coworkers, and they can try out various patches and see which one is best. I can also run this in a web-based format and do all the same things to investigate other on-chain events. For example — which one did I pick? — the Parity wallet attack. There we have the Parity wallet attack reproduced, and you can run it locally, or check and annotate the trace of what happened there. And here, for example, is the fateful, infamous DELEGATECALL, if you want to analyze that more in depth. So evmlab, which I have shown you a part of, makes it possible to do EVM assembly Pythonically, investigate these kinds of issues and dissect attacks at a really low level.

We also had two hard forks, and in preparation for those we ramped up the testing, introducing parameterized or generalized tests — which Dmitri talked about yesterday in the breakout room — and also put it all into Hive. Hive is Péter Szilágyi's super cool framework for running nodes in a black-box fashion: it synthesizes the environment, the genesis, the blocks and everything, and then you can compare against the expected post-state after a sequence of blocks. This makes it possible to run about 24,000 test cases against pyethereum, Parity, Geth and cpp-ethereum, and it runs 24/7, 365. It removes the dependency on the developers to perform tests as part of the process — testing can now be a totally separate process, which doesn't really rely on the developers per se.

The fallout, however, after the — sorry, the second hard fork — was that we had a consensus issue, which was definitely not what we wanted. Manually crafted tests are great, but there is no way to scale them, due to the inherent complexity of the EVM; we simply can't have enough people who know that much about it. So we wanted more coverage for Byzantium and started looking at fuzzing.

One way of doing that is to generate test cases randomly, execute them on each EVM, use the shared output format to compare the internal state after each operation, and just repeat. This can be done fairly quickly — a couple of million tests per day if you use raw binaries — and you can use these four clients.

The second track is based on libFuzzer, where we got in touch with Guido Vranken, who has done a lot of fuzzing and is a real expert on libFuzzer. libFuzzer is in the same vein as American Fuzzy Lop, the fuzzer developed by Michał Zalewski. It is a bit more sophisticated, because it uses instrumented binaries to detect the code paths taken for any given input and then mutates those inputs to maximize code coverage. And since everything is instrumented and compiled into one big binary, it is an order of magnitude faster to run these tests — it can do about 100 million tests per day.

And there was a spectacular and somewhat unexpected success in this: we have had seven or eight consensus issues found, most of them before the hard fork, one of them slightly after the hard fork. That one has been fixed, patched and released. And the clients today, I would say, are more thoroughly tested than they have ever been in the history of Ethereum.
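As an illustration of the first, random-generation track, here is a minimal sketch of differential EVM testing. It is not the production fuzzer, and the tool names and flags are placeholders — what matters is the loop: random input, run every EVM, diff the per-opcode JSON traces:

```python
# Minimal sketch of the random differential-testing track (not the production
# fuzzer): generate random bytecode, run it through two standalone EVM tools
# that emit a per-opcode JSON trace, and diff the traces step by step.
# Tool names and flags below are assumptions -- adjust for your local builds.
import json
import random
import subprocess

EVMS = {
    "geth":   "evm --json --code {code} run",
    "parity": "parity-evm --json --code {code}",
}

def random_code(n=64):
    # Purely random bytes; a real fuzzer biases towards interesting opcodes.
    return "".join("%02x" % random.randrange(256) for _ in range(n))

def trace(template, code):
    cmd = template.format(code=code).split()
    out = subprocess.run(cmd, capture_output=True, text=True)
    steps = []
    for line in (out.stdout + out.stderr).splitlines():
        try:
            step = json.loads(line)
        except ValueError:
            continue  # skip non-JSON output lines
        # Keep only fields that must agree between implementations
        # (memory and stack are ephemeral, but mismatches there are leads).
        steps.append({k: step.get(k) for k in ("pc", "op", "gas", "stack", "depth")})
    return steps

def fuzz_once():
    code = random_code()
    results = [trace(t, code) for t in EVMS.values()]
    if any(r != results[0] for r in results[1:]):
        print("MISMATCH for code", code)

if __name__ == "__main__":
    for _ in range(1000):
        fuzz_once()
```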
And we are still running the fuzzers 24/7 — millions of tests have been done with testeth and billions of tests with libFuzzer. Naturally, there can still be consensus issues or denial-of-service issues, and if that is a real concern of yours, then you should run multiple clients and try to detect mismatches. You can use the bad-block debug method in Geth to find out whether Geth has tagged one of Parity's canonical blocks as bad. The key takeaway here is that everyone here is a target for attackers — we are targets because what we are building is important. So be paranoid, be proactive, and work on improving your security, your resilience, and how you handle attacks. That's about it for me. Thank you.