Let's talk. So, Péter Szilágyi. Good morning, everybody. Unlike many of the crypto projects out there, Ethereum isn't really an end-to-end product. It has always been a platform by developers, for developers. And as such, our primary task is to make developers happy so that they, in turn, can make their end users happy. To achieve this, in my opinion, one of our most important roles is to either aid the existing developer tooling out in the community, or to fund or create new tools that the ecosystem is currently lacking. My talk will be about one such tool that we wrote. But before we dive in, let's see what the actual pain points are. Usually, when developers approach Ethereum for the first time, they start playing around with Remix, the browser Solidity playground. They get the hang of Solidity and start prototyping their contracts. Unfortunately, they quickly realize that developing in a web browser is cumbersome, so they switch to more sophisticated tools such as Truffle, which can do automated, repeatable testing, aided by proof-of-authority chains for instant transactions. Now, after developers actually finish writing their code, the usual procedure is to deploy it on a testnet, either Ropsten or an alternative. That's really nice: it's a real-world environment, many users, et cetera. The problem is that this perfect testing environment often goes belly up. One of the reasons is that the test networks can be really heavy, either due to spam attacks or due to large projects such as Raiden doing intensive tests. They can be unstable from time to time, usually because somebody figures it would be fun to reorganize the chain. And lastly, they can often be unfriendly, because you have this awesome project you want to deploy, but you don't have test ether. And that's annoying. The truth is that if you are a small project, then testing on the public testnet is fine.
But if you're a bit larger project such as Raiden, or maybe you have many projects, such as ConsenSys, or perhaps you want to run a hackathon, then the live test network isn't really ideal. So the ideal solution is actually to run your own network. Now, this might seem like a good idea at the beginning, since it's light, stable, you have unlimited funds, and you can share with everybody. But when you actually start configuring your private network, horror strikes, because it's quite a nightmare to configure: it has gazillions of different moving components. So we've been working for quite a long time on a tool called Puppeth. Our primary goal with Puppeth was actually to run the Rinkeby testnet on it, but we realized that it would be, at least we hope, an amazing tool for other projects too. So we polished it up and gave it out to the community. For the rest of my talk, I'm going to do something really crazy: I would like to demonstrate what it takes to actually create an entire Ethereum network, with bells and whistles, live on stage. First up, if you want to start your own Ethereum network, obviously you need to configure the genesis block: the initial accounts and balances, what precompiled contracts you have, and what fees they have. There are also the different forks that you need to take care of. And if that wasn't horrible enough, if you have to do this for the five primary clients, that's about 414 configuration values. And that's just a snippet from Parity; it's a really extensive list. So that's not really a pleasant experience. However, let's try Puppeth. Puppeth is a command line tool, but more of a command line wizard to help you. First, it greets you with a nice message and asks what network you would like to manage. We'll just type devcon, that's a nice network name. And then it asks: what would you like to do?
Well, we don't have a network yet, so we cannot show network statistics, but we can configure a new genesis block, so let's do that. What kind of consensus engine? We have proof-of-authority via Clique, but since we actually want a cross-client network, let's stick to ethash. Nice. Do we want to prefund initial accounts? No. And do we want a specific chain or network ID? Well, if you run a public network, maybe it's worthwhile; for now, we'll just go random. And that was about it. We actually managed to configure an entire genesis state for five different clients without doing anything. Now, of course, if you actually want to run your own network, the genesis is just step one. You will need to get some nodes online, and one of the biggest problems we see with people getting nodes online is that they have absolutely no idea what their nodes are doing. So you really need to monitor the nodes somehow, and we really want to do that via a running ethstats instance, which isn't really easy to set up, but let's see how Puppeth can help. So, what would you like to do, Puppeth asks us. We'd like to deploy a new network component. It gives us a choice; let's pick ethstats. And which server? Well, we need to connect to a server. We have a server called devcon.network, a registered domain name. Oh yeah, sorry, connect to a new server: devcon.network. And yay, the Wi-Fi works. Awesome, we managed to connect. Yeah, that's why you don't do live demos. Do we trust this host? Yes, we trust the remote host. Cool. Now, where do we want to deploy ethstats? Since we possibly want to deploy multiple websites here, let's deploy it on port 80. And do we want to share port 80 with other services? Yes, sure. When we say we want to share port 80, Puppeth will deploy a reverse nginx proxy and automatically configure everything without us having to do anything.
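For readers following along, the genesis state that step produces is just a JSON file. Here is a rough, hand-written sketch in geth's format; the chain ID is a made-up random value like the wizard would pick, the field names follow go-ethereum's genesis schema, and this is not Puppeth's exact output:

```python
import json

# Illustrative sketch of a minimal geth-style genesis file, the kind of
# artifact Puppeth generates. Values here are invented for demonstration.
genesis = {
    "config": {
        "chainId": 62017,        # a random network/chain ID
        "homesteadBlock": 0,     # activate the past forks from block zero
        "eip150Block": 0,
        "eip155Block": 0,
        "eip158Block": 0,
        "byzantiumBlock": 0,
    },
    "difficulty": "0x20000",     # low starting difficulty for a private ethash net
    "gasLimit": "0x47b760",      # roughly 4.7M gas per block
    "alloc": {},                 # prefunded accounts (we skipped funding in the demo)
}

print(json.dumps(genesis, indent=2))
```

Every client then boots from this shared file, which is why generating it consistently for five clients at once is the painful part Puppeth automates.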
And what domain do we want to host this ethstats page on? stats.devcon.network. Okay, and what's the ethstats password? Hello, hello, it doesn't matter. Yeah, it's fine. In theory, Puppeth now runs in the background and starts up the entire ethstats service. And not only does it start it up, it also lists for us that yes, we have a server connected, its IP address, and what services it's running. And now, if we load up a web browser and look at the actual domain name, then hopefully, yep, we have an ethstats instance running. And as you can see, it is actually not an image, it's a live stats page. Okay, now we can monitor the thing, but we actually have to boot up the network. So let's now deploy a boot node. Yeah, sometimes the console is funky, but it will work. Again, we get a small summary so that we know what we're up to. Let's deploy a new network component: yep, we want to deploy a boot node. And where do we want to deploy it? Well, usually you don't want your boot node to go down if somebody is DoSing your website, so let's just switch to a new server. Let's call it boot.devcon.network. And yes, we connected a new server, that's fine; we can manage multiple servers at the same time. Where do we want to store the data directory? Let's call it devcon-bootnode on the server. Which UDP port? We'll just go with the default configurations, they aren't even that interesting. And what do we want to call the boot node on the stats page? Let's call it bootnode; that seems about as dumb as it can get. And again, Puppeth does its funky magic in the background. And if we, I think we can even close this. Yep. And if we now check our stats page, then yep, fair enough, our boot node is registered. Maybe it's a bit tiny, but we have a boot node running, and it's immediately linked to the stats page; we didn't even have to configure anything. Cool. Now we have a static chain that doesn't do anything; obviously we need to mine on the chain to make it progress.
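As background, a boot node is what other nodes dial first to discover peers, and it is advertised via an enode URL of the form enode://&lt;node ID&gt;@host:port. A small illustrative parser follows; the node ID below is fabricated, and real clients also handle extras like a ?discport= query string:

```python
import re

def parse_enode(url: str):
    """Split an enode URL (enode://<512-bit hex node ID>@host:port) into its
    parts. Sketch only: no ?discport=... handling, no IP validation."""
    m = re.fullmatch(r"enode://([0-9a-fA-F]{128})@([^:]+):(\d+)", url)
    if not m:
        raise ValueError("not a valid enode URL")
    node_id, host, port = m.groups()
    return node_id, host, int(port)

# Made-up 128-hex-character node ID, purely for illustration.
example = "enode://" + "ab" * 64 + "@boot.devcon.network:30303"
print(parse_enode(example)[1:])  # ('boot.devcon.network', 30303)
```

This enode string is what Puppeth wires into the other nodes' configuration so they can find each other.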
So we can again start up Puppeth, ask it to connect to our new little network, and let's see what it takes to deploy a mining node. We want to deploy a new network component: a mining node, or sealer. Again, mining really taxes a machine, so let's put it on the miner.devcon.network machine. Yep. And where should we store the data? devcon-miner. And where should we store the ethash DAGs? devcon-ethash, yeah. The remainder, the connectivity strings, can stay simple, and we want to register it on the stats page as the miner. Now, what etherbase should, ooh, that was fancy, what address should the miner use? We'll just copy-paste one here. And what gas price and gas target? We'll just stick to the defaults, it's not that important. And again, Puppeth is pushing out the mining node to our remote server, and it should finish booting any moment. It does a quick health check; it actually checks whether the ports are reachable. We get a nice dump of everything we've configured up until now. And if we look at the stats page, fingers crossed, yep, we actually have four blocks mined already. Not only that, I'm really hoping that the boot node will connect soon to the mining node, but since time is limited, we'll just check it in the next slide. Yep, they've connected. Cool. So now you actually have an entire network up and running. But if you tell your friends, hey, I have this awesome network, they will just go: okay, and how do we use it? And then you kind of start scratching your head. It would be really nice if, first things first, you could see what the network is doing. For that, obviously, we need some kind of block explorer. Currently, for example, there is no open source block explorer that supports go-ethereum as a backend; however, there is one that supports Parity. Now, can we actually deploy a Parity node easily? Well, sure, let's try.
We want to deploy a new network component: let's do an explorer. Where do we want to deploy it? Well, we'll put all our websites on the same machine, so let's use the devcon.network machine. Yes, we want to share port 80, and let's use explorer.devcon.network. Where should the data be stored? devcon-explorer, and the defaults. Let's call it explorer on the stats page. And again, it's pushing it out, fairly fast hopefully; we're just waiting for the node to finish booting. Yeah, I'll just close it. And let's see, does it actually work? Fingers crossed. Yep, we have our block explorer up and running, and we already have 11 blocks mined. And we can check that our miner already has 42 ether. Oh, that's a nice number. And of course, if we check the network stats, then again, we already have three machines running without actually configuring too much. Okay, now we know what the chain is doing, from the inside and from the outside. Can we ask our friends to use it? Well, sure, but it's kind of hard to use via a plain client, so let's try to give them a web wallet. Now, of course, everybody knows the most sophisticated web wallet currently out there is MyEtherWallet, so let's just deploy MyEtherWallet onto our little custom test network. Let's deploy a new network component: we want to deploy a wallet, onto our website server. Yes, we want to share port 80, and call it wallet.devcon.network. Where should we store the data? devcon-wallet, and we'll just spice up the defaults a bit. And what should we call our wallet on the stats page? Since it also runs a backend node, let's just call it wallet. Again, Puppeth is configuring everything in the background for us and pushing out the data. And if we check our wallet now, this is the one that usually takes the most time to boot up. Yeah, boom, we have our wallet. And as you can see, it is actually configured for the devcon network; it deployed the backend node to connect to.
We have the front end, and everything seems to work nicely. Cool. So it was easy enough to deploy all these components, but if you share this with a friend, then he'll say: okay, I want ether. And you will be the one who has to give them ether all the time, which gets boring really fast. So we really need a faucet that you can just start up, that just runs there, and everybody can request ether from it. To do that, go-ethereum actually has a faucet built in, based on the light client. So let's ask Puppeth to deploy that. Yep, we want to deploy a new network component, and let's do the faucet, the sixth on the list. We want to deploy it again onto our web server, where we deployed everything else. Oh yeah, sorry, port 80, shared, and then faucet.devcon.network as the domain name. Oop, thank you, ooh, nice. Okay, how much ether do we want the faucet to release? Well, one ether per 15 minutes is fine. We want three tiers: if you wait more, you get more ether. Do we want reCAPTCHA protection against bots? No, it's a test network here for DevCon; we don't care about robots. Where should we store the data? devcon-faucet sounds about right. And let's just pick a different port for this one. What should the faucet be called on the stats page? Faucet, of course. If I run a faucet, I do need a private key so that the faucet has something to fund requests out of. So we have a private key pasted in here, and we have to unlock it for the faucet. Boom, it's unlocked. And do we allow non-authenticated requests? Well, since it's DevCon and we don't care about the lifetime of this whole network so much, yes, we allow anyone to request funds. And let's deploy it. I'm really, ooh, it managed to deploy, that was nice. And now comes the moment of truth. As you can see, again we have a nice dump of all the configurations. Now, can we actually load up the faucet? Yep. Ooh, why is it?
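The tier scheme described above (one ether per 15 minutes, three tiers, wait more to get more) can be modeled with a toy function. The exact scaling factors of the real geth faucet may differ; the doubling of the payout and tripling of the wait below are purely an assumption for illustration:

```python
def faucet_tier(base_ether: float, base_wait_min: int, tier: int):
    """Toy model of tiered faucet payouts: each tier doubles the payout
    and triples the waiting time. Assumed scaling, not geth's actual one."""
    return base_ether * 2 ** tier, base_wait_min * 3 ** tier

for tier in range(3):
    eth, wait = faucet_tier(1.0, 15, tier)
    print(f"tier {tier}: {eth} ETH every {wait} minutes")
```

With these assumed factors, tier 0 pays 1 ETH every 15 minutes and tier 2 pays 4 ETH every 135 minutes; the point is simply that patience buys a bigger drip.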
Yep, we have a DevCon authenticated faucet. And if we request funds now into an ether address, give me one ether: yep, the faucet accepted our funding request, and if the miner manages to mine a few blocks, yep, we just mined it and we have our account funded. We can also check the stats page: everything that we deployed until now has indeed appeared there. Now, finally, you've deployed everything, but you don't even know what your genesis block is, you don't know how to connect to it, and it's just a mess of different websites and different domains. So you really want to present everything on a single host, or at least in a single place, where your users can find it. And the way to do that is to have a nice dashboard. That is actually the last thing that Puppeth can currently do, so let's just try it. Yep, deploy a new network component, and the final piece of the puzzle is a dashboard. Let's deploy it to our web server, and let's use the root domain, devcon.network, for it. Then it asks which services I want to list. Yep, I want to list the stats page, I want to list the block explorer, I want to list the web wallet, and I want to list the faucet. And do we want the ethstats secret to be public? Yes, let's make it public. And Puppeth again crunches everything, makes the configuration files, deploys a web server for us, and if everything worked correctly, I should be able to load up a nice dashboard. Yep, and there we have the front page, with ethstats on the left-hand side. We have a nice sidebar where we can switch between the ethstats page, the block explorer that we just configured, our web wallet that we can play around with, and the faucet where people can request funds. Amazing.
And besides all of these services that we configured, we also have a detailed guide on how to connect go-ethereum as an archive node, full node, light client, or embedded in embedded machines. We have details on how to connect Mist and Ethereum Wallet. We have details on how to connect Android and iOS devices. And finally, if you really don't prefer go-ethereum as your client of choice, then we also have details on how to connect C++ Ethereum, Ethereum Harmony, Parity, and pyethapp. And ladies and gentlemen, that was the Puppeth network manager. Thank you very much. Good morning. Let me see, where are we? Next up, we have Marcus Ligi with his presentation, an introduction to WALLETH, the Ethereum Android wallet. No, that's not my screen. And that's the wrong presentation. But my screen is connected. Yes, that's the right one. Yes. Greetings, each and everyone. Great to be here: bright sun, bright minds, what a great combination. In this talk, I want to present to you WALLETH, the Android Ethereum wallet. And we will focus on three things: the why, the what, and where you find it, basically. Like many of you, I'm really fascinated by the idea of a world computer. I love this world, I love computers, and the combination is really awesome. And I also think that we can protect this earth by managing its resources way better by using Ethereum. As we are at an Ethereum conference, I don't have to preach to the converted and tell you why Ethereum is awesome. I just want to add one thing. What I really like is the upgrade path to proof of stake, so that we don't have to burn so much energy to have consensus, so that we can have it without hurting the environment so much. I really love Bitcoin for the spark it brought to the world, but I really don't like it for how much energy it consumes. And I want to quote the great Greg McMullen from a conference this year, because it's often forgotten: blockchain is about people.
The technological parts are really exciting, the tools are powerful, the engineering challenges are huge, but first and foremost, everything we build is about people. One really nice way to bring Ethereum to people is Android. This is a screenshot from this year's Google I/O: there are two billion monthly active devices, so it's a really nice vehicle to bring Ethereum to these people. And it's also good for emerging countries, because often people there don't have PCs anymore; they use phones. I've been doing mobile development for such a long time; I have a really dark history with Java Micro Edition, and Android was basically my savior and solved a lot of problems, so I'm a really huge Android fanboy. One of my apps is a passbook app for Android, and there is a nice connection to Ethereum there, because if you treat tickets not as passwords, as is currently the case, but as tokens, you get really, really nice properties. I gave a talk about that at the Ethereum office in Berlin; if you want to look that up, please do. But when implementing it, I found out there were some building blocks missing to build what I wanted to build. So that's why WALLETH: I needed to build that first to basically get where I want to go. And why? One big reason why I didn't want to use the wallets that are out there is that they didn't fulfill my constraints. The first and most important constraint I had for a wallet is that the keys have to be in the hands of the user and not on some server, because otherwise I think it doesn't really fulfill the purpose. And I could not even sleep having all these keys from other users, and I really like to sleep. You'd think it's quite obvious that the keys should be with the users, but then you see the most used wallet currently looks something like that, and you obviously see the keys are not with the users, and then you basically have centralization again, and that's really not what we want.
And I also think it has to be libre software and not closed source software, because even if the keys are in the hands of the users but you have closed source software, the keys are not really in the hands of the users. And it's not enough to open only small parts, like some apps do; the full app has to be open. And I want to go even further: not only does the app have to be open source, the platform has to be open too, because it's basically the weakest link in the chain. On iOS, even if the app is open, Apple could steal all your keys. And on Android there are nice movements to really open up everything, for example Purism, a phone with no closed blobs, or nice movements like the Fairphone. So let's dive into the app a bit. That's how you're greeted, because I think the user experience is really important and often missing in tools today. So basically I guide the user so that he can first get his funds. And after pressing there, you see an ERC-67 QR code, so you can transfer funds to it, for example from the nice go-ethereum faucet. And then, after you transferred something there, you see the incoming transaction. And you might have wondered: a lot of other wallets ask you for a password first. I think that's a stupid thing, because it really hurts the user experience; often you just want to try things out at the beginning. You don't have accounts that hold huge value, you just want to try things out, but then things force you to enter a password. I think the user story should be different: you should be able to just try things out, and when the account gets real value, then you secure it. And then I even think you shouldn't secure it with a password, but you should use a real hardware wallet, because that's real security. Because a password, we can discuss it in a break, doesn't really add much, especially on Android, where you have a sandbox.
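The ERC-67 QR code mentioned above simply encodes a payment URI of the form ethereum:&lt;address&gt;?value=&lt;wei&gt;. A minimal parsing sketch follows; the address and amount are made up, and real implementations also validate address checksums and handle further parameters such as gas and data:

```python
from urllib.parse import parse_qs

def parse_erc67(uri: str):
    """Parse an ERC-67 style payment URI like
    ethereum:0x<40 hex chars>?value=<wei>. Simplified sketch:
    no checksum validation, no 'data'/'gas' handling."""
    scheme, rest = uri.split(":", 1)
    if scheme != "ethereum":
        raise ValueError("not an ethereum: URI")
    address, _, query = rest.partition("?")
    params = {k: v[0] for k, v in parse_qs(query).items()}
    return address, params

# Fabricated address; value is 1 ether expressed in wei (10**18).
uri = "ethereum:0x" + "11" * 20 + "?value=1000000000000000000"
addr, params = parse_erc67(uri)
print(addr, params["value"])
```

Scanning such a code therefore gives the wallet everything it needs to prefill a transaction, which is why raw addresses without the URI wrapper are such a pain.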
You really need that on a PC, where every app can see the data of every other app. But on Android, all the apps are sandboxed, so a password doesn't really help and often gives a false sense of security. Another thing you see in the app is the scan button. It's the floating action button from Android's Material Design, and it brings you to different places. You can scan ERC-67 codes, for example from another app. You can import a JSON/UTC keystore file, a raw key, or a plain address. Unfortunately, a lot of people don't use ERC-67, so raw addresses are really a pain, but they're out there. And you can scan signed and unsigned transaction RLP, which helps for offline transaction modes; I will talk about that a bit later. A bit more filled, it looks like this. Basically, on the left is the incoming row, under the incoming button, and on the right is the outgoing row, because I like there to be a balance: I think value should not pile up, value should flow in and flow out. I'm breaking a bit with the user interfaces of other wallets there, but I think that's okay. I try to keep a concise visual language. There you see different transaction states; you saw them before in the icon. There are unsigned transactions: when you don't have a key, you can have watch-only accounts. There's signed but, for example, not yet transmitted, because you're offline; you can even delete those, but only locally, as they might already have been sent. And full black is signed and confirmed on the blockchain; those you cannot delete, for sure. Let's look at the navigation drawer. You can edit your account, very important: your keys, your account; not your keys, not your account. Import and export keys, some settings, offline transactions, very important, and debug, at the moment. It's alpha, so debug is very important: you can see the go-ethereum output there. You saw this left icon before; I don't really use and like the blockies icons, so I really want to use Péter Szilágyi's flame IDs.
But it's the same story with missing building blocks, because calculating these flame IDs takes seven seconds on a 1070 GPU. So we need TrueBit and similar things first to make that secure, because I cannot calculate that on the phone. Yeah, here you can edit your account: basically, with the camera, add a watch-only account, or use a generated key. And TREZOR, very important: I think this is the first Ethereum wallet with TREZOR support. I think hardware wallets are the key to security and are also a nice metaphor for users, because they know that if they have a hardware wallet, it's really their key. Because if the key is merely on your device, you're not really sure you're in control; the hardware wallet can be a nice UX metaphor. I tried to use common symbols, these are from the Material icon set, to be consistent all over the app, so that you can educate users and they have a nice experience: scan a code, offline transactions, TREZOR, share, private key, and copy content. It's the same all over the place. That's how it looks when you add a TREZOR device. Then you have to enter your PIN. I don't know how many of you have used a TREZOR so far: you don't see the numbers here, because the numbers show up on the TREZOR, so that your device can never access the PIN. You can, and that's optional, add a password, and then you also have plausible deniability, a really nice feature from TREZOR. Then you can select an account from the TREZOR via the derivation path. I have to speed up a little bit, sorry for that. Yeah, you can export the keys. Import, export keys, always check for that. And here, very important, settings. In my apps I always use the day and night theme, because you don't want to be blinded at night, and at day you want to be able to read something; so in all of my apps I use day and night. And at the bottom you see you have the option to enable the light client.
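Selecting an account via a derivation path refers to BIP-32/BIP-44 style paths such as m/44'/60'/0'/0, where coin type 60 is Ethereum and an apostrophe marks hardened derivation. A small sketch of how such a path decomposes into child indices:

```python
HARDENED = 0x80000000  # hardened derivation offset, 2**31

def parse_derivation_path(path: str):
    """Turn a BIP-32 style path like m/44'/60'/0'/0 into child indices.
    A ' suffix marks hardened derivation (index + 2**31). Sketch only;
    real wallets also validate index ranges and accept an 'h' suffix."""
    parts = path.split("/")
    if parts[0] != "m":
        raise ValueError("path must start with m")
    indices = []
    for p in parts[1:]:
        hardened = p.endswith("'")
        idx = int(p.rstrip("'"))
        indices.append(idx + HARDENED if hardened else idx)
    return indices

# m/44'/60'/0'/0 is the common Ethereum account path (BIP-44, coin type 60).
print(parse_derivation_path("m/44'/60'/0'/0"))
```

The hardware wallet walks this chain of indices from its master seed, which is how the same TREZOR can serve many accounts without ever exposing a private key.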
I thought in the beginning you would always have to have a light client, but light clients unfortunately are at a very early stage and not really usable. They make your phone really warm; that might be nice in Berlin, but it's really bad here, where it's so warm. So at the moment I have it as an option, but later on I want to have it on by default. Currently it doesn't give a good user experience, and I want to get the user experience nice first; later on, when light clients are really, really usable, I'll make it the default. Yeah, these were the problems, but big hands to the Geth team, because they are awesome: I report a lot of bugs, and they get fixed. Yeah, that's really good. And we need these light clients, because currently, with the world leaders we have, it's crazy what world we're living in, and the problem is we have a lot of centralization in Ethereum too, and we need to fight that, because all the infrastructure we currently have and take for granted could be gone in a second. It's really bad. And also, please, I want you all to survive, so please install the offline survival manual, just in case. Yeah, and also, please, everyone, activate light serving, because at the moment light clients also suffer from the fact that there's no incentive structure yet, and people don't activate it. Please activate it on all your nodes so we have a better experience, especially on testnets. So, I have to skip a bit of stuff. On to tokens: I support ERC-20, and all the token metadata is coming from MyEtherWallet. If you want to add your own token, please make a pull request against this repository. And a big shout out to the MyEtherWallet team; it's just awesome what they are doing there. What you see here, yeah, clap for MyEtherWallet. Yes, sorry, but we have to hurry; we should clap longer, but we have to hurry. Here, very important: the keys are in the user data directory, and the cache has all the light client data, and you see how big the light client data is.
It's like 500 megabytes. So you can safely delete the light client data, but please never delete the user data, because those are your keys and I don't have them. I cannot recover them, even if you ask me; I have no chance, I don't have them. Very important: where to get your stuff from. There's a trade-off between convenience and decentralization. Google Play is very convenient but very centralized and not free. GitHub is, let's say, also centralized, but I mean, it's always better to build from source. And something in between is F-Droid, also a really nice project; it's in between, basically. And please don't get it from the ENS name, because I didn't get the ENS domain; it's a very sad story. On to Kotlin: everything is written in Kotlin, and there is Kotlin Ethereum stuff emerging. I wrote a library, KEthereum, a Kotlin Ethereum library. And I have to stop now. I have a breakout session, and Android Things. Yeah, I have a breakout session at 16:50 in the breakout room, so I hope to add some stuff then and there. Thank you very much. Here's the follow-up; ligi is my handle. Okay, thank you. Thank you, Marcus. Next up on stage we have Jarrad Hope with a presentation on Status: Ethereum at the edges of the network. Yeah, hi, I'm Jarrad. I am the main organizer and thought leader of an open source mobile Ethereum client called Status. Show of hands, who's actually heard of us? Amazing, cool. For those who don't know, Status is something like a hybrid of an instant messenger and a mobile DApp browser. And really, we have one goal, and that's to take Ethereum technologies and put them in the hands of people. What I love about the Ethereum community is that we've taken this fervor for crypto but tempered it with pragmatism. And this has made blockchain technology palatable for organizations, banks, and governments alike. However, this pragmatism has been somewhat of a double-edged sword when it comes to taking this technology and putting it in the hands of people.
And I'd like to start off with a short story that I think illustrates this point. On October 1st, Catalonia, an autonomous community of Spain, held an independence referendum. Leading up to the event, they used a superb decentralized file storage technology called IPFS to organize their vote. However, non-technical people were using a centralized HTTP proxy and accessing this information through the comfort of their own browsers. In the lead-up to the event, on September 26th I believe it was, Spain issued a blockade of 140 domains, one of which was this gateway. Politics aside, disrupting a vote shouldn't be that easy. At the same time, the Tor Project saw a surge in downloads of their messenger; this was the Catalans hardening their communications. And one of the many things we can applaud the Tor Project for is that they understand they need to package their sophisticated software and make it easy for the average user to use, without compromising on the integrity of that software, because when push comes to shove, decentralization matters. So how we package, disseminate, and present these technologies to the end user enables them in ways we haven't been able to before. And this is really the core problem that we aim to solve at Status. Since our last DevCon, we've spent a lot of time thinking about the overall user experience and design of the perfect Ethereum client. And nothing matters more than that first run. Together with our community, we've mapped out what we think is the typical emotional narrative that a new person coming into the crypto world is likely to experience. And on that first run, we want to make sure they feel safe, in control, and not overwhelmed, while at the same time connecting them with their goals and allowing them to explore Ethereum how they wish. At the same time, we don't want to overwhelm them or demand information from them unless it's absolutely necessary.
For example, we don't need them to back up their key phrase until they actually have real value in their account, at which point backing up the key phrase does become completely vital and they have a reason to do so. That's when we educate them. We're also introducing an omnibar in Status, which means you can access anything you want to do with Ethereum within just a few taps, whether that's accessing DApps, finding your friends, opening new tabs, or signing a transaction. We've also worked together with our community to develop what I think is one of the most visually stunning and intuitive wallet experiences, one that frames your digital assets perfectly. We've also wanted to make signing transactions as unintimidating as possible, while at the same time bolstering the protection against phishing attacks with a signing phrase of three words. While building out that signing phrase, we actually found out that many of our users want to store and control, from their mobile phones, sums of value much larger than what would fit in their normal analog wallets. And we realized that a software key pair is just not going to cut it. So today I'd like to introduce you to our new initiative, the Status Hardware Wallet. This is an open source JavaCard that allows you to take the trust, safety, and security of a hardware wallet and use it on the go. It has two modes of operation. In the first, you can sign transactions of arbitrary size directly on the card. However, this requires vendor-specific hardware, namely support for Keccak-256 and proper EC point multiplication; it is in the JavaCard 3.0.5 spec, but finding a vendor that properly supports it is a somewhat interesting problem. We also want Bluetooth as another communications protocol, alongside NFC and contact interfaces. The second mode of operation signs transactions generated off-card. All signing is PIN-bound. However, we do support key pairs and HD wallets.
The HD wallets have two extra features: the first is to store and export your Whisper identity, and the second is to make one derivation path PIN-less. That account therefore becomes balance-bound, but it allows you to have frictionless transactions.

But signing transactions doesn't really mean that much unless you've got someone or something to send transactions to. And so with Discover, we really want to connect you with people and DApps and communities. We're solving a search problem in a decentralized manner, and this is what we do with Discover. Discover is basically a naive epidemic protocol in which users publish public statuses with the use of hashtags, and these are propagated to their friends. Every user collects a cache of the statuses they've seen, periodically generates a preference list from that cache, and shares that subset with all of their contacts. The preference list is generated from a bunch of different weighting factors: for example, whether you're mutual friends, how recent the status is, whether they're online, whether you've been chatting with that person or interacting with that DApp for a long period of time, and whether you trust them. And in Status, we're actually going to be building multiple layers of trust. This is only the first layer, which is basically automated, because when a status is shared, it is signed by the propagator. This allows us to build chains of propagation, to see how far something has propagated and by whom, and the same goes for moderation and reporting.

As you may know, Status is built entirely on Ethereum protocols, and therefore we use Whisper as our messaging transport. Whisper has amazing privacy features built into it. However, it isn't without its problems, and one of these is essentially that both peers need to be online when communicating. In normal usage, a sender will send their message to a recipient.
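Stepping back to Discover for a moment, the preference-list scoring described above might be sketched like this. Everything here is my own illustration: the field names, weights, and decay are hypothetical, not the actual Status protocol.

```python
import time

# Hypothetical weighting factors for the Discover preference list;
# the talk names the signals but not the actual weights.
WEIGHTS = {
    "mutual_friend": 3.0,   # mutual friends rank higher
    "online": 1.5,          # currently-online contacts rank higher
    "recency": 2.0,         # more recent statuses rank higher
    "interaction": 2.5,     # long-running chats / DApp use rank higher
    "trusted": 4.0,         # explicit trust weighted most heavily
}

def score(status, now=None):
    """Score one cached status for inclusion in the preference list."""
    now = time.time() if now is None else now
    age_hours = (now - status["seen_at"]) / 3600.0
    s = 0.0
    s += WEIGHTS["mutual_friend"] * status.get("mutual_friends", 0)
    s += WEIGHTS["online"] if status.get("online") else 0.0
    s += WEIGHTS["recency"] / (1.0 + age_hours)  # decays with age
    s += WEIGHTS["interaction"] * status.get("interaction_hours", 0) / 24.0
    s += WEIGHTS["trusted"] if status.get("trusted") else 0.0
    return s

def preference_list(cache, k=10, now=None):
    """Pick the top-k cached statuses to share with all contacts."""
    return sorted(cache, key=lambda st: score(st, now), reverse=True)[:k]
```

The epidemic part is then just each node periodically sending `preference_list(cache)` to its contacts and merging what it receives back into its own cache.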
It'll bounce around the network, eventually arrive at its destination, and the recipient will send an acknowledgement of that message. However, if the recipient is offline, the sender will send out the message and periodically resend it. The recipient is unaware that a message is waiting for them, and when they do come back online they have to wait for the sender to rebuild their chat history. So we are introducing status nodes, which act as offline inboxes for Whisper, as well as helpers for external services such as push notifications. It basically works on a promise-challenge-deposit scheme, and it allows the sender to send these messages out; they get collected in these nodes, and the recipient can then be informed, come back online, and rebuild their history from a node, even if the sender is not available.

Now, you may be wondering who is going to run these nodes. Of course, if somebody has a server, they can run one in a headless way. But really, we think we can do a bit better than that: we actually think you can do it, because we're expanding our platform reach. We're not only targeting Android and iOS anymore; we're also targeting Linux, macOS, and Windows. And with this, we're going to make it dead easy for you to set up a status node and integrate with external services, whether those are other chat protocols or push notifications. At the moment, we have a friendly internal competition. We build Status on a single code base, and that rests on top of React Native. The more developed desktop version is actually using React Native Web. The other one is a fork of a canonical project where we're building React Native for desktop from the ground up, based on Qt, and currently we have 60% component coverage on that. So, exciting times for us. In addition to this, another problem that we've really faced in growing our organization is that the talent pool in the crypto community is exceedingly small.
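The offline-inbox flow just described can be reduced to a toy store-and-forward sketch. The class and method names below are illustrative only; the real Whisper mailserver exchange involves envelopes, topics, and the promise-challenge-deposit scheme, none of which is modeled here.

```python
import time
from collections import defaultdict

class StatusNode:
    """Toy offline inbox: stores messages for offline recipients and
    replays them on request, so the recipient can rebuild history
    even when the original sender is no longer online."""

    def __init__(self):
        # recipient id -> list of (timestamp, payload)
        self.inbox = defaultdict(list)

    def store(self, recipient, payload, ts=None):
        # The sender drops the message here even if the recipient
        # is currently offline.
        self.inbox[recipient].append(
            (time.time() if ts is None else ts, payload))

    def replay(self, recipient, since=0.0):
        # The recipient comes back online and asks the node for
        # everything it missed since a given timestamp.
        return [p for (t, p) in sorted(self.inbox[recipient]) if t >= since]
```

The key property is that `store` and `replay` happen at different times, decoupling sender and recipient availability.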
So we want to take the strengths of being an open source project and help incentivize contributions, and this is what we're doing with Open Bounty. It allows you to take any GitHub issue and create a bounty for it that anyone can then contribute to, whether in ETH or in any ERC-20 token. But we're actually taking this a step further than the general mechanism, which we've had for a while: we're building out our talent scouting and human resources around it, so we can help other decentralized organizations build their software just like we are. In fact, we also have a million-dollar bounty coming up to help other organizations get involved. So please come join us at openbounty.status.im if this is interesting to you.

In terms of our next steps, we now need to focus on optimization. We're undergoing security audits, which will allow us to move into production. We're also supporting identity standards, and we're experimenting with the Swarm messaging service, PSS, for more convenience. So that's it for me. Thank you so much, and I hope you're all using Status.

Okay, next up this morning we have Felix Lange with Evolving devp2p.

Hi, guys. So many people here. I'm Felix. I work for the Ethereum Foundation on the go-ethereum client. My role there is mainly bug fixing and feature development, though I guess there's a lot less time for new features lately. My passion in this project has always been taking care of the way that nodes talk to each other, and this is also what I'm talking about today. So devp2p came into existence about three years ago, and at the time, the vision was, as you can read there, to provide a lightweight abstraction layer that provides low-level algorithms, protocols, and services in a transparent framework. So it was a pretty grand vision.
In 2017, though, devp2p is just this thing that you need to implement to talk to the Ethereum blockchain, and it's part of all known Ethereum implementations. All of the six or seven implementations that are live on the network have an implementation of devp2p and all of the stuff that's in it. And there have been very few actual protocol changes since 2014; if you wanted to, you could count them on one hand, basically.

devp2p has a bunch of elements to it. The first one is the node discovery protocol, which is a way of finding other nodes to talk to. Then there is the RLPx transport protocol, which is what is spoken on the TCP connections between nodes. And finally, there's an application-layer protocol that sits on top of the RLPx transport, and this one is somewhat confusingly also called devp2p. So both the overall system and this particular protocol are called devp2p.

Let me just walk you through the protocol that's in use on the network today, and then maybe we'll come to the part that can actually be improved. So this dark circle there, that's us: a node that wants to connect to the Ethereum network. How do we do that? We'll join the DHT first, and the DHT is basically the part where you can find the other nodes that are on the network. So there are some other nodes, and they are also registered there. Then, basically, we walk the DHT at random to find someone to connect to, and we try to establish a TCP connection to them. The TCP connection might actually fail; this happens quite a lot, because the node might no longer be live, or it might be too busy handling other connections. But let's just assume that it works this time. Once the connection is established, we exchange capabilities with the other side. This is now the part where the devp2p application layer kicks in. And in this particular case, we see that the shared capabilities are...
There's just one shared capability, and it's the eth capability in version 63. So now, once eth version 63 is running, we can exchange information about the blockchain that we're both on. And once that matches, we have a new peer. So that's the current system, and it's not super efficient. There's a whole bunch of details that I haven't really talked about, but I guess you get the idea.

So what can be improved about this? First of all, it's kind of annoying that there are so many round trips just to figure out whether someone is on the right blockchain. I guess that's kind of obvious, and it would be really nice to just know that before even connecting, maybe. Another issue is that the whole system is basically frozen: making any change to any of the protocols requires really tight coordination, requires implementation consensus, and any change that we make needs to be backwards compatible. To achieve upgrades at all, what we've done in the past is we've made all the upgrades backwards compatible, and we've tied them to Ethereum mainnet hard forks, because everyone has to upgrade their node anyway. Then, once the hard fork is successfully launched, we can start phasing out the old stuff and speak only the new protocol. But hard forks don't happen all that often, and we'd rather make changes on an accelerated schedule, which really isn't possible this way. And finally, because node discovery only relays information about the RLPx protocol, there's really no room for experimentation, because we're essentially stuck with RLPx and the crypto system that it uses.

Improving these things is the Node Discovery version 5 effort. And with Node Discovery version 5, we want to achieve two things in particular.
The first one is, we'd like to be able to find nodes more efficiently. And the second one is, we'd like to know more about those nodes before we even connect. The first part of our solution to these issues is called ENR, which stands for Ethereum Node Records. Earlier in the v4 overview, you saw that the DHT holds all those enode addresses, and an enode address is really just a public key, an IP address, and two ports. Ethereum Node Records, by contrast, can hold arbitrary information about a node; that's the main difference. This arbitrary information can be information about the capabilities of the node, about other transport protocols spoken by the node, initial key material for those transports, anything really. As long as it fits into 300 bytes, almost anything can be relayed there. The limitation of 300 bytes is important because ENR is a separate format, a separate spec even, and it's not at all connected to the DHT, so you can relay those records through any other means if you want to, including, say, a DNS record or something like that. And finally, node records are signed and also versioned: if you have two versions of a record that describe the same node, you can determine which one is newer, for example.

We think that ENR is a good solution to the transport problem because, again, information about arbitrary transports can be relayed through it, and because there's just a lot more room to put information about anything. We still need implementation consensus, though, because in order to be able to talk to everyone, everyone has to agree on the language they're using when speaking to each other. And it is very likely that, for a considerable amount of time, the lowest common denominator will be RLPx.
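The record semantics just described, arbitrary key/value pairs that are signed, versioned by a sequence number, and capped at 300 bytes, can be illustrated with a toy sketch. This is not the real ENR wire encoding (the spec had not been published at the time of the talk), and the hash-based "signature" here is a stand-in for a real cryptographic signature.

```python
import hashlib
import json

MAX_RECORD_SIZE = 300  # ENR caps the encoded record size

class NodeRecord:
    """Toy node record: arbitrary key/value pairs, a sequence number
    for versioning, and a stand-in signature. Illustrative only; the
    real ENR spec defines its own encoding and real signatures."""

    def __init__(self, seq, pairs, secret):
        self.seq = seq
        self.pairs = dict(pairs)  # e.g. {"ip": ..., "tcp": ..., "cap": ...}
        body = json.dumps({"seq": seq, "pairs": self.pairs}, sort_keys=True)
        if len(body.encode()) > MAX_RECORD_SIZE:
            raise ValueError("record exceeds %d bytes" % MAX_RECORD_SIZE)
        # Fake signature: a real record is signed with the node's key.
        self.signature = hashlib.sha256((secret + body).encode()).hexdigest()
        self._body = body

    def verify(self, secret):
        expected = hashlib.sha256((secret + self._body).encode()).hexdigest()
        return expected == self.signature

def newer(a, b):
    """Given two records for the same node, keep the higher seq number."""
    return a if a.seq >= b.seq else b
```

Because the record is self-contained and signed, it can be handed around through any channel, a DHT, a DNS record, or anything else, and the receiver can still pick the freshest valid version.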
But eventually, once ENR is launched, we can actually try out different transports, find a viable alternative, and then maybe at the end of 2020 delete RLPx. In order to get ENR launched, though, we need to upgrade the discovery protocol. This is something that needs to happen once, and it will be a backwards-incompatible upgrade. So likely, the way this will work is that the Discovery version 5 DHT will be a totally separate DHT that runs in parallel to the current system. And because this is kind of a unique opportunity for us, in addition to including support for ENR, we want to make a bunch of other changes to the protocol.

In particular, one problem that the v4 discovery protocol has is its reliance on absolute time. Maybe some of you have actually experienced this: if your clock is off, let's say by two minutes, you won't really be able to connect to the network. We've worked around this by alerting users when their clock seems off, but that's a really ugly workaround, and we might just fix it in the protocol this time. Also, with many of the nodes in the DHT, you'll find that the information listed there isn't very accurate, so you might not be able to connect to a node at all even though it's listed. In v5, we want to introduce the concept of endpoint proofs, where the DHT ensures that if a node's record is listed, there's a pretty fair chance that you'll actually be able to talk to it.

And finally, we have another improvement, which can be considered an extension of the DHT protocol, and this one is about finding nodes more efficiently. In a classical DHT, so to say, the DHT is an index of nodes by their public key, and it maps public keys to node endpoints. But the nodes in the DHT are not in any kind of useful order.
So you'll find an Ethereum mainnet node next to an Ethereum Classic node, next to a node that doesn't really participate in the Ethereum blockchain at all, and there isn't really an efficient way of knowing about just the nodes that you care about. Contrasting that, the topic index that we have in mind is sort of an index of all the nodes by the topic, or the service, that they are providing. I don't have a lot of time to really go into the detail of how this works, but I can give you an overview of the design constraints that we set when making this protocol.

The first big constraint is that we don't want to split up the DHT, because with a DHT, bigger is better: fundamentally, the security of a DHT depends on the number of participants, so you always want a very large number of them. Another design constraint is that topics have to scale to an arbitrary number of participants. There might be topics that everyone in the whole network is advertising, but on the other hand, there might also be topics that are very small and advertised by only, let's say, five or six nodes. These topics shouldn't compete with each other, so you should be able to resolve both equally quickly. And finally, with all of these systems, there's always the danger of people spamming the index with arbitrary registrations that nobody cares about, and those registrations should not drown out the actual useful ones that you really want to see. To combat many of these attacks, the topic advertisement protocol includes this thing called advertisement inertia: an artificial delay, enforced by the protocol, before a registration for a topic can go live. We feel that in addition to combating these attacks, this also reduces misuse of the topic index.
Because fundamentally, topics are meant to be used for announcing big decisions way ahead of time. Those big decisions can be something like which blockchain you're on, or which shard of the blockchain you're on, things like that, and not so much, let's say, the URL of the video chat that you're just starting. We really want topics to be used for facts that have a bit of a bigger meaning.

So to recap: Node Discovery version 5 is about finding nodes more efficiently and knowing more about those nodes before we connect. A prototype of the system, although without ENR, has been in use by the Geth light client since early 2017. We're still working on the EIPs, so nothing is published yet, but once it is, we will have a separate spec for ENR, a document that goes into detail about the semantics of the topic advertisement protocol, and finally a description of the actual wire protocol that's spoken via UDP. But nothing is set in stone yet, so if you feel there is a certain change that absolutely has to be made, or a certain feature that should really be included, or not included, just come talk to us. That's it.

Up next on stage, we have Paweł Bylica with EVMC, a Portable API for Ethereum Virtual Machines.

Hello, everyone. So, I'm Paweł Bylica, and this talk will be about the EVM and EVMC, which is a portable API for the Ethereum Virtual Machine. I'm a software developer, specialized in C++, and currently I'm working mostly on the cpp-ethereum project. I'm also the author of EVMJIT, which is an alternative EVM implementation that translates EVM bytecode to native machine language. And I also tried to come up with the API for the EVM that is called EVMC. This talk will have two parts. In the first one, I would like to explain what exactly I am talking about and what I mean by EVM API and EVM interface.
In the second part, I would like to show what has been done so far and what we want to do in the near future, and explain some of the design decisions we've made, so that you can better understand why things look the way they do at the moment.

So, the EVM, the Ethereum Virtual Machine, is one of the most important components of Ethereum software and the Ethereum ecosystem. It is, in short, a virtual machine that can execute small programs, or small scripts, called smart contracts. And where is it? Every Ethereum client, at least every full node, has an EVM somewhere inside. Usually there is one implementation of it, but we have some examples where clients actually have more than one, and cpp-ethereum is one of the examples here. The problem I would like to address is the fact that EVMs are embedded inside the client. You have more or less indirect access to the EVM, through JSON-RPC, test RPC systems, VM tracing, storage backlogs, and so on. What I would like is for this composition to look more or less like this one: this thin layer that connects EVMs to Ethereum clients should be very well specified, very well documented, and also usable from different programming languages. What we can also add to this scheme is the ability to plug the same EVM implementation into different clients.

Okay, so EVMC is one possible solution to this problem, and it's exactly what I meant before: an EVM API that uses the C language to connect these two now-separated components, the EVM and the Ethereum client. Why was the C language chosen? Not because it's the most beautiful one, but it happens that C is accessible from many programming languages. The obvious examples are C and C++, but in most popular languages that are around, you can at least use some C libraries and call functions from them.
I tried all this with Go using the cgo tool, and also in Python using the cffi library, and I'm sure there are other examples where you can at least use C libraries in more high-level, more abstract languages. The second important part is that we want to have polymorphic interfaces: we would like to be able to switch between different EVM implementations at runtime. Not that we build a client and decide at build time which implementation to use; we want a switch that the user can actually use to decide what kind of backend they want for their task.

The third important consideration behind the design decisions is composability. Composability means we can do something like this: having some concrete implementations of the EVM, we can add more and more layers on top that delegate execution to the lower layers, while the upper layer makes some additional decisions about where to send your code. For example, if we consider an interpreter and a JIT-like EVM, you might want a top layer that decides whether the code should go to the JIT one or to the interpreter one. The top layer can, for example, count the number of executions of particular code: if you have some hot code that is executed in many transactions, you might want to translate it to native machine code using the JIT EVM and speed up the execution. But that may not make sense if the code is not run frequently enough to pay off the cost of doing the translation up front. The second example of such composition is having different languages for smart contracts.
If we consider the proposed eWASM and EVM 1.0, we can just add a very simple layer on top that recognizes whether the smart contract uses the WebAssembly-like language or EVM 1.0 bytecode.

So how does EVMC actually look? It is a single C header file, which includes declarations of functions and structs, and all the documentation is in this file in the form of comments. So this is the only source you should care about, and I paid attention to having good enough documentation that you can understand how it works just by reading this single file. At the moment, it is part of my EVMJIT project; as long as I am still experimenting with it and the API is not finished yet, it's included in that project.

The whole design has two sides: one is related to the client, and one is related to the EVM itself. On the client side, what has to be done is that you need to implement a context class. The context class provides virtual methods and can answer questions coming from the EVM. These questions are things like: get me the balance of a given account, or get me the storage at a given storage slot of a given contract. All this information cannot be provided to the EVM up front, because we don't want to send the whole state to the EVM to execute smart contracts, so the EVM needs a way to extract this information on demand. On the other side, the EVM side, there is an EVM class, and the EVM class is quite simple: there is a way to construct an EVM instance, there is a way to destroy it, and the core function is the execute function. The information about what is to be executed is encoded in a message object, the context is provided for the execution, and the EVM uses this context interface to ask for more data if needed.
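The two-sided design just described, a host context answering state queries and a VM with an execute entry point, can be modeled as a toy in Python. The real EVMC interface is a C header with structs and function pointers; the class and method names below are inspired by the talk's description, not the actual EVMC symbols.

```python
from abc import ABC, abstractmethod

class HostContext(ABC):
    """Client side: answers the EVM's on-demand state queries.
    (The real interface has more methods; two are shown here.)"""

    @abstractmethod
    def get_balance(self, address): ...

    @abstractmethod
    def get_storage(self, address, slot): ...

class VM(ABC):
    """EVM side: a single execute entry point. The real API also has a
    create/destroy lifecycle, elided in this sketch."""

    @abstractmethod
    def execute(self, context, message): ...

class ToyVM(VM):
    """Fake EVM that just pulls state through the context interface,
    illustrating how execution queries the host on demand."""

    def execute(self, context, message):
        # A real EVM would interpret message's code; here we only show
        # the context callbacks being exercised during execution.
        balance = context.get_balance(message["sender"])
        slot0 = context.get_storage(message["to"], 0)
        return {"status": "success", "balance_seen": balance, "slot0": slot0}

class DictContext(HostContext):
    """Trivial host backed by in-memory dicts instead of real state."""

    def __init__(self, balances, storage):
        self.balances, self.storage = balances, storage

    def get_balance(self, address):
        return self.balances.get(address, 0)

    def get_storage(self, address, slot):
        return self.storage.get((address, slot), 0)
```

The point of the split is that `ToyVM` and `DictContext` know nothing about each other's internals, so either side can be swapped out, which is exactly the pluggability the C API is after.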
So, in case you would like to implement a new client, but you don't want to implement an EVM at the same time and would rather use one of the EVMC-compatible EVMs available, what do you have to do? Your job is to implement the context class, and there are eight virtual methods that have to be implemented; you also have to encode the information about what to execute in the message structure. And if you would like to, for example, implement an EVM, but you don't care about the rest of the Ethereum client, like the network stack or the storage database, all you need to do is implement the create and destroy pair of functions and the execute one. Not to be confused: this design operates on object-oriented concepts, but along the way they have to be translated down to C, so it gets more obscure and complex.

So what do we have so far? As I mentioned at the beginning, the C++ client actually has two EVMs. One is a classic interpreter, and it does not yet use EVMC, but we plan to change that in the near future. EVMJIT, the JIT-like EVM, uses the EVMC interface, and it's compatible with the recent Byzantium hard fork. There is also the Hera project, a prototype eWASM backend for Ethereum clients, and there is a prototype of an EVM implemented purely in the C language. I also prepared, some time ago, a prototype of Geth with EVMJIT plugged in, and a Python client with EVMJIT plugged in. These still need some work and require updates to the current state of the code, but they're quite fun to play with.

What we want to do next: the missing piece of the EVMC API is VM tracing, and this is a showstopper at the moment, because we cannot completely replace the existing VMs while this feature, which is important in other places, is missing.
Once this is in place, we would like to move the C++ interpreter to the EVMC interface as well, and I also plan to release an experimental Geth with EVMJIT as the virtual machine. Recently, someone also considered using this interface together with a testing project. Okay, that was all from me. Thank you for your attention, and in case of questions I'm available for the whole DevCon. Thank you.

Up next we have Dr. Greg Colvin with The EVM: Cleaner, Meaner, and Closer to the Metal.

Don't clap yet, we're getting the equipment set up. For Greg Colvin's talk we discovered we can't play videos off the USB stick from the back, so I have to control my laptop from up here; I hope it all works.

Good morning. That's sort of pretty, but it has nothing to do with what I'm going to talk about. This is highly advanced technology, highly skillful technologists. I think I was supposed to be here last night so we could work this out, but our dinner was like an hour late, so they kept bringing us free beers to placate us. I usually drink one beer a week, so three beers in one night is like a bender for me. No, that won't help, it's live. Okay, they put it in a different mode; there's a little clicker.

Okay, I'm Greg Colvin. I spend my time for Ethereum working on the virtual machine: improving its performance, and working on designs for possible successors to what we have, in order to solve some of the performance problems we're running into. If this works, there will be another slide. How about that.

The first problem you run into in any sort of optimization work, according to my old friend Jerry Schwartz, is: one, all benchmarks are bogus. You'll never have a set of benchmarks that actually represents the real world, but if you don't have benchmarks, you will just go in circles and never make progress. So the benchmarks I'm working with are a few algorithm kernels that are relevant to what we're doing. RC5 is an old and useless cipher, but it's a
good example of a cipher that uses a lot of 32- and 64-bit arithmetic and a lot of complex logic. BLAKE2b is still an important hash function; it's also a lot of 64-bit logic. Blum Blum Shub is a cryptographic random number generator, I think one of the slowest in the world; it operates on big registers, so we can use the 256-bit registers of the VM effectively. And ECMUL also can use big registers effectively. Then I have a few tests of individual EVM operations: small EVM assembly programs that try to isolate individual operations.

This is a graph of what I got with the whole thing, and it's a bit complex. What you can see down along the bottom is the different benchmarks, and along the right, the first three are some major clients: Geth is our Go client, Parity is a Rust client, cpp-ethereum is the C++ client. The rest aren't clients. evm2wasm is part of the WASM research: it's a program that takes EVM code and translates it into WASM code, and I fed that to Google's V8 engine, which generates assembly code. EVMJIT, which Paweł was just talking about, generates assembly code directly from EVM code. And for native C++, I rewrote the benchmark programs in C++ instead of Solidity or EVM assembly and compiled those to assembly.

Pretty clearly, the C++ wins the race. And pretty clearly, exponentiation is pretty hard for everybody, which isn't surprising, but it's a little concerning: there might actually be possible exploits, by writing contracts that do exponentiation and get charged only a little gas but take a whole lot of time. RC5 looks pretty hard because it relies on dynamic shifting, and the EVM does not yet have a shift operator, so shifting gets emulated with exponentiation. In between, things are relatively regular, and you could actually predict the speed by the language the client is written in. Where did that come from? Next slide. I can't see the screen with these glasses on. So to simplify it, this is looking at one angle: it's a harmonic
mean of the performance of each client, and it shows pretty much the same thing. Clearly the interpreters are not as fast as going straight to machine code by any route, and clearly some interpreters are better than others, but they've all been good enough so far for our purposes.

And of course, I love car races as examples. Last year somebody in the audience shouted out that instead of using classic cars burning tons of gasoline, I should be using a Tesla. And Teslas are nice. Is George Hallam here? Are you here, George? George would agree that rather than a Tesla, this would be much cooler. This is a '68 Mustang hatchback; under the hood are actually some powerful electric motors, and the trunk is full of lithium-ion batteries. If this works, where's the button, we will see how this does against a Tesla. Is there any sound? No such luck. But there goes the Mustang; the Tesla doesn't have a chance. It's just missing the sound, but after a quarter mile it had gotten to 140 miles an hour. That's a lot.

So what keeps those interpreters from reaching native speed? The first answer is: they're interpreters, so they've got that overhead. You can work hard and reduce the overhead, but generally you can't do better than about three or four to one compared to native code. For our particular interpreter, the 256-bit registers slow us down, because real hardware has 32- and 64-bit registers, and the unconstrained control flow hurts us a lot, and I'll get to that.

The 256 bits: if you remember grade-school math, adding or multiplying two one-digit numbers is pretty easy; you can do it in your head. If you have four digits, suddenly it gets a lot harder, and it's quadratic, so it gets worse and worse. 256 squared is a lot.

Control flow: the JUMP operator in the EVM... I mean, gotos are considered harmful, but at least when you say "go to label," it will go to exactly one label. In the EVM you say: go to whatever's on the stack. So there's often no way of knowing statically where
it's going to go. You can have a nice little program like this: F calls G and H and returns, and it calls I and returns, et cetera. Nice clean structure; no trouble to understand, no trouble for static analysis, anything. What does it actually look like to the EVM? That's what it looks like to the EVM. So if you're trying to do formal analysis, if you're trying to write a compiler, if you're trying to do anything with it, again, the number of paths goes up quadratically and you're in trouble, because pretty much, if you can't do it in linear time, or at least n log n time, on the blockchain, at deployment time or at runtime, you can't do it.

So how do we do better? Well, EVMJIT is already doing better. I won't back up, but if you look at the slide, EVMJIT is actually pretty close to native speed: it does very well on the wider arithmetic, and not so well on narrow arithmetic and complex logic, but it's a very good JIT. I've told Paweł he gets to be the electric fox; he's tired of these little three- and four-letter names that don't mean anything. This is a racing team out of Latvia; that's a completely electric dragster, and here it is winning the European world record. There it goes. Drag races are fast. It's not impressive, is it? It's over. 275 miles an hour in just a few seconds. I love these things.

So we've got two research programs that have been going on about how to improve things. They've been nicknamed EVM 1.5 and EVM 2.0, which doesn't really mean anything; those are just nicknames. EVM 1.5 is a suggestion to extend the current EVM by adding new opcodes and requirements. We forbid those unconstrained jumps: we will not allow you to do that, and we then have to provide a way to do the things you'd otherwise do with those jumps, so there are opcodes for subroutines. Then we've got to get away from having nothing but 256-bit registers,
So we've got opcodes for native scalars and opcodes for SIMD, because real hardware has all this silicon devoted to SIMD registers, and if you go to Google and type "SIMD crypto" you get a lot of results, so it would be useful to make that hardware available. And at deployment time there's a validation phase that goes through the code and makes sure it actually does follow the rules. What a concept. And then 2.0, well, gee, it provides opcodes for structured control flow. It's stricter than 1.5; it actually looks like a high-level language, with if-else and such. And it provides opcodes for native scalars; the SIMD is coming later, but there's a SIMD proposal. It also has a validation phase, where it validates control flow and stack discipline and type safety. So they're very similar at that level: two technically very similar proposals. They both provide for very fast compilation to native code. It could be done as a JIT, but we've come to realize Ethereum cannot do JITs. They are actually exploitable: if you find or write a contract which takes a long time to compile with the JIT but requires very little gas to run, and you start hammering those contracts, you can mount a really nice DoS attack. So if you're going to do any compiling, you've got to do it up front, at deployment time. And Martin's notion of transpilers is, I think, very important here. 1.5 could be transpiled to 2.0; 2.0 can be transpiled to 1.5; 1.5 or 2.0 can be transpiled to 1.0; either of them can be transpiled to the JVM. You can make up new ones; pretty much you can compile any VM you want to some other VM. And he also has a notion of gas injection, where you put little pieces of code into the right places to count the gas. What this means to me is that it doesn't matter what execution engine a client chooses, because on the blockchain you can put a contract that translates into that execution engine. These can be completely independent choices, and we could actually choose to support a number of VMs if we wanted to, and independent parties could decide to support a different VM, put a transpiler on the blockchain, and away they go. Gee, I'm almost out of slides. And there's no sound; that's really too bad.

So what is the big deal about native performance, about being lean and mean and close to the metal? Well, we saw the electric dragsters; here's a real Top Fuel dragster. They run on a mixture of diesel fuel and nitromethane. Just about a month and a half ago, not too far from Ming's place in Michigan, this guy got the world record: over 338 miles per hour in a quarter mile. These guys pull about five or six g's, which is about the same as an astronaut taking off in the Space Shuttle. But there's a problem with going fully native; in the C world we call it undefined behavior. Okay, play... there it goes. The driver actually walked away unscratched. I love this guy. We'll give you one more, and it turns out I'm done. I knew it: audiences love explosions.

This is the Evolving the EVM panel. Hello, panelists, find a seat. Hi, everybody. I'm Casey, and I asked to moderate this panel because I've been working on the EVM and eWASM lately, and I know there are no other experts besides these panelists. You saw Martin's talk yesterday; it was on Primea, but it mentioned the precursor to Primea, which is eWASM, and that started in December 2015. The first commit was in April, but by the summer of 2016 there was a pretty working prototype of the EVM-to-eWASM transpiler, so that was the EVM 2.0. After the 2.0 proposal came the 1.5 proposal from Greg and Pavel, and you just saw Pavel's talk earlier about EVMC, which is sort of the API to plug in and swap between eWASM and EVM 1.5. Then earlier this year came Julia; yesterday we also saw Alex's talk about Julia, which he and Christian designed to upgrade Solidity and make Solidity able to target the next versions of EVM 2.0 and 1.5. And since then Pavel has also made a lot of progress on the JIT VM. So it's sort of been backwards, where first EVM 2.0 was proposed, then more progress was made on EVM 1.5. I hope this makes some sense. So I think I'll ask our panelists: what is eWASM, and what is EVM 1.5?

When I arrived, eWASM had made some substantial progress, and a few things struck me. One is that people were sort of excited because, gee whiz, you could run C++ contracts on eWASM, and as a C++ expert I said: why on earth would you want to write contracts in C++? Haven't people lost enough money on the blockchain already? The other thing that struck me was: why on earth would you want to hand over to an outside committee the definition of anything to do with the core consensus protocol? And I looked at the current EVM and said, there's an awful lot of white space for more opcodes, and this thing's not broken; it just needs a little work. So I set out to ask what we need to do to bring it up to modern standards and make good use of modern hardware, and I got to work on that, with a lot of help from other people on the C++ team. The documentation would not have happened without Christian, and the team put a fair amount of work into that. We've got a couple of EIPs; Martin has put a lot of work into the eWASM proposal. So they're sitting there, and we'll need to make some choices. We might throw both of them away and say, okay, we've learned a lot, what should we actually do? We might choose one of them. We might stick with what we have and advance our compiler technology to compile what we have into code that runs better, and Pavel can speak better to how possible that actually is.

Yeah, I would be very happy to have any of these, but adding more constraints to the control flow would help a lot, and I believe we can do much better in JIT-like EVMs if we have that. But just to step back: last year, when you and EVM 1.5 came around, it was still a point in time when WebAssembly was not finalized at all, and there was no knowledge, at least we didn't have any knowledge, of when it would be finalized. Since then, this year, the first version came out, so that problem went away. Back a year ago we had no idea when it would be finalized or what it would look like when it was, and that was a big concern, so I think EVM 1.5 made a lot of sense as a bridge, or a backup plan: assuming EVM 2.0 could be finished in time but couldn't be released because WebAssembly wasn't finished, then if you could finish EVM 1.5 quickly enough, it could be a good bridge between the two. But that problem went away earlier this year.

So how do you feel about the brokenness of the EVM since then? Did anything change?

I actually think the EVM... I think WASM is going to die.

Why do you think that?

Because there have been so many attempts to get a binary format running in browsers, and they've all failed.

You're saying WebAssembly is going to die?

Yes, WASM is going to die. Whatever I said; I don't know.

That's a bold prediction. But I think we can step back and ask: what are the problems with the current EVM 1.0? One problem is that the gas limit in each block is not enough to do everything people would like to deploy contracts to do. One example is the BLAKE2b precompile, a proposal for these native precompiles; precompiles solve the problem of contracts that people would like to deploy but that might take, say, 100 million gas when the current block gas limit is six million. So how does EVM 1.5, or WebAssembly, solve this problem?

Well, WebAssembly would allow you to compile code directly to WebAssembly; you most likely don't need native contracts, because you would just write them in WebAssembly.

It's partly gas, but it's partly just that it's too slow. I remember Pavel working on one of the precompiles, pushing all the compiler flags as far as he could and trawling through multi-precision libraries to find one that was mostly hand-coded assembly. We just can't get enough speed out of the VM for these precompiles.

And then how are 1.5 and 2.0 faster than 1.0?

Like my little grade-school example: you do a multiply, and it's one instruction on a 64-bit pair of registers, as opposed to long multiplication on a whole collection of registers.

I think the current issue is the quite big difference in speed compared to native code. We cannot effectively encode the algorithms we want, for example some hash functions, and if you want to implement them in pure smart contracts, that makes them quite expensive; you have to pay a lot for that, because the current EVM is simply not expressive enough to have speed comparable to native code. I believe that's what WebAssembly gives: if you implement the same hash function in C and in WebAssembly, you at least get comparable performance, not something 10 to 100 times slower.

Actually, if you take a step back to the first version of the EVM: the fact that we had an identity precompile just to achieve memory copying maybe shows that we didn't have everything thought through properly. We started introducing these precompiles, especially the one for copying memory; that means loading and storing memory was too expensive, yet we still wanted to do it, so we introduced the precompile. And the bigger issue, which Greg has mentioned several times already, is the bit width: everything is 256-bit, and there was a proposal even before 1.5 to have 64-bit arithmetic in the EVM, which you have folded into the same proposal, so it's still there. If we just look at these two problems, that the arithmetic is very wide and that we started introducing all these precompiles to get around it, that probably shows we didn't figure out the prices properly. With WebAssembly, as Pavel said, the instructions much more closely resemble those of traditional computers, so it's probably much easier to figure out what the real cost of those instructions is, and by figuring out the real cost we can probably avoid having precompiles at all. That would be nice.

It's worse than 10 to 100. My graph was scaled by square root, and the slowest exponential operation compared to the fastest native code was 10,000 to 1.

But that was EVM to WASM, right?

Actually, it was Go versus C++ compiled straight to assembly.

Well, another motivation for the eWASM proposal was to be able to write contracts in other languages that target WebAssembly as a compilation target, versus only targeting the EVM. So we have the EVM-to-eWASM transpiler as a prototype; we don't yet have the reverse, the eWASM-to-EVM transpiler, which would make the EVM 1.5 proposal almost equivalent to the eWASM proposal, because then you could still write contracts in languages that target WebAssembly and transpile them to EVM 1.5. I've asked before what it would take to write the eWASM-to-EVM-1.5 transpiler. Greg said it would be easy; Martin said good luck.

Yeah, good luck.

You told me it would be easy.

It's not impossible. I would never say it's impossible.

You said it would be easy. You told me it would be easy.

I don't know; it's going to be some work. It was on a fire escape in Berlin. I mean, was I sober? I don't know.

And also, one of the motivations for skipping EVM 1.5 (the original proposal was to go straight to 2.0; the EVM 1.5 proposal came later) was that it must surely be hard to write a JIT-compiled EVM. And then Pavel wrote a prototype of an EVM JIT VM. Was that easy, Pavel?

So actually that prototype, and it's still a prototype, was done even before the launch of Ethereum. It was one of the performance-benchmark projects we wanted, to assess what we could do in the future in terms of smart-contract performance. But yes, it still struggles with some cases, and as I said, we can do much better here; the required step is at least these control-flow restrictions and subroutine support directly in the EVM bytecode, which would allow even more optimizations. On the other hand, JIT compilers are hard. The network consensus depends on them, and the risk is that one might never be finished in terms of removing bugs and finding edge cases, because it's a much more complex construct compared to an interpreter.

Another problem with just-in-time compilers is that I'm not sure there is any just-in-time compiler that provides a fixed upper bound on resource consumption, and that is a very important guarantee that we need in order to do the gas calculations properly. Usually just-in-time compilers generate code that is faster and takes fewer resources, but we don't have a guarantee. Or do we?

Well, as I said, we can't do a JIT; it's exploitable. It has to be a compiler that runs at deployment time, not at runtime, which means it can be a full compiler, depending on how long we want to take to load a block. My understanding is that storage is the main constraint there anyway.

But I remember in my testing I came up with one performance bottleneck, and you said, it's not a priority, I don't think I can get to it, and the next day you had it fixed.

Yeah, that's a bit different an issue. In Ethereum we actually care about the worst cases, because that's what the cost must be set for. This affects the more complex optimizations, but also the big-integer libraries, which try to squeeze the easy cases first. That is not what we aim at: we don't care if we can divide quickly for small numbers, because what we care about is having the worst case covered. It would be much more difficult to control that within a JIT, because, at least in EVMJIT, which depends on LLVM, you have a big backend library doing that for you, and it's really hard to tell what it's actually doing. But I guess there are different approaches; for example, eWASM has some JIT prototypes that do not depend on LLVM.

Of course, one of my bigger concerns is not technical; either of these programs technically does the job. I'm much more concerned about who controls the specification, and I really believe that the Ethereum community should completely control that specification. The web-browser space is not the Ethereum space, and I would not want to get wedged, with the WASM group moving in a direction they need to move, and us shaking our heads and going, no, we don't want to go in that direction.
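The deploy-time metering idea raised above, Martin's notion of gas injection, can be sketched in a few lines. This is a toy model under assumed conventions: instructions are plain strings, the opcode costs are illustrative rather than the real fee schedule, and `CHARGE` is a hypothetical pseudo-op standing in for the injected metering code.

```python
# Sketch of deploy-time gas injection: split code into basic blocks and
# prepend one CHARGE pseudo-op per block with the block's static cost,
# so the translated code counts gas itself instead of the interpreter.
COST = {"PUSH": 3, "ADD": 3, "MUL": 5, "JUMP": 8, "JUMPDEST": 1, "STOP": 0}

def inject_gas(instrs):
    blocks, cur = [], []
    for ins in instrs:
        if ins == "JUMPDEST" and cur:   # a JUMPDEST starts a new block
            blocks.append(cur)
            cur = []
        cur.append(ins)
        if ins in ("JUMP", "STOP"):     # terminators end the current block
            blocks.append(cur)
            cur = []
    if cur:
        blocks.append(cur)
    out = []
    for blk in blocks:
        out.append(("CHARGE", sum(COST[i] for i in blk)))
        out.extend(blk)
    return out

prog = ["PUSH", "PUSH", "ADD", "JUMP", "JUMPDEST", "MUL", "STOP"]
print(inject_gas(prog))
# -> [('CHARGE', 17), 'PUSH', 'PUSH', 'ADD', 'JUMP',
#     ('CHARGE', 6), 'JUMPDEST', 'MUL', 'STOP']
```

Because the charge is computed once, at deployment, the runtime never has to trust a JIT's timing behavior: the worst-case cost of each block is fixed before any code executes.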
yeah I disagree with that um because I think I much rather uh it's much more pluralistic to go with uh a larger community um a larger body of people standardizing and coming to consensus on a virtual machine instead of just using a virtual machine that can only be or was created to only be used in one pacific use case and if you look at the browser use case is very similar to the blockchain use case um we need secure portable uh and size efficient uh bytecode right is exact same concern we have in the blockchain space um and and furthermore like it's an open it's a it's open to participation you can go to the web assembly uh community meetings you can um voice your opinions you can uh submit proposals on github uh it's very open and and you know it's easy it's easy to get involved um that and within the blockchain space I know of at least three other blockchain projects outside of ethereum that are already prototyping and actively have uh wasm running uh in in their systems so we're seeing a lot of momentum I think pick up around it uh and I think that's sort of just gonna be the way it is because it's like once there is we start to have consensus around the VM everyone's just it's gonna be the obvious choice everyone's gonna use it and it's gonna be a recursive feedback loop right it's gonna be a feedback loop where uh since there's more people using it we're gonna get better tooling for it faster we're gonna get better implementations um etc but it's not like um hypothetically if you introduce cvm 2.0 it's not like that a wasm would introduce a new update and that magically would work on ethereum as you mentioned in your talk both 1.5 and 2.0 have a verification code which has to run prior deploying the contract and that verification code verifies the uh you know according to the wasm specification which is the current list of opcodes in web assembly and so if they introduce new opcodes they wouldn't work without us making this decision that we want to support 
them and so I don't really see that as a risk that we would be exposed to random new instructions being uh supported by ethereum without our uh review first now I'm more concerned so we come along and there's an issue with wasm I mean bugs show up in specs or if not a bug an ambiguity and we resolve it one way and it works for us and we can't wait you know we if something's a little weird in a browser it's not a big deal and the committee will get around to it but if there's anything that breaks consensus we have to fix it immediately and then eventually it gets around to the committee and they go no we don't like the way you fixed it we're going to do a different fix uh and I just feel like over time we will wind up forking away from that standard um and we will lose these benefits of shared design and tooling anyway in fact we're a fork to begin with we're a subset so I I mean I spent a couple weeks trying to get c++ into wasm I succeeded but it wasn't easy and then I think that's misleading it's a subset but the only subset being with the we don't let floating point operators yeah that's the only subset but you have to convince the compiler not to do that yeah that should be pretty easy I mean if you don't use floating point inside the compiler can decide that it's going to use the floating point unit just because it feels like it unfortunately we're out of time so we'll have to continue good floating points what about Julia sorry what about Julia Julia I think uh christian you're going to cover julia a little bit in the uh flexibility uh solidity talk uh not too much I mean Alex talked about it in in quite big details so yeah well it's a very active debate uh even one dot five verse e wasm hopefully we can do the magic of uh transpiration come to a united front and move forward with the evolution of the with evolving the EVM thank you what do we do with her okay hello um after the next presentation we're going to nice music um after the next presentation we're 
going to break for lunch we'll reconvene back in here at one o'clock um next up we have martin swende and he'll be talking about ethereum security my name is martin holst swende and i am the security lead for the ethereum foundation today i'm going to talk a bit about uh evm forensics and uh managing attacks against the network and how we've been working on that so i've been uh security lead for one year i started just before uh devcon 2 in Shanghai last year uh which started off with the Shanghai attacks roughly one day after i started my new role it kept on for a month we've also seen during the last year we have done three hard forks we've had one unintentional consensus split there was a dose attack against specifically the death client um there has been thousands of ether stolen in more or less more and less sophisticated attacks both on chain and off chain um we have the the test net totally brought to its knees and then resurrected again and of course there's the standard it incidents with leak databases and someone taking over someone's phone number and their account and attacking our github and stuff like that so we should all be very clear about where we're at this is crypto land and we're all in crypto land and it's like uh australia where anything with a heartbeat will try to kill you and if you make a mistake um you're probably dead so meanwhile for attackers they have never had it better they no longer need to hack point-of-sales computers and trade carding details over shady forums they can just hack a computer and or somehow get some cryptocurrency and immediately turned into value and and it's so it's like wild west in australia right now these are the shanghai attacks i'm not going to talk that much about them the first shanghai attack is a little blip down there and then it just kept on going for a month and it was a lot of different attacks mostly targeted towards gap but when the dust is settled after incidents happen uh then that's when you 
can actually do something about them and think about how can we be better prepared next time something similar happens and how can we prevent it so how can we improve the readiness and the resiliency so for readiness it's about detecting attacks and performing analysis quickly so we started improving that with some monitoring adding up some monitoring nodes that were running the cloud and adding some graphs turns out there was some inherent issues which hadn't been noticed before with transaction propagation inefficiencies which over the course of a few months in the beginning of uh from January 3 to March we managed to bring down the overall network traffic with about an order of magnitude just by removing um invalid transaction propagation from the clients uh on these monitoring nodes we also added some interface so that we can extract very detailed information about what are the canonical blocks in the chain and if we see a consensus split we can get very detailed information about the receipts and differences in these and quickly point out which transaction caused this consensus issue so here you see uh there's a geth master and geth develop and the parity node and right in this image they're differing on two fields marked in red there and that's because um parity rpc interface exposes a few different fields than geth now as we're going to analysis i'm going to talk a few words about the evm because there's uh there might be a conception that any minor difference in the implementation of the evm will automatically result in a consensus failure and that's not quite true because there are some things some parts of the evm which are ephemeral such as the memory on the stack uh and which do not necessarily trigger consensus issues but they're very interesting because they can be used to trigger consensus errors and in order to really uh measure evm side by side and detect implementation differences in evm's we need an kind of up by up view of the internal state so 
we push kind of hard to get the common output format for evm's so that after each instruction made in evm it would output a json blob within internal state as you can see on the left and also a capability to use arbitrary pre-state and genesis configuration with the raw evm's so one problem that can arise is if we hit by an attack which blows the node out of the water how can we analyze that uh because our node just died right how can i analyze the transaction if the transaction crashes my node well if we have a standalone evm what we can do is we can just fetch the pre-state about the sender and the receiver is just those two accounts and we can execute that locally in our evm and then analyze the trace to find did we miss anything was there anything else we should have had here and external references and fetch those and start over and if the node crashes then we successfully reproduce the ap the transaction and for all for this we only need a web 3 standard api without any debug uh specialities um sorry so i'm going to demonstrate uh quickly how we can do analysis of the jump test attack which we were hit by in on june one so i'm running this little reproducer here um i pipe in the hash used in the attack the transaction telling it i'm going to use my local evm not through docker and it basically sets the right fork rules for that particular block and executes it and it has some intermediary traces here uh we can take a look at those so we'll let's go directly instead for the final trace i'm showing this in in what i call the op viewer or retro mix if you like it's a remix like uh debug viewer for the json output format that i showed earlier um and this is a good start for analyzing what's happening in the transaction so you can see this particular transaction it does an xcode copy um and the xcode copy fills the memory with 5b and it does it repeatedly and as you can see the uh memory is growing and it keeps doing this for about 600 steps i'm going to go a bit 
faster here until it has filled up the memory with half a megabytes all 5b which happens to be jump test then it puts some more code in there and this is looks like actual evm code 60356 5b and all of you i'm sure recognize that that is the push one jump jump test and stop so it it just executed a create with that code and as you can see the size of the create is the full half megabytes okay so now we know that the attacker is doing creates and he keeps doing it repeatedly one part of the memory changes between each invocation uh it's a little counter down there and i'll skip forward a bit so it's all just create and it ends on create number 105 it goes out of gas so by this time uh you can be kind of uh have an idea so it's it's doing creates lots of times with the large memory segment totally filled with jump tests it changes one little bytes each time so obviously bypassing any caching mechanisms so by reproducing it and and viewing at the trace in this fashion we can do a very quick analysis of what happened and we can uh benchmark it right now it's running at 300 milliseconds and if i compare that to so this is the evm uh get evm with the patch applied after this attack i can try it against the evm without the patch for the jump test analysis and as you can see it took nine seconds so this tooling makes it possible for us to do quick analysis and then to check does this patch work and i can i can share it with the co-workers and they can try out various patches and see which one is the best i can also run this in a web-like format and do the all the same things and investigate other on-chain events for example the uh sorry which one did it take that was the same the pirate evolved attack and there we have the pirate evolved attack reproduced and you can run it locally or you can check and annotate the trace of what happened there in the pirate evolved attack and for example yeah so here's the fated infamous delegate call and the uh null attack if you want to 
if you want to analyze that more in depth so the evm lab which i showed you a part of uh makes it possible to do some evm assembly pythonically and investigate these kinds of issues and yeah dissect attacks on a really low level we had two hard folks also and in preparation of those we ramped up the testing introducing parameterized tests or generalized tests which dmitri talked about yesterday in the breakout room and also put it all into hive uh hive is pitted silagis super cool framework for running nodes in in a black box fashion and just synthesized the uh the environment the genesis and the blocks and everything and then you can compare the the expected post state after a sequence of blocks uh and this makes it possible to run it runs about 24 000 test cases against pytherian parity geth and cpp and it runs at 24 7 uh 365 uh it removes the dependency of the developers to perform tests as part of the test process so now testing can be a totally separate process which doesn't really rely on the developers per se the fallout however after the first hard sorry second hard fork is that we had a consensus issue which was um yeah definitely not what we wanted manually crafted tests are great but there's no way to scale it uh due to the inherent complexity of the evm we can't just have enough people know that much about it to to be able to scale it up so we wanted more coverage for Byzantium and started looking at the fussing one way of doing that would be to generate test cases randomly execute them on each evm and use this shared output format to compare the internal state after each operation and just repeat it and this can be done fairly quickly can do a couple of million tests per day if you draw binaries and you can use uh these four clients the second track is based on libfuzzer where we got in touch with the Guido Rankin who has done a lot of fussing and is a real expert uh on libfuzzer so libfuzzer is the core of american fuzzalop it's uh fuzzer developer 
michael salewski and it's a bit more sophisticated because it uses code paths and instrumented binaries to detect code paths for any given input and then mutating those inputs to maximize the code coverage and since everything is instrumented and compiled into one big binary uh it's an order of magnitude faster to perform these uh tests so it can do about 100 million tests per day and there was a spectacular and kind of unexpected success in this uh we've had seven or eight consensus issues found most of them before the hard fork uh one of them slightly after the hard fork it has been fixed and patched and released and the clients today i would say are more thoroughly tested than they have ever been in the history of ethereum and we are still running fuzzers uh 2047 and it's been millions of of tests done on the test earth and billions of tests based on libfuzzer naturally they can still be consensus issues or denial of service issues and if that's a really really uh concern of you of yours then you should run multiple clients and try to detect uh mismatches and uh you can use the debug method in gith to find out if gith has tagged one of parity's canonical blocks as bad and key takeaway here is that all everyone here are targets for attacker it's involved because it's important so be paranoid and be proactive and work on improving the security and your resilience and how you can handle attacks that's about it for me thank you
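The differential-fuzzing loop Martin describes, running the same input on several EVMs and diffing their per-opcode JSON traces, can be sketched as follows. The field names here (`pc`, `op`, `gas`, `stack`) follow the spirit of the common trace format he mentions, not its exact schema, and the two sample traces are fabricated for illustration.

```python
# Sketch of cross-client trace diffing: each client emits one JSON object
# per executed instruction; the first mismatching field pinpoints where
# the implementations diverge.
import json

def first_divergence(trace_a, trace_b, fields=("pc", "op", "gas", "stack")):
    for step, (a, b) in enumerate(zip(trace_a, trace_b)):
        sa, sb = json.loads(a), json.loads(b)
        for f in fields:
            if sa.get(f) != sb.get(f):
                return step, f, sa.get(f), sb.get(f)
    return None  # traces agree on their common prefix

# Hypothetical per-op trace lines from two clients for the same input.
client_a = ['{"pc":0,"op":"PUSH1","gas":100,"stack":[]}',
            '{"pc":2,"op":"SLOAD","gas":97,"stack":["0x1"]}']
client_b = ['{"pc":0,"op":"PUSH1","gas":100,"stack":[]}',
            '{"pc":2,"op":"SLOAD","gas":95,"stack":["0x1"]}']
print(first_divergence(client_a, client_b))   # -> (1, 'gas', 97, 95)
```

In a real harness the inputs would be randomly generated contracts, the traces would stream from the clients' standalone EVM binaries, and any non-`None` result would be saved as a consensus-issue candidate for manual analysis.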