Hello everyone. Today I'm announcing and releasing a new project we have at IC3. IC3 is a cross-university initiative for experimental blockchain research and systems. This is joint work with Florian Tramèr from Stanford and Lorenz Breidenbach from ETH Zurich, Cornell Tech, and IC3, as well as myself and my advisor Ari Juels, also from Cornell Tech and IC3. First, a brief reminder: last year I gave a talk at DevCon on smart contract security, and we talked about how we view this problem at IC3 as having three different prongs. Formal verification and specification: how do we know what we are building, and how can we check it? Escape hatches: how do we recover when something goes badly wrong with a contract? And bug bounties: how can we address the perverse incentives where people in the ecosystem are incentivized to attack and steal money rather than honestly report vulnerabilities? For general security background, I highly recommend checking that talk out. Now, there's some new promise in the space in the last few years. You can see this Google Trends graph for a new concept called cryptoeconomics. So what is cryptoeconomics? It's the idea of combining cryptography, to reason about the past, with economics, to provide security guarantees that hold into the future, in order to secure the protocols and contracts we're all building here. There's even been a cryptoeconomic security conference at Berkeley this year for the first time, and by the way, Blockchain at Berkeley, we totally agree that the blockchain space needs more academic presence. So is this all hype, or is it the real deal? Let's take a look. Where is cryptoeconomics being explored right now? We have the original incarnation of it in the Bitcoin system and the Nakamoto white paper.
And we have several new protocols coming out, among which, relevant to the Ethereum community, is Casper and the family of protocols around it, which also use this style of analysis. So where don't we have cryptoeconomics? Unfortunately, there is not much cryptoeconomics going on at the smart contract layer. We really don't have rigorous economic analyses of most of these contracts, or cryptographic mechanisms to guarantee the properties we're looking for. And that is why today we're announcing the Hydra project. You can check out thehydra.io to follow along, read the paper, or look at the code; I highly recommend you do so if you have a laptop. The Hydra project is a smart contract development framework that gives you decentralized security bug bounties with no humans in the loop and no trust required for payment, rigorous cryptoeconomic security guarantees that we can model and analyze, and a strategy for mitigating both programmer and compiler error. A warning here: the Hydra project is still in the research phase. We're still undergoing testing, security audits, and the other high-assurance software development processes. We have released all the code for you to play with, experiment with, and give us feedback on, but trusting large amounts of funds to this little baby Hydra we've spun up would probably be a mistake. So why bounties? Traditionally, you have a limited time for software development: two to four people take a look at the code, do a review, and then you ship it to the customer. The new crowd-security approach says that, for an unlimited time, you run a bounty and let the community look at it, the idea being that many eyes are better than one.
Since we're talking about cryptoeconomics, let's see this as a game and look at what a rational attacker would do under these bounty systems. Here is a system with no bounty. An attacker finds an exploit, and then they have two choices: they can attack and steal the $A in the contract, or they can disclose and get nothing. Obviously they're going to attack the contract every time. This is why we need bounties. How about traditional bounties, like the kind I showed you on the previous slide? Well, there's a problem, which is that an attacker doesn't actually know, when they disclose, how much money they're going to get paid. So they have no way to make a rational decision on whether to disclose or attack the contract. What would be a better game? What would we want out of this functionality? One thing we definitely want is for the attacker to know the value of the bounty, so they can at least make a somewhat rational decision and attack only if the contract value is greater than the bounty. Unfortunately, this means you need very large bounties, essentially paying more than what the contract is holding, which doesn't really make sense. So what would we really like to see economically? We'd like a game where an attacker, once they find an exploit, has the choice either to disclose it for some bounty amount, or to keep attacking, at some additional cost C, to eventually try to steal all of the money in the contract. In this game, an attacker is going to attack only if the contract value minus the cost of fully exploiting the contract is greater than the bounty. So what do we want to do? If we can create such a game, we want to make this cost C as high as possible, to get the best possible economic security for our contract.
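The decision rule just described can be sketched in a few lines of Python. This is a toy model of the game, not the paper's formal analysis, and all numbers below are illustrative:

```python
# Toy model of the bounty game: an attacker who has found a first
# exploit chooses between disclosing for a known bounty, or paying
# an extra cost C (the "exploit gap") to fully drain a contract
# holding value A.

def rational_attacker_choice(contract_value, bounty, exploit_gap_cost):
    """Return the strategy a purely rational attacker would pick.

    The attacker attacks only if the net payoff of a full exploit
    (contract value minus the remaining cost C) beats the bounty.
    """
    attack_payoff = contract_value - exploit_gap_cost
    return "attack" if attack_payoff > bounty else "disclose"

# With no exploit gap (C = 0), a bounty smaller than the contract
# value never deters the attacker.
print(rational_attacker_choice(1_000_000, 100_000, 0))        # attack
# A large enough gap makes disclosure the rational choice, even
# though the bounty is a tenth of the contract's value.
print(rational_attacker_choice(1_000_000, 100_000, 950_000))  # disclose
```

The point of the design, as described above, is that raising C lets the bounty stay far below the contract's value while keeping disclosure the rational move.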
This cost C, incurred after finding the first exploit against a contract, is what we call an exploit gap. So we say: mind the gap, and introduce an exploit gap. What else would we like out of our bounty? We'd like the attacker to be able to either attack our contract or claim the bounty, but not both. We'd like the payout to be predetermined, so they can rationally decide whether to take the bounty. And we'd like the payout to be fully trustless. The internet is full of stories, several of which we cite in our paper, of hackers complaining that company X didn't pay them, or didn't pay them enough, or didn't agree with their assessment of the bug. We want to get rid of that completely by eliminating humans from the loop and letting attackers see exactly what they will get. So how do we do this? There are many, many ways to accomplish an exploit gap, several of which we mention in the paper, but the one we're going to look at today is what we call a Hydra program. It takes the classical idea of N-version programming and tweaks it a little: rather than focusing on availability, as you would want in an airplane or a space shuttle, we focus on integrity. What you do is develop multiple versions of your contract independently. Then, if any of them has an error, you can notice the divergence in the outputs of those versions, and use a governance contract to abort the contract, pay a bounty, and run some safe recovery procedure where the money is either returned or sent to a trusted multi-sig, et cetera. This sounds great, but you may have heard that N-version programming doesn't work well in practice because the versions are never fully independent. So what are we talking about? Let's look at when we actually get an exploit gap out of our Hydra contract. Say one of your versions has a bug.
Then, of course, you get the exploit gap, because it's very easy for the governance contract to notice the divergence. What if all of your versions have a bug? Well, you might still be okay: as long as it's not an identical bug, the governance contract can notice the divergence between the versions and pay the bounty. Sometimes, though, we get no gap, and this is a bit unavoidable. If all of your N heads have the exact same bug, then unfortunately you're out of luck, and the attacker can fully exploit the contract and break the game. This might happen if you have an incredibly common mistake, a universal compiler or EVM bug, or a bug in your specification that all of your versions implemented. So it's not a perfect system, but still, we say: let's bring back the '80s and maybe do something cool with it. Now let's look at some of the highest-value losses in the Ethereum ecosystem over the last few years: the Parity multi-sig, The DAO, SmartBillions, et cetera. We've highlighted in green the exploits we believe this kind of methodology could address very easily, and in red (the bottom line, which is hard to make out on this slide) the ones we can't address, because for reasons described in the paper we don't handle gas issues, as well as ones that maybe could have been addressed, but where it's more complicated than just looking and saying, yes, definitely, this is covered. So I'm going to say five and a half out of seven is not bad, and it's a great start towards the rigorous security our ecosystem demands. A warning: we've got some math coming up, so if you hate math, you might want to tune out for about the next three minutes. This is the cryptoeconomic analysis I was telling you about before. We want to ask the question: when is honest behavior incentivized?
Honest behavior is incentivized when the expected payout of the adversary for following the honest strategy, the payout of H, divided by the expected payout for following the malicious strategy, is greater than one, indicating that the honest strategy is expected to pay the adversary more than the malicious one. To arrive at our analysis, we model bug finding as a Poisson process. The T variables here are random variables modeling the time at which either an honest player or a malicious player finds an exploit, and the gap variable is the size of the exploit gap we managed to achieve with our contract. What does that look like? We do an analysis that involves defining an ideal functionality, which you never actually have to define as a user of the Hydra framework, but which represents, in our analysis, the functionality the developers had in mind when they created the contract. If you have the total number of heads, the space of all exploits, and you randomly sample a bug, you can derive a probability p that the bug applies to one of the heads, and then you can rigorously quantify, and even graph, your exploit gap, which is nice. The key thing to notice is that this exploit gap is exponential in N, the number of heads in your contract. So even if you don't have perfect independence among your heads, even if bugs are somewhat correlated and likely to affect multiple heads, you can still create a large enough gap and enough independence for the governance contract to detect the error. That's a really nice result, and it differs from classical N-version programming, where you do something like take the average of a bunch of different programs, so one program being off can badly corrupt your output and lose you all assurance.
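The exponential-in-N point can be illustrated with a deliberately simplified Monte Carlo model (this is not the paper's Poisson analysis): assume each sampled bug applies to each head independently with probability p. Only a bug that hits all N heads at once yields a full exploit with no gap, and that chance is p to the N:

```python
import random

# Toy simulation: a bug defeats the whole Hydra only if it applies
# to every one of the N heads. Under the (simplifying) independence
# assumption, that fraction is p**n, shrinking exponentially in n.

def fully_exploitable_fraction(n_heads, p, trials=100_000, seed=0):
    """Estimate the fraction of sampled bugs that defeat every head."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < p for _ in range(n_heads))
        for _ in range(trials)
    )
    return hits / trials

# Even with strongly correlated bugs (p = 0.5: each bug hits each
# head half the time), the fully-exploitable fraction decays fast.
for n in (1, 2, 3, 4):
    print(n, round(fully_exploitable_fraction(n, 0.5), 3))
```

Real heads are of course not independent in this clean way; the talk's claim is only that even partial independence, compounded across heads, can produce a usable gap.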
If you tuned out for the math, you can come back now. So what are we doing? We're launching an ERC-20 bounty; details are available at our website, thehydra.io. We have it funded with a thousand dollars today. A thousand dollars is the reward for breaking the full contract and achieving the full $A exploit I talked about on the first slide. If you only manage to break one of our heads, and our governance contract catches the issue, you get paid one tenth of that, which today is a hundred dollars. Why so low? Well, this is a research prototype, we're still testing, and we expect to iterate rapidly. Our hope is, within the next six months or so, to ramp up to a fifty-thousand-dollar bounty on the mainnet. The contract we have deployed right now is just an ERC-20 token pegged one-to-one to Ether, and it's running two Solidity implementations, one Vyper implementation, and one Serpent implementation. Yes, Serpent, and if you manage to trigger the bounty with a Serpent compiler bug, or break the whole contract that way, I'd like to see that happen on chain; that should be a bit of fun. We also believe we've written the most thorough ERC-20 test suite currently available open source, and we encourage you to check that out on our website. If you can propose additional tests we didn't cover, that would also be wonderful. The Vyper ERC-20 contract we wrote has actually been merged into Vyper as a standard example of how to write an ERC-20, so the project has already produced some real-world artifacts. We also have a Monty Hall game, a generalized Monty Hall where you can bet on the outcome. It's still in testing, and we don't know how large a bounty we'll launch it with. We wrote this contract because we expected it to be challenging.
There is code and a specification for it on the website, and we expect it to be live soon. I didn't do a demo, because I didn't want to risk things blowing up on me and we only have so much time, so I'm going to take you through a picture demo of what it looks like to interact with these contracts. It's very easy. The addresses they're deployed at and the ABI are all on our website, so all you really have to do is plug them into any Ethereum client or tooling, and you should be able to interact with the contract as if it were any other ERC-20. Here it is in MyEtherWallet. You can see there are a few functions you can access, some of which are not in the original ERC-20 interface but were added for the Hydra. One example is this bounty-claimed flag, which you can check on any token contract or other Hydra contract; it tells you whether someone has already claimed the bounty by causing the divergence we've been talking about. Here it is true. And this is what a history of transactions looks like: you can see the contract creation, someone depositing some Ether, and then they found an exploit, and that top one is what a bounty claim transaction looks like. Here it is blown up on Etherscan. I don't know if you can see this, the font is a little small, but what it has done is transfer one tenth of the Ether in the contract to the person who claimed the bounty, that one tenth of the three ETH we mentioned earlier, and transfer the rest back to the trusted multi-sig, for which we're currently just using the contract creator. So how do you actually develop a Hydra contract? We propose a Hydra process with three steps. The first step is specification. This is something you always want to do for your smart contracts anyway.
Please start doing this, everyone: write an independent specification of the functionality you're trying to achieve, in enough detail that a programmer with no communication with you could sit down and implement it correctly. The second step is implementation. In a Hydra, that means you ship the specification off to several different developers and they each implement your contract. It's worth noting that the Hydra is flexible in how you use it. You don't necessarily need multiple developers: you can also just compile with multiple compilers, if all you want is a bounty on compiler errors, or you can deploy it alongside your original contract, rather than fully in production, just to get a separate decentralized trustless bounty. While we were developing our ERC-20 with this process, it helped us find integer overflow bugs and ERC-20 non-conformance errors, that is, developers incorrectly implementing the specification. The most surprising one is a suicide issue that ended up placing in the Underhanded Solidity Contest, whereby you suicide-send money to a contract and corrupt its balance without allowing it to run its fallback function and react to the send. This suicide issue occurred in our contract because we were writing a one-to-one Ether peg. The totalSupply function on three of our versions would use a uint256 to keep track of how much Ether had been deposited, and just return that. But on our last version, totalSupply actually just returned the contract's raw Ether balance. So if you suicide-sent Ether to that one head and then called totalSupply, you'd be able to claim the bounty. And you could do that either inside a single contract transaction or as two separate transactions on chain.
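To make that divergence concrete, here is a toy Python reconstruction (not our actual Solidity/Vyper code; the class names are purely illustrative): three heads answer totalSupply from an explicit deposit counter, while the fourth answers from its raw balance, which a forced send corrupts without running any deposit logic:

```python
class CountingHead:
    """Correct head: totalSupply tracks Ether explicitly deposited."""
    def __init__(self):
        self.deposited = 0              # uint256-style deposit counter
    def deposit(self, amount):
        self.deposited += amount
    def total_supply(self):
        return self.deposited

class BalanceHead(CountingHead):
    """Buggy head: totalSupply reports the raw account balance."""
    def __init__(self):
        super().__init__()
        self.raw_balance = 0            # stands in for this.balance
    def deposit(self, amount):
        super().deposit(amount)
        self.raw_balance += amount
    def suicide_send(self, amount):
        # A suicide/selfdestruct send: Ether arrives with no fallback
        # call, so only the raw balance moves, not the deposit counter.
        self.raw_balance += amount
    def total_supply(self):
        return self.raw_balance

heads = [CountingHead(), CountingHead(), CountingHead(), BalanceHead()]
for h in heads:
    h.deposit(10)
heads[3].suicide_send(1)                # corrupt the buggy head only
supplies = [h.total_supply() for h in heads]
print(supplies)                         # [10, 10, 10, 11]
print(len(set(supplies)) > 1)           # True: divergence, bounty claimable
```

The governance contract's job is exactly that last check: the heads disagree, so the call aborts and the bounty becomes claimable.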
I think that's really cool, because we were not consciously aware of this anti-pattern while we were developing the contracts; it's something we realized and tested after the fact. The other nice thing is that all of the issues we found, about four or five during development, were all claimable bounties, and none of them were full exploits against all four of our versions. So that at least informally, empirically validates the approach. You might be thinking: well, this is horribly expensive, I'm going to have to pay a ton of gas; I don't want to pay five times the gas. It's certainly true that you pay more gas with a Hydra contract than with a regular contract. The nice thing is that it scales a little better than you'd expect. These dotted lines have a slope of one, which is how you'd naively expect a Hydra to scale: add another head, add one head's worth to your costs. But we actually scale below that slope of one, because of the constant overhead associated with EVM transaction costs, and we can scale significantly below it depending on what type of contract you have. The more you alter storage, the closer you get to the slope of one, because storage alteration is the most expensive operation our contracts perform. Another thing we'd like to announce soon is integration with Hack This Contract. Hack This Contract is a platform we developed a few months ago at the IC3 bootcamp, which has a bunch of examples of intentionally vulnerable and insecure Solidity contracts. You go there, you put in your account (for the testnet; we're using Rinkeby right now), it deploys vulnerable contracts for you, and then it's your job to hack them and have the site tell you you did it right.
Ideally we'd like to make this a hub for people trying to hack contracts, with both easy challenges that are obviously vulnerable and harder challenges that we think are secure and cannot be broken, and which pay out real ETH in case you do break them. That's a future direction we're working on as well. What about formal verification? I've been hearing a lot about this formal verification thing; doesn't it make this whole Hydra obsolete? Not really. It is my belief that programmers will always develop past what formal verification allows them to prove, and many lower-assurance contracts won't want to pay the cost of full formal verification, which can be expensive. This is a lower-bar approach: you only need to develop your contract three, four, or five times, and most of these contracts are two or three hundred lines, so it is not a substantial expense, especially when you consider that in some cases they hold hundreds of millions or even billions of dollars in user funds. So we see this as a stopgap on the way to formal verification, with the nice property that it encourages, even requires, you to write a detailed specification, which you can then use to formally verify your contract. All right, the last thing we handle: front running. We have this problem. You found a nice juicy bounty. It's worth five million dollars. You're excited to claim it and become the next ETH millionaire. So you submit your claim to the chain, and then someone else sees it, or some script or some miner sees it, and rebroadcasts the same transaction with a higher gas price, getting mined before yours. Suddenly they've got the five million and you've got nothing. This sounds pretty bad, and it actually gets worse.
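Why does simply copying the winning transaction work at all? Because nothing in a naive claim binds it to the original finder. A minimal commit-reveal sketch in Python, much simpler than the Submarine Commitments scheme discussed below and with hypothetical names throughout, shows the basic binding idea: commit to a hash of your address plus the claim first, then reveal; a copied reveal from a different address no longer matches:

```python
import hashlib

# Minimal commit-reveal sketch (illustrative only). The claimer first
# posts commit(...) on chain; only later do they reveal the claim data.

def commit(claimer_address, claim_data, nonce):
    """Hash commitment a claimer posts before revealing anything."""
    payload = f"{claimer_address}|{claim_data}|{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def reveal_valid(commitment, sender_address, claim_data, nonce):
    # The verifying contract recomputes the hash using the revealing
    # sender's address (msg.sender), so a front-runner re-broadcasting
    # the reveal from their own address produces a different hash.
    return commit(sender_address, claim_data, nonce) == commitment

c = commit("0xALICE", "divergence-proof", nonce=42)
print(reveal_valid(c, "0xALICE", "divergence-proof", 42))    # True
print(reveal_valid(c, "0xMALLORY", "divergence-proof", 42))  # False
```

Note that plain commit-reveal still reveals that a commitment was made; Submarine Commitments, as described next, go further by hiding the commitment itself among ordinary transactions.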
If you allow this issue to persist, it breaks our whole economic game, because what a powerful adversary with a good network connection can do is find an exploit and then sit on it until they see that someone else has also found one, and only then release it. They can keep trying to break the full contract while withholding their exploit from the bounty process, which is obviously something we'd like to prevent. So we introduced a new scheme, which we have a blog post on, called Submarine Commitments. It's used in the Hydra project, though not in the current alpha prototype, which is only intended for small amounts, but we do have code for it. We're waiting on EIP-86, which is what we need to deploy it, and after that we will have Submarine-Commitment-enabled bounties. The nice thing is that this scheme can even be used outside of bounties. So, the research contributions here. We have the first rigorous cryptoeconomic reasoning about what used to be ad hoc security and bug bounties, including how to price them; there's a pricing model in our paper. We have an N-head approach for achieving the exploit gap concept we've introduced. We've introduced software and bounty co-development, putting the bounty into the software development process as a first-class citizen. We've revived the old, stale, seemingly dead-end N-version approach, now aimed at correctness. And we've developed what we believe will eventually become the most secure contract on the Ethereum mainnet, because it's the only one that really has these rigorous economic guarantees of security. A really nice thing I'd like to point out is that this kind of decentralized bounty process is not possible outside of smart contracts; it's something new enabled by these systems. All right, what do we want from you? I have 20 seconds left.
Hack the contract, join the riot, claim some bounties; it's alpha, it should be easy money. If you want a bespoke bounty for your token or anything else, feel free to get in touch. Feedback on the paper and code is always appreciated. And eventually, if projects in the community that are doing well use the ERC-20 code and contribute to these bounties, that would be incredible. Special thanks to the NSF for funding this project, both through grants and through a graduate research fellowship for me. Thanks to our reviewers, Rahul and Paul, for reviewing this paper. And thank you to our IC3 industry sponsors, Chain, Microsoft, Intel, Fidelity, Digital Asset, IBM, and J.P. Morgan, for sponsoring this work. Thank you all for listening. Here's the official sponsorship and disclaimer slide. And that's it. Thank you.