I hope everyone is doing well. Today we're going to talk about writing secure and correct smart contracts. In particular, I'm going to talk about some directions and some problems we've been thinking about in academia, and I'll also try to tie it into an industry perspective for those of you who are smart contract practitioners.

All right, so let's start out with who IC3 is. IC3 is a research hub centered around the universities you see there: mostly Cornell University, also UC Berkeley, Cornell Tech, and the Technion. We've got 12 faculty at the last count, and we're adding more all the time; in the last month we've added at least two or three faculty. And we've got many, many students at all levels of education.

Before I start, I'd like to give a special thanks to the Ethereum Foundation for inviting us here and putting in all the work to bring us together. And also to our industry partners at Chain, Digital Asset, and IBM: thank you very much for your continued support of smart contract research. We'll be announcing more partners in the coming weeks, and we'd love to have you as a partner. I know there are a lot of companies here who are interested in the correctness and performance of cryptocurrencies, blockchains, and smart contracts. If that's you, please feel free to check out our website at initc3.org and contact us.

All right, so here is the work that IC3 is doing. As we see it, there are five grand challenges in smart contracts and cryptocurrencies. One is scaling and performance. The second is correctness. The third is confidentiality. The fourth is authenticated data, that is, getting data sources into these systems. And the last is safety and compliance, which you've heard a little about today. In this talk, I'm going to focus on correctness and its relationship to security, and on how we write secure and correct smart contracts.
This is intended to be a high-level overview; we're not trying to be comprehensive, and there's a lot of work we're not going to mention. We're also going to provide some suggestions for practitioners, since there are many of you in the room writing smart contracts. And I'm going to draw frequent parallels to safety-critical software, whether in planes, cars, power grids, or other infrastructure systems.

All right, so we all know this problem pretty well by now as a community: you have a software bug, it leads to money being lost or stolen, and then the people who write and participate in the smart contracts cry. There are some things about this challenge that are definitely unique to the cryptocurrency and smart contract ecosystem. One is that security is more closely tied to code correctness than anywhere else. Any issue in correctness, which leads to a fault or software bug, may be directly monetizable by an attacker, and attackers can monetize these exploits anonymously and untraceably. We've also got a unique adversarial environment: everything that runs on these systems, from the miners to the node network to the actors that participate in the contracts, could potentially be adversarial. And if that's not bad enough already, we're looking at public code. In your car or in an airplane, you don't have access to the code that's running to try to probe it for exploits, but with smart contracts, all code is public on the blockchain.

So how do we start tackling this pretty difficult problem? As we see it, there are three prongs that will help us stick a fork in this problem of smart contract correctness and security. The first is formal verification and specification. The second is escape hatches, otherwise known as kill switches. And the last is bug bounties.
Getting right into it, let's talk about formal verification. Formal verification is probably the hottest of these three topics in the community right now; there are several talks about it at this event, and it's a really exciting space to be in. So what do we like about formal verification? First of all, specification is a great virtue. By rigorously and mathematically specifying what you're building, you come to understand the systems you're making on a deeper level. Specifying the lower levels this rigorously, things like the EVM and the languages we're using, also helps authors of higher-level systems know exactly what guarantees they can expect from those systems. So specifications are great, and the more rigorous they are, the better we can tackle the problem of writing correct contracts. English specifications are pretty common in the field, and they are definitely not enough: they admit ambiguity. Want an example? Check out the ISO C11 standard, which has a whole committee whose job is to argue about whether certain programs do or do not conform to the standard. Formal specifications can also go further by serving as criteria for forks. The question of when to fork these systems could be answered by when the implementation of the system differs from its specification, in which case you have a very clear bug. Specifications also give you other side benefits: they can help us find bugs, and they can help us generate tools that we use to analyze our contracts.

All right, so let's talk a little bit about the work being done here. Both of these slides have later talks on them, so I'll be brief. One of them is a tool called Oyente, out of the National University of Singapore.
What they did is they took the semantics of the EVM that was very nicely laid out in the Ethereum yellow paper, which I think is an excellent document and a great way to set the standard for the rigor we expect from these systems, and they built tools on top of it to analyze the blockchain and try to discover bugs in smart contracts running in the wild. I highly suggest you read the paper; I love the work this team did. It's not perfect: I do think there are many false positives that come with their approach, which we can refine away as these tools evolve. If you want to talk more about this work, you can ask questions at the presentation that's coming up later in the event, or you can find me afterwards.

All right, here is another great example: Dr. Christian Reitwiessner's work, I hope I pronounced that right, with a Why3 backend for formal verification of Solidity. Here you can see a very simple funds contract and a specification over this contract, which was presented at the IC3-Ethereum bootcamp earlier. The contract allows you to withdraw a certain number of shares, and it comes with a specification relating the shares to the balance of the contract. Now, here is another contract that satisfies this specification, and it helps illustrate that writing complete specifications is pretty difficult. If you're a trained developer sitting here looking at a set of specifications, then just as when writing buggy code, there could be something that you miss. This is not to say that this work was intended to catch those sorts of cases; this is obviously a trivial example. But the general point is that anywhere you look in the formal verification space, writing these specifications is extremely difficult. So the gaps in this work: specification is hard, and some properties can't be specified at all.
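To give a flavor of that pitfall, here is a hypothetical sketch in the spirit of the slide's example, not the actual contract from the talk. Both contracts below satisfy a plausible specification ("a withdrawal of `amount` reduces the caller's shares by `amount` and pays out exactly `amount`"), yet the second one is clearly wrong: the spec never said *who* must receive the ether. All names are invented for illustration.

```solidity
pragma solidity ^0.4.0;

// Hypothetical reconstruction, not the contract from the slides.
// Intended spec: withdraw(amount) debits the caller's shares by `amount`
// and transfers exactly `amount` wei out of the contract.
contract Funds {
    mapping(address => uint) public shares;

    function deposit() public payable {
        shares[msg.sender] += msg.value;
    }

    function withdraw(uint amount) public {
        if (amount > shares[msg.sender]) throw;
        shares[msg.sender] -= amount;
        if (!msg.sender.send(amount)) throw;
    }
}

// This variant satisfies the same spec as written -- shares are debited,
// `amount` leaves the contract -- but it pays the wrong recipient.
contract SneakyFunds {
    mapping(address => uint) public shares;
    address owner;

    function SneakyFunds() public { owner = msg.sender; }

    function deposit() public payable {
        shares[msg.sender] += msg.value;
    }

    function withdraw(uint amount) public {
        if (amount > shares[msg.sender]) throw;
        shares[msg.sender] -= amount;
        // Spec satisfied, intent violated: funds go to the owner.
        if (!owner.send(amount)) throw;
    }
}
```

The missing clause, "the payout goes to `msg.sender`," is exactly the kind of property a developer can forget to write down, which is the point of the example.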
I'm thinking specifically about properties of incentives, game-theoretic properties, and higher-level properties of smart contracts that are pretty much impossible to specify. And often, when you have a proof of these kinds of things, you are trusting the tools that output the proof, which is not an ideal situation, because these tools are very hard to audit and require significant expertise; we don't even really know how to audit them in general. A key takeaway I'd like you to get from this is that any good tool must use some sort of semantics, that is, a definition of the lower levels of the language and the infrastructure on which contracts operate. So when you're evaluating one of these tools, ask yourself: what semantics is it using? How was it audited, and what guarantees do we have that the system will actually conform to those semantics? Right now, to really formally verify complex systems, we need experts, we need multiple PhDs; it's not an easy problem. And there are also some famous incompleteness and undecidability results suggesting that gaps in these logics could require more work to achieve a robust system.

All right, so escape hatches. These come out of the need to deal with not always being able to verify our contracts. What we need here is something they figured out very early on in safety-critical systems: we need to put a human in the loop at some point. It's true that the promise of these systems involves, in many cases, taking humans out of the loop, and I think in many cases that can be achieved, but we shouldn't take it as an absolute. If verification or bounties or the other prongs of the approach fail, it's important to have some sort of fail-safe or backup mechanism. There are a lot of parallels to legacy contract law here, so there are a lot of lessons to be learned from how people have done this previously in contracts, and also parallels to safety-critical systems.
I'll pose the question to the audience: would you write nuclear power plant software that has no kill switch in it? Probably not, and I wouldn't want a multi-hundred-million-dollar contract to lack one either. We've been doing some work on this at IC3, led by Bill Marino. I highly suggest you read the paper for more information, since I don't have much time to discuss it. Bill was a lawyer by trade who then got a master's in computer science, and he looked at parallels between smart contracts and legacy contract law. He examined the concepts of termination, rescission, and modification that are codified in the law, which were designed to solve problems that have come up in historical precedent, and he wrote little Ethereum libraries mirroring these concepts from traditional contract law. So if you're writing a high-value smart contract, I highly suggest you take a look.

Again, this is not problem-free; we've got a bunch of gaps in this approach. How do we verify that the escape hatch code itself is correct? What layer do we put the escape hatches in? Do we want escape hatches in the EVM, which would inherently be less general, since you'd have a fixed set of hatches? Do we want to put them in the compiler, which saves some generality by deferring the problem to the language? Or do we want to just tell contract authors: use well-audited libraries of escape hatches that you can modify as you see fit, giving us full generality? It's an open question. There's also a lot of potential for abuse in escape hatches. A bad actor could trigger an escape hatch in a case where it's profitable for them, even if that doesn't follow the intent of the hatch. That's something we always need to keep in mind. So, can you think of a badly made escape hatch?
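For concreteness, here is a minimal sketch of the library-based flavor of the pattern: a trusted guardian can freeze the contract and, after a delay, recover its funds. This is an illustration under my own assumptions (the names, the single guardian, and the one-day cool-down are all invented), not Bill Marino's actual library code.

```solidity
pragma solidity ^0.4.0;

// Illustrative escape-hatch sketch (hypothetical, not IC3's library).
// A guardian can halt normal operation and, after a cool-down, drain
// funds to safety. Real deployments might use a multisig guardian.
contract EscapeHatch {
    address public guardian;
    bool public frozen;
    uint public frozenAt;

    function EscapeHatch() public { guardian = msg.sender; }

    // Normal business logic is gated on this modifier.
    modifier notFrozen() { if (frozen) throw; _; }

    // Kill switch: halts everything marked notFrozen.
    function freeze() public {
        if (msg.sender != guardian) throw;
        frozen = true;
        frozenAt = now;
    }

    // After a one-day cool-down, the guardian moves funds out.
    function escape() public {
        if (msg.sender != guardian || !frozen) throw;
        if (now < frozenAt + 1 days) throw;
        if (!guardian.send(this.balance)) throw;
    }

    // Example of gated business logic.
    function deposit() public payable notFrozen {}
}
```

Note that this sketch immediately raises the abuse question from above: the guardian can freeze and drain at will, so the hatch is only as trustworthy as whoever holds the key.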
There have definitely been contracts that intended to provide a mechanism for users to remove funds, and that mechanism was then abused to drain all of the funds out of the system. I won't mention specifics, but it's on line 666 of the contract that I'm talking about. It's also worth noting that forks of the system in general can be viewed as an escape hatch: whenever there is a catastrophic failure involving multiple aspects of the system, you get the social-consensus-based escape hatch that comes with the backend architecture of these systems.

All right, the last prong of our three-pronged fork: bug bounties. We believe bug bounties are very important, because the incentive structure around developing exploits is pretty broken without them. What do we mean by this? In security, traditionally we say it's never impossible to exploit a system; it can only be expensive. And when the cost of exploiting a system is actually much lower than the amount to be gained from exploiting it, an exploit and an attack will pretty much inevitably surface. To fix these sorts of issues, the general security community has recognized the value of providing bounties that financially incentivize people to defend the system, rather than only financially incentivizing attackers. Without them, defenders are really only motivated by ego or maybe goodwill, which, as we've seen in the past, are not powerful enough motivators to drive the type of work required for a full security audit. So, the good about bug bounties: we have the poor man's formal verification or, as we like to call it, decentralized, censorship-resistant, anti-fragile, incentive-compatible, crowdsourced verification. You can tell that one to your managers, and I'm sure they'll be glad to fund your bug bounty program.

All right, so we've got some cool work going on in this space.
Some of my favorite work is the idea of using prediction markets as canaries for when an exploit is about to happen, because attackers will naturally be incentivized to reveal this by taking a position on the market in that direction. That's the more fringe work going on. We also have work that addresses these issues in a more traditional way. There's a blog post by Simon de la Rouviere, again, I hope I'm pronouncing that right, on Medium called Assert Guards, which I highly suggest you take a look at, and which details one potential approach in this space. Escape hatches are also covered in the Ethereum best-practices documentation for smart contracts. And there's a user called ThouChallenge on Medium who has posted a series of really small contracts containing common patterns and asked the community to try to break them, in order to reveal underlying language flaws or issues in our coding practices that we haven't yet thought of. There's a lot more work going on in this space; if you Google "Ethereum bug bounty smart contracts," you'll come up with a ton of stuff.

There's a common trap, in academia and in general, where you have all of these great solutions and proposals and you consider the problem solved. I would urge you to avoid that line of thinking, because there are still gaps in this bug bounty approach. With prediction markets, there's always the question of how to avoid bad incentives, and how to prevent incentivizing attackers and bad actors to actually deploy their attacks on the mainnet and profit. There's also the hard problem of how to create trustless bounties with trustless payout.
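To make the trustless-payout problem concrete, here is a hypothetical sketch of a naive bounty escrow. Everything here is invented for illustration; the point is what's wrong with it: the hunter must reveal the bug for the author to verify it, and the payout depends entirely on the author's honesty.

```solidity
pragma solidity ^0.4.0;

// Naive bug bounty escrow (hypothetical sketch). The author funds the
// bounty at deployment; a hunter registers a claim; the author decides
// whether to pay. Nothing here is trustless.
contract NaiveBounty {
    address public author;
    address public claimant;

    function NaiveBounty() public payable { author = msg.sender; }

    // The hunter registers a claim. The exploit itself has to be
    // disclosed off-chain for the author to judge it -- already a leak.
    function claim() public {
        if (claimant != address(0)) throw;
        claimant = msg.sender;
    }

    // Payout depends entirely on the author's honesty: the author can
    // simply never call this, patch the bug, and keep the funds.
    function approve() public {
        if (msg.sender != author || claimant == address(0)) throw;
        if (!claimant.send(this.balance)) throw;
    }
}
```

Removing both trust assumptions at once, paying out automatically on a valid exploit without the exploit being revealed, is exactly what the research below is after.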
We don't want to leak these exploits to the testnet, because once you leak an exploit to the testnet, you've removed the incentive for defenders, and attackers can take that exploit and deploy it on the mainnet. We also don't want to trust contract authors to pay out, because that significantly harms the attacker's incentive to report the bug: if they think it's a really profitable bug, the contract authors may just invoke a kill switch or fix it before the reporter has a chance to get paid. And there's the very, very hard problem of how to define conditions for bug bounties: how do we know when an exploit is a valid claim of a bounty? We have a bunch of academic solutions here. One is sealed-glass proofs; we have a paper about that on our website, which basically involves using trusted hardware (Intel SGX) to deliver a proof of an exploit without revealing the exploit itself. Maybe zero-knowledge proofs could be useful here too, et cetera. Bug bounties are also hard to do for subtle issues, the things I talked about before: game-theoretic issues or fundamental flaws in the architecture of your contracts. So, like our other two prongs, they're not a silver bullet. Also, I urge you not to forget traditional software engineering: tests, fuzzing, static and dynamic analysis, phased deployment, upgrades, smaller test contracts. These are all very valuable things; I don't think I need to talk about them, it's pretty obvious how to use them, and I highly suggest you cover all of these bases if you're writing a high-value contract.

All right, takeaways. We've got lots of work still to do here. The ecosystem will definitely develop good formal tools; we have a number of talks on this. Think about humans in the loop when you're writing your smart contracts, and use bug bounties as a stopgap for verification and as an incentive fix. Thank you for listening, and if you want to interact with us further, please check out our website at initc3.org.
Check out our publications and we are always looking for industry collaborations and partners. We'd love to work with you to solve the open research problems in the space. So definitely contact us or find me somewhere if you have questions about anything I've said here. All right, thank you very much. Thank you.