I'm Rain Revere, and I have a particular interest in tooling and security, and especially in how we can use tools and education to evolve our ecosystem and create a healthier community overall. As I mentioned on the panel, there are some heuristic approaches to visualizing and discovering where there are vulnerabilities in your contracts, and that's what I'm going to focus on. We've had some great examples of formal verification, and that's the kind of work that absolutely needs to happen in order to improve security in our field. But there's also some low-hanging fruit: there are tools you can use to get easy indications of where your contract might be failing or might have a security vulnerability. So I'm going to give a couple of examples of common attacks, similar to the last presentation, and I'm also going to talk a little bit about the different types of developers in our community, offering a more human lens on how we talk about security, so you can listen to others talking about security and figure out where they're coming from.

So the big question is: how do you spot smart contract security vulnerabilities? By visualizing security. On the one hand, I mean that in the literal sense: we can create tools that give you a visual representation of where a smart contract might be vulnerable. In another sense, I mean visualize in a broader way: to see what is there, to see what you didn't see before. Any security vulnerability could be described as something that was there in the code that you didn't see before, so anything we can do to help you see what's already there is going to improve security.

Now, some common attacks. I'll cover these very briefly, because you saw more detailed examples in the last presentation. First, array griefing, a denial-of-service attack: if a contract iterates over an array, and another function adds items to that array, the array can get so large that iterating over it will never complete.
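As a hypothetical sketch of that pattern (the contract and names here are my own illustration, not an example from the talk):

```solidity
// Hypothetical sketch of array griefing / denial of service.
contract Raffle {
    address[] public entrants;

    // Anyone can grow the array without limit.
    function enter() public {
        entrants.push(msg.sender);
    }

    // If entrants grows large enough, this loop exceeds the block
    // gas limit and the function can never complete.
    function refundAll() public {
        for (uint i = 0; i < entrants.length; i++) {
            entrants[i].send(1 ether);
        }
    }
}
```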
You'll run out of gas, which means any function that iterates over it could potentially be blocked, and that could be fatal for your contract.

Next, the re-entrancy attack. Any external call, call.value for example, is free to call back into any of your public functions. So you have to be really careful: as was mentioned before, you really can't rely on the state across a call that's vulnerable to re-entrancy.

And of course, underflow and overflow. This is a very simple one, but if you have an unsigned integer and you subtract a larger number from it, you're not going to get a negative number; it's unsigned. You're going to get underflow, and instead of a negative number you're going to get a massively large number, which could really change the logic of your contract, for example when you're checking whether a balance is available.

So what do each of these have in common, just at a surface level? They all have specific code smells. Code smell doesn't sound like a very formal term, and it's true: a code smell is just some indication that there could be a problem here. Even if that's not a totally formal thing, it's enough for us to investigate. It's enough to say, yes, we should look into this. And if these attacks have code smells, it means that if we can detect those code smells, we can help prevent those errors.

There are a couple of ways of detecting code smells, and I'm going to talk about them. Static analysis is a method of testing and evaluating a program without executing its code. This is huge: if you can look at the source code of a program and make some observations about it, that's going to be really powerful and give you ways to detect some of these possible problem areas. The first step in static analysis is parsing the code. You have your source code, and what you want to do is take that source code and get out an abstract syntax tree.
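To make the underflow concrete, here's a small simulation of 256-bit unsigned arithmetic (my own illustration in Python, not code from the talk):

```python
# The EVM's uint256 wraps modulo 2**256 instead of going negative.
UINT256_MAX = 2**256 - 1

def uint256_sub(a, b):
    """Subtract the way the EVM does for unsigned 256-bit integers."""
    return (a - b) % 2**256

balance = 5
withdrawal = 10

# Instead of -5, we get a massively large number.
result = uint256_sub(balance, withdrawal)
print(result == UINT256_MAX - 4)  # True: the result is 2**256 - 5
```

A balance check like `balance - withdrawal >= 0` is therefore always true for unsigned integers, which is exactly how this bug changes a contract's logic.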
You can think of an abstract syntax tree as a map of the code. Your code has functions and state variables, and if you have a map of all of those that you can explore programmatically, then you can write code that analyzes your code: code that reads code. That's static analysis.

There's a lovely community-based project called Solidity Parser, and parsing the source code with it is probably the first step towards doing any static analysis. When you parse the source code, you get an abstract syntax tree, and from that abstract syntax tree you can do quite a lot of things. The one I want to share with you today is a library I created as an experiment: can we look at smart contract code and determine where untrusted calls are made? If any send, or any untrusted call, is a potential security vulnerability, what if we could easily look at the smart contract code and tell where those calls are being made from?

So I created a small library called SolGraph. The input is the abstract syntax tree we got from the parser step, and the output is a DOT graph, something that can be made into a visual form.

Let's say you have a simple contract. Here's an example of a non-standard token: you have a constructor that creates a million tokens, and then you have some functions to withdraw or get a balance. The key points here: the withdraw function is clearly insecure. It makes an external call, anything can happen during that external call, and in fact there are no checks around it, so that's a problem. But you can't really tell that unless you're a developer and you know to look for these things. Other things carry no risk: the getBalance function is constant, and a constant function can't change the state of the application. So certain functions are no risk, and certain functions are high risk.
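A minimal sketch of a contract like the one described, in the Solidity style of the time (my reconstruction; the example on the slides may differ in its details):

```solidity
contract MyToken {
    mapping (address => uint) balances;

    function MyToken() {            // constructor (old-style Solidity)
        mint(msg.sender, 1000000);  // create a million tokens
    }

    // Internal: cannot be called from outside the contract.
    function mint(address to, uint amount) internal {
        balances[to] += amount;
    }

    // Constant: cannot change state, so no risk.
    function getBalance(address who) constant returns (uint) {
        return balances[who];
    }

    // Insecure: untrusted external call, made before the balance
    // is zeroed and with no check on the call's result.
    function withdraw() {
        msg.sender.call.value(balances[msg.sender])();
        balances[msg.sender] = 0;
    }
}
```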
Let's see if we can have a tool to more easily differentiate those, to pay attention to what's important and ignore what's not. SolGraph, as I mentioned, is intended to take that abstract syntax tree and output a DOT graph so you can easily visualize these functions. And that's exactly what we get when we run this code through SolGraph.

Here we have the output from SolGraph, showing each of the functions as a node. You can see the mint function is in gray, which means it's internal; we know it can't even be called externally, and that's helpful to know. The getBalance function is in blue, which means it's a constant function; again, state can't be changed, so it's pretty good to know there's no vulnerability there. The withdraw function is the one we really want to pay attention to. It's highlighted in red, and it points to an untrusted call, so the graph makes it quite clear what's happening. This image is the actual output of SolGraph, not something I constructed for the slides. You can run any source code through SolGraph, and anyone can do it, even non-developers.

This seems really helpful: an easy way to identify potential risks. It's the heuristic approach, not formal completeness, but a simple way to identify where the potential risks are. If something like this had been run on the DAO, for example, you would have had a visual representation of where the re-entrancy attack could happen, and such a small investment would have paid off in a really big way.

There's also dynamic analysis. I'm not going to spend much time on it; that's the kind of analysis where you're actually running the code. Unit tests are an example of dynamic analysis: you run the functions and make sure the output is what you expect in certain use cases. I'd like to say that we need more standardized unit testing patterns for Solidity.
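As a sketch of what one standardized pattern might look like, here's a hypothetical, framework-free access-control test written in Python (the `Token` stand-in and all names are my own; a real project would exercise a deployed contract through a testing framework):

```python
# Hypothetical sketch of a reusable access-control test pattern.

class Unauthorized(Exception):
    """Stand-in for a Solidity throw."""

class Token:
    """Stand-in for a deployed contract with an owner-only action."""
    def __init__(self, owner):
        self.owner = owner
        self.paused = False

    def pause(self, caller):
        # Owner-only: everyone else should just throw.
        if caller != self.owner:
            raise Unauthorized()
        self.paused = True

def assert_throws_for_non_owner(contract, action, owner, others):
    """The reusable pattern: the owner succeeds, everyone else throws."""
    for caller in others:
        try:
            action(contract, caller)
        except Unauthorized:
            continue
        raise AssertionError("non-owner %s was allowed" % caller)
    action(contract, owner)  # the owner must succeed

token = Token(owner="alice")
assert_throws_for_non_owner(token, lambda c, who: c.pause(who),
                            owner="alice", others=["bob", "carol"])
print(token.paused)  # True
```

The point is that a helper like `assert_throws_for_non_owner` could be written once and reused across contracts, instead of every project re-deriving the same checks ad hoc.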
Every Solidity contract sort of has its own ad hoc unit testing, but some things are pretty common across smart contracts. Access control: what can the owner do that non-owners can't do? What about participants in other roles, like subscribers? What are they allowed to do? And who are the people the contract should just throw for? That's a pretty standard thing across smart contracts, and we don't have a standardized approach, so this is something I'm looking for from the community.

That's my technical discussion. Now I'd like to share something a little less technical that I hope will help you have more productive conversations about security. To introduce it, I'm going to talk about what I call the three developer cultures. Different developers have different values and different perspectives. They bring different truths to the table, but they can also have different blind spots.

The first is the web developer. The web developer often uses languages like JavaScript, Java, PHP, Ruby, and Python. Web developers value simplicity, usability, and practicality. They want to build a product. I'm a web developer: I want to build something that I can get out there and let users test. The downside is that because web developers often work in very high-level languages, they don't have the intuition for things like an array out of bounds, a spamming attack, or other attacks that are very specific to the system, the hardware, or the VM. That's a downfall when it comes to security: they may undervalue those things.

The second is the systems engineer. This is a great category, because a lot of the folks at the Foundation, a lot of the folks doing the lower-level EVM work, EVMC, are systems engineers. They know the EVM, they know the hardware. In the talks yesterday, when you heard about 64-bit versus 256-bit, they understand what the implications are.
Systems engineers are experts at understanding the implications of all of these details. But they also have a potential blind spot, which is that they can undervalue abstraction. A C++ programmer might say, well, everyone should know all these details about memory allocation and the risks that are there, and that's a great ideal. But the truth is that higher levels of abstraction allow more people to develop applications and be effective, and we should embrace abstraction. It's more about finding the right abstractions than getting rid of abstraction completely.

The third group is the academic. We've seen some great examples of formal verification, including approaches using F* and Why3. These are rigorous solutions, and they're really important for establishing, mathematically, proofs of the validity of certain applications. Sometimes they can be impractical; usually it's just that it's hard. We don't have a lot of academics in the world offering these solutions and helping to integrate them into the community; it's a minority group. So even though they have perhaps the most important contributions to the security conversation, this is a small group, and their findings can be difficult to digest. They have a lot to offer, but it can be difficult for the rest of the community to figure out how best to incorporate it.

And truthfully, I want to add one more. There's a fourth developer culture, and that's the non-developer. The non-developer is absolutely essential to having any kind of successful product: business, marketing, whatever it is, it's a huge part of making this work. But as a non-developer, by definition, they don't know what things are difficult and what things are trivial, so they're relying on the expertise of others. And that's fine, that's part of the system, but we should understand that as non-developers, they really need to trust us developers.
They really need to trust the academics and the engineers. Unfortunately, this group can also be the source of a lot of speculation. A great example, again, was the DAO attack, where the price of Ether plummeted due to a bug in one specific contract. That's sort of like yahoo.com going down and people saying, oh God, the internet is broken. One contract is not the entire Ethereum network, but sometimes those technical distinctions get lost, and that can lead to a lot of speculation. So really my point with all of this is: let's work together. We have different perspectives from different people about security and what's important, and all of those are valuable.

In summary: static analysis is a quick, low-investment way to detect code smells and spot potential high-risk areas in your contract. Dynamic analysis offers other helpful tools, but requires a little more investment in unit testing patterns. And lastly, the developer cultures are a helpful way to think about the different perspectives and values being brought to the security conversation.

I'm Rain Revere. Thank you. Come say hi; I look forward to talking to you.

Thank you, Rain. Wonderful. Thank you.