So, Leah and I are going to be talking about formal verification of Solidity contracts, and of smart contracts in general: some of the ways in which the language and the compiler can assist in this process, and also Act, which is a language for specifying the behavior of smart contracts. I'll try to keep this relatively brief, leaving a lot of room for discussion about what can be done to improve the workflow as it stands. To begin, I'll position this question of how to specify the behavior of Solidity functions in context, to open up the discussion. One way to do this is to use asserts directly in the function source. If you are not using annotations, but rather referring to the variables directly in the Solidity code, this is what that can look like for a transfer function: you assert that the relevant variables change as expected, expressed as pre- and postconditions. If you don't want to do it directly in the function source, an alternative is a wrapper function. This relates to the discussion we just had on conventions for testing frameworks, because it looks very similar to how you would write a test, at least if you write your tests in Solidity. What's going on here is fairly straightforward: you cache some variables before executing a function, and then compare those cached values, according to some condition, against the state you get after executing the function. And then there is the alternative of specifying the behavior in a different language altogether. Here, that language is Act, which I'll go a little deeper into during the next couple of minutes.
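To make the cache-then-compare pattern concrete, here is a small sketch in Python rather than Solidity, purely for illustration; the function and variable names are hypothetical, not from the talk's slides.

```python
def transfer(balances, src, dst, value):
    """Model of a token transfer with inline pre/postcondition asserts."""
    # Precondition: the sender must have enough balance.
    assert balances[src] >= value, "insufficient balance"

    # Cache the pre-state, as a wrapper test function would.
    pre_src, pre_dst = balances[src], balances[dst]

    # The implementation under specification.
    balances[src] -= value
    balances[dst] += value

    # Postconditions: each balance moved by exactly `value`
    # (only stated for distinct keys; src == dst is a separate case).
    if src != dst:
        assert balances[src] == pre_src - value
        assert balances[dst] == pre_dst + value
    return balances
```

A wrapper-style test would do the same caching and comparison outside the function body instead of inline.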
I think this is a good example to get a flavor of what the syntax can look like if you want to really highlight everything that's going on inside a transfer function. It should be noted that this spec makes a more precise claim than the previous two, which really only talk about the updates that happen to the balanceOf mapping. Here, we're also saying under which precise conditions these updates happen: we're saying that the call value should be zero, since this is a non-payable function; we're making explicit the check that is done implicitly here by SafeMath; and we're immediately forced to consider the edge case of what happens when you transfer to yourself. In Act, specs are forced to split up every claim that refers to mappings depending on whether the keys collide or not. I'll get more into that in the next slide, but let's first talk a little about the different approaches and their pros and cons. For the first two, a big advantage is obviously that they're integrated: the developer doesn't need to learn an additional language, however obscure you may find it, but can work directly in Solidity. If you write the properties of the function directly in the source, things are also pretty transparent and auditable for people reading the implementation, because they'll see your claims about what the behavior is supposed to be. They can test it for themselves with their own frameworks, or just get a better understanding of what the function is supposed to do, if they prefer reading this functional description over the implementation. An advantage of the latter two, having a wrapper or a specification in a language completely outside of Solidity, is that they're more agnostic to the particular implementation.
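The case split that Act forces for mapping claims can be modeled in a few lines of Python (an illustration only; the helper names are mine, not Act's):

```python
def apply_transfer(balances, src, dst, value):
    """Minimal model of the transfer implementation."""
    assert balances[src] >= value  # the implicit SafeMath-style check
    balances[src] -= value
    balances[dst] += value
    return balances

def spec_holds(pre, post, src, dst, value):
    """Post-state claim, split on key collision as an Act spec would be."""
    if src != dst:
        # Distinct keys: both entries move by exactly `value`.
        return (post[src] == pre[src] - value
                and post[dst] == pre[dst] + value)
    # Colliding keys (self-transfer): the balance is unchanged.
    return post[src] == pre[src]
```

Note that the naive claim "the destination balance increases by `value`" is false in the self-transfer case, which is exactly why the split is forced.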
If you were to change how transfer is implemented, you could probably still reuse the same testing function, or the same spec; but if you change how the implementation achieves what the spec describes, you'll also be forced to rearrange the claims you're making inside the implementation. So you end up editing both what the contract actually does and the description of what it should do in the same development process. Another advantage of using a separate language, designed for the purpose of specifying smart contracts, is that you can make design decisions that are aligned with how functions are described, independently of how they are implemented. You don't need to care about the most gas-efficient way to do something; you just want to describe the result of whatever implementation you might have. Also, as I mentioned, there are features of the language designed to lead the developer into making sure they really think about all of the edge cases that may exist in the implementation, and they are forced to think about them explicitly. And when one has an outside language that exists beyond the scope of the EVM, one can talk about all of the variables involved as true integers, not just EVM words that wrap around at 2^256. As you can see here on the last line of the final leaf block, we're able to express that the addition of value to the balance of `to` should not overflow, and we can express that by explicitly giving the bound of 2^256, even though that is essentially inexpressible if we are only allowed to use uint256 addition. There are some additional points here that I think are obvious.
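The distinction between true integers and EVM words can be illustrated in Python, where integers are unbounded (an illustrative sketch; the function names are mine):

```python
# EVM arithmetic is modulo 2**256; a spec language over true integers
# can state the "no overflow" bound directly.

def evm_add(x, y):
    """Addition as the EVM computes it: wraps around at 2**256."""
    return (x + y) % 2**256

def no_overflow(x, y):
    """The Act-style side condition, stated over unbounded integers."""
    return x + y < 2**256
```

Inside the EVM, `x + y < 2**256` cannot be written down as such, because the sum has already wrapped; the usual workaround is an indirect check like `x + y >= x`, whereas a spec over true integers states the bound directly.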
One big approach that I didn't mention here is of course the solc-verify approach, where you use annotations instead of the source code directly. So, to summarize the essence of Act: it is designed to be a human-readable specification language in which you can express function descriptions and contract invariants (I'll get to those later), and from which you can generate proof obligations for multiple backends. There is a large number of formal verification tools emerging, and there have been for a while, that are getting very good right now, and it's really interesting to try these different tools out, because they have different strengths and weaknesses and also operate in slightly different domains. Act can be used to try these tools against each other and compare the results, but also to utilize the different tools that target different domains. When you're working with source-level specifications or development tools, there's a risk of the compiler doing something you're unaware of or behaving unexpectedly, and this is a blind spot for the formal verification process. On the other hand, if you're doing bytecode verification, it takes a very long time and can be difficult for things like contract invariants. So Act is an attempt at modularizing the process of doing formal verification for smart contracts and being interoperable with these different backends. The first couple of backends we will be targeting are: a Coq backend, for claims that are difficult to prove using SMT solvers and really require manual proofs; SMT queries, for contract invariants and certain checks that are easier to do on the SMT side; and, for bytecode verification, we have a prototype that exports proofs to KEVM, and there's also an hevm backend in the works, which will be faster in many cases but slightly less general.
To go a little deeper into this point about the modularity of proving tools and verification setups, here is a more in-depth example, where we are expressing an invariant in the constructor of the token. This also touches on a point that I'll get to a little later: here we have this `sum` construct, which is just not native to Solidity, nor even something that could generate any reasonable bytecode, but which is very useful for formulating properties about what we expect our contracts to do. One point I'm trying to make here is that I think this language is pretty similar to how you might express what you want a token to do in a setting like writing an ERC. When I recently wrote an ERC for permit, I wrote the description and specification of how this new function and token extension is supposed to behave in a language very similar to this, because I think it really allows people to make their own implementations, with freedom in how they decide to optimize their particular behavior, while still being completely unambiguous in terms of what the function should do. As a result, if you write an ERC in this language, not only will it be clear for people to read and implement, it can also be used to test and formally verify that the ERC is actually implemented in the correct way. So, here's a little more about how it works. Is there a question, or just a sign? I'll keep going. I'm not going to go into too much detail here, and will mostly repeat what I said before: the design decisions of Act are made in such a way that you should really be able to think about all of the edge cases. There was actually a question. Should I go to the Gitter chat, or did someone raise their hand here? Who was it? It was me. So, what are the limitations of the language?
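The `sum` construct and the constructor invariant can be modeled in Python like this (a sketch of the idea only; the class and method names are hypothetical):

```python
class Token:
    """Toy token model for the invariant sum(balanceOf) == totalSupply."""

    def __init__(self, supply, creator):
        self.total_supply = supply
        # The constructor establishes the invariant: all supply in one account.
        self.balances = {creator: supply}

    def transfer(self, src, dst, value):
        assert self.balances.get(src, 0) >= value
        self.balances[src] = self.balances.get(src, 0) - value
        self.balances[dst] = self.balances.get(dst, 0) + value

    def invariant(self):
        # The Act-style `sum` over the whole mapping: not something you
        # could compute in reasonable bytecode, but trivial to state.
        return sum(self.balances.values()) == self.total_supply
```

The point is that `sum` quantifies over every key of the mapping at once, which is natural in a spec but would require an unbounded loop on-chain.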
Can I, for example, describe a property over a set of functions, like something that should hold after calling f1 and f2, or something like that? Yeah, so the syntax that you saw here for invariants is actually a special case of an invariant that holds over all functions, but there's also a syntax where you can state an invariant of specific functions. So yeah, it's a good point. Thanks. Right, so here's a more concrete way of expressing what I mean by this modular approach to verification. Take an example smart contract. This is a contract that I'll only express in terms of Act syntax, but you can probably imagine the Solidity implementation; it should be fairly straightforward to see. We have three variables that it deals with, x, y, z, and an invariant that we expect to hold over them. We have two functions, each of which simply updates two of our variables by multiplying them with a scalar: either multiplying x and z using a function f, or updating y and z using a function g. This generates a Coq theory which is beyond, and agnostic about, the blockchain setting that this smart contract operates in, so it captures the essence of the contract. It gives you all of the relevant definitions over which you can do the proof of safety that you have expressed in the Act syntax. All of the boilerplate that comes with generating the definitions and theorems of the smart contract involved is generated automatically, and you simply insert the proof. I skipped ahead a little bit; let's see if there was something we missed. So, I guess I showed it by example rather than explaining the details. What I meant to say is essentially that usually when you're doing formal verification, it goes as follows.
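The x, y, z example can be modeled in Python as well. Note an assumption here: the talk does not state the invariant explicitly, but the updates described (f scales x and z, g scales y and z) suggest something of the shape `x * y == z`, which both functions preserve:

```python
class XYZ:
    """Toy model of the example contract; the invariant x*y == z is my
    assumption, chosen to match the updates described in the talk."""

    def __init__(self, x, y):
        self.x, self.y, self.z = x, y, x * y  # establishes x*y == z

    def f(self, s):
        # Scale x and z by the same scalar: (s*x)*y == s*z holds.
        self.x *= s
        self.z *= s

    def g(self, s):
        # Scale y and z by the same scalar: x*(s*y) == s*z holds.
        self.y *= s
        self.z *= s

    def invariant(self):
        return self.x * self.y == self.z
```

The Coq theory generated from such a spec would contain these definitions plus the theorem statements, leaving only the preservation proofs to be filled in by hand.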
In this sort of end-to-end formal verification process, you have some smart contract business logic, sometimes referred to as a high-level specification, of which you expect certain properties to hold. You can make proofs at a high level, which don't refer to any bytecode-specific or blockchain-specific parts, and, if you're using Act, you can also perform the bytecode-level proofs with the same specification. The refinement step, showing that the higher-level properties you're proving also hold of the lower-level bytecode you get from the compiler, is more of a property that either holds or doesn't hold of the backend tool (hopefully it holds, if we do everything right), rather than something you need to redo for each project. Right, so here's just a little list of some motivating properties, or motivating examples, for why you may sometimes require something more powerful than SMT solvers for proving correctness of your smart contracts, simply because certain properties that express the correctness of smart contracts are too difficult for SMT solvers. This is a good example, which you can actually prove using SMT theories, depending on which strategy you take, but doing something more complex would be out of scope for this talk. Okay, so, in conclusion, since we want more time for questions and discussion rather than presentation: in writing specifications for smart contracts using Act, you get language and prover agnosticism. You can compare different provers, but you can also compare different implementations of the same logic, or different languages implementing the same smart contract, so you can compare the bytecode generated by, say, Vyper against the bytecode generated by Solidity. Actually, just as a note on that: this was the subject of a presentation that I gave previously, where we had, similar to the previous slide, a specification for an ERC20.
The talk was more of a hackathon where people were competing to make the most gas-efficient ERC20. This is a safe way to run these gas-golfing competitions, because now, not only can you have these crazily optimized versions that are supposed to do something, with some tests to check them against, you can actually do formal verification against the bytecode that is generated. So it's a very good measure to have if you're really keen on making aggressive optimizations. Yes, so this is Act; please come to the Gitter channel to discuss it if you're interested. Now, just to keep this focused on Solidity, here are some pointers, or really asks, to solc and its developers; maybe some of this was discussed already during the debug session, which unfortunately I was unable to attend. The ask is essentially this, and it's related to bytecode verification. The situation is that when you're doing bytecode-level proofs, you often have a pretty performance-heavy task, because you're symbolically executing, and there are a lot of things that can happen. In order to scale this process, what one wants to do is reuse proofs about subsets of the bytecode as lemmas in other proofs. If you already know that a particular internal function can be summarized to do something, then you can reuse that summary whenever you get to that part of the bytecode. This is tricky to do with Solidity right now, because it requires a lot of manual labor to extract the relevant pieces of bytecode. The source map really helps, but what would help even more is to have PC values directly, specifically for internal functions and modifiers. I have a method of extracting them now which involves combining the AST and the source map, but it's very unreliable, and I think direct support for this would be beneficial for other tools as well.
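For context on what "combining the AST and the source map" involves, here is a sketch of decoding solc's compressed source mapping into per-instruction entries. This follows the documented format (semicolon-separated entries with fields `s:l:f:j` plus a later modifier-depth field, where omitted fields repeat the previous instruction's value); the initial defaults chosen here are an assumption of this sketch:

```python
def decode_source_map(srcmap):
    """Expand solc's compressed source map into one tuple per instruction.

    Each `;`-separated entry has up to five `:`-separated fields
    (start, length, file, jump, modifier depth); empty or omitted
    fields inherit the previous instruction's value.
    """
    entries = []
    prev = ["-1", "-1", "-1", "-", "0"]  # assumed initial defaults
    for item in srcmap.split(";"):
        cur = list(prev)
        for i, field in enumerate(item.split(":")):
            if field != "":
                cur[i] = field
        entries.append(tuple(cur))
        prev = cur
    return entries
```

Matching these decoded source ranges against AST node ranges is the unreliable part of the current workflow; a direct function-to-PC mapping from the compiler would make this decoding step unnecessary for the internal-function use case.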
Are there any other things you want to say? Not really anything more than what I said here. Oh, well, maybe one thing: having the variable locations at function heads and loops could also be valuable. Can you be a bit more specific on that, and also on the PC values; what exactly do you mean? So, essentially, what I have been doing before, when doing a proof about an internal function, is to go to the PC value of the jumpdest where the function begins, and usually, if it is a stack-based function, you have the arguments nicely on the stack, in reverse order to how they appear in the function declaration, and then you can do your proof like that. It becomes more complicated if you have a function which involves memory, and also if the function comes from a different contract that has been imported somehow, or inherited from; then getting the location of where these functions sit in the bytecode is tricky. Someone is raising their hand. So, would you like to finish up on that? So, you would like a mapping from, say, the AST IDs of all functions to their entry points as bytecode offsets? Yeah. Okay, cool. Thanks. Can you say who's next? Yes, just go ahead. Yeah, nice talk. Could you describe where Act typically starts and stops? Is it a DSL with just an output, or does it do more, like checking whether an Act variable is the same as a contract variable, things like that? So, right now it generates, from the specification language, a JSON output, which is a sort of intermediate representation that can be plugged into different backends. It will support some backends internally as well, but there's also the option of external ones.
So, to answer your question, I think it's about whether the source code needs to be available for these proofs to go through or not; at least that seems to be part of the question. The answer is: for some backends it will be necessary, and for some backends you will be required to do additional work if the source code is not present. A relatively new feature of solc is that you can extract the locations of storage variables, and this makes it very nice to write specifications like this when you want to do bytecode verification, because you can simply refer to storage variables by name, and if you have the source code, you know what those names map to in terms of EVM storage locations. If you don't have the source code of a contract but still want to make a bytecode-level proof, you would need to be more explicit about where these locations are. So if you don't have the source code, the JSON output is going to give the storage locations directly? Yes, there's a way to do that. Thanks.
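As an illustration of the storage-location feature mentioned above, here is a sketch that reads a name-to-slot mapping out of solc's storage layout output. The JSON shape used here (a `"storage"` array with `label`, `slot`, and `offset` fields) matches what solc emits, though the real output carries additional fields such as the type and AST id, and the example input below is hypothetical:

```python
import json

def storage_slots(layout_json):
    """Map storage variable names to (slot, byte offset) pairs,
    given solc's storage-layout JSON as a string."""
    layout = json.loads(layout_json)
    return {
        var["label"]: (int(var["slot"]), var["offset"])
        for var in layout["storage"]
    }
```

With such a mapping, a bytecode-level spec can refer to `balanceOf` by name and have the tooling resolve it to concrete EVM storage slots; without source code, those slots have to be supplied by hand.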