Hello, everyone. I'm here to present KEVM, a complete formal semantics of the Ethereum Virtual Machine. These are the authors on this paper; I am the first author, Everett Hildenbrandt. This was joint work between people at the University of Illinois, Runtime Verification, Cornell Tech, and East China Normal University. So, yeah, it's a long author list. A lot of people here helped out immensely with the tooling and with fixing issues in K as we found them, and a lot of people contributed directly as well. I figure we should acknowledge everyone who helped with the tooling, because that's as important as the direct contributions. Here's the overview of my presentation. First, I'm going to walk through a little bit of the K framework. Then I'm going to talk about the Ethereum Virtual Machine. Then I'll talk about current uses of KEVM and future work. Here are some repositories you might be interested in after the presentation: the first is KEVM itself; the second is K itself; then Runtime Verification's website; then our verified-smart-contracts repository, where we put all our open-source verified smart contract code. You can use that as a template for verifying your own smart contracts if you're interested. So first, the K framework. The vision is language independence: we want to separate the development of PL software into two orthogonal tasks. The first is building the programming language, so you have a programming-language expert build a rigorous definition of the language, incorporating all the knowledge they have of programming languages. Separate from that is the tooling around the language. It's not completely separate, because the tooling needs to know about the language itself, but consider that people might write a parser for C and a parser for Java, and there are a lot of similarities between them.
So you really want one parser that you can instantiate to all the languages, and the same with your interpreter, debugger, compiler, model checker, and verification engine. The idea of K is that we separate these into two separate tasks: we focus on building the tooling that works across all the different languages, and as people need them, they build the actual language definitions. This has worked pretty well so far. Here's a visual representation of the same concept: you have your formal language definition, and from that we derive the interpreter, the parser, test-case generation, and a deductive program verifier. These two — the compiler and test-case generation — are more in the vision phase right now, and the others actually exist. So there's already a lot of tooling around all the languages we have semantics for. Here are some examples of languages we have semantics for, and the primary use we have for each. The first is C, where one of the primary uses is detecting undefined behavior. We have a semantics of Java, where we can detect racy code; a semantics of EVM, where we can verify smart contracts; a semantics of LLVM for compiler validation; a semantics of JavaScript for finding disagreements between JavaScript engines; and a semantics of P4, for verification at the software-defined-networking data plane. I included that one to show that you don't necessarily have to stick to programming languages when defining semantics; arbitrary transition systems are also fine. And there are many other, less used semantics; you can go to our GitHub organization to see all the ones we maintain, and there are several that other organizations maintain as well. Okay, so let's get right into some of the details of K.
We build the concrete syntax of the programming language in this EBNF style. Here we're declaring a new type — or sort, as I'll call it — called Exp, where we say that all integers and all identifiers (which are essentially lowercase names) are expressions, and also that we can build expressions as multiplications of expressions or as additions of expressions. Notice that this right arrow here means that plus binds more loosely than multiplication, so we can parse expressions correctly. We also say that you can group expressions using parentheses, and this bracket annotation basically says that we don't actually care about the parentheses in the parsed term. The statements of our programming language are pretty simple: we have assignment, we have sequencing (a statement followed by another statement), and we have return. So now, just from this grammar, we can correctly parse a program like this: A, which is an identifier, is assigned 3 times 2; B is assigned 2 times A plus 5; return B. We're not actually doing anything with this program yet — we're just able to parse it with this part of the grammar. Notice also that I don't have to put in any extra parentheses, because I said that plus binds more loosely than times. You also have to tell K a little about the structure of the state of your programming language. To do that, we declare what's called a configuration. The configuration here has three cells: the K cell, which will contain the initial parsed program, as indicated by this $PGM; the environment cell, which contains bindings of all the variable names that show up in the program to their store locations; and the store, which contains the values those locations hold. Here are some example rules, which are transitions in this transition system. Here, for example, is variable lookup: at the front of the K cell, we have some identifier.
So this would happen if we were trying to execute this statement right here — well, not this statement, because there are no identifiers on the right-hand side, but this statement — when we try to evaluate this A. What happens is that A gets pulled out to the front of the K cell using some K machinery that I haven't gone over and won't go over in this presentation; you can see the tutorial for that. Once the identifier is at the front of the K cell, we look up in the environment the store location for that variable, and in the store we look up the value associated with that location. What I'm not showing you here is the negative rule — what if X is not in the environment, or SX is not in the store — but you can write a rule for that as well. Basically, when we're trying to look up a variable, we just replace it with its actual value: we grab the value from the store and substitute it right there. Similarly, for assignment, instead of a lookup we have some variable that we're assigning to, and we're assigning some integer to it specifically. We do the same thing: we go through the environment, look up X to find the store location, and at that store location we take the old value V and replace it with the new value I, which comes from the assignment. Notice there are two rewrite arrows here, whereas in lookup there was only one, so there was only one small change to the overall state; here the two rewrite arrows mean that both changes happen simultaneously. We consume the assignment statement, replacing it with nothing, at the same time as updating the value in the store. I'm going to walk through an example execution to really drive this home. Here's our initial program: A equals 3 times 2, B equals 2 times A plus 5, return B.
Here is our initial configuration, where I've pre-populated some things that would normally be set up by earlier steps: A points at store location 0, which initially holds 0, and B points at store location 1, which also initially holds 0. The whole program is in the K cell. The first thing that happens is that this semicolon gets turned into this squiggly arrow, which in K means "followed by". Basically, we're focusing in on this part of the computation, as opposed to the whole computation as before. Notice I've also simplified 3 times 2 to just 6, a step that would happen along the way. Now if you inspect this, you can see that X := I matches A := 6. So X is A; we look up A in the environment and it points at 0, so SX is 0, and we look up 0 in the store, which holds 0. So V is 0, but we're going to replace it with I, which is this 6. In the next state, this 0 has become a 6 — and indeed, there it is. You'll also see that the assignment itself goes away to nothing. Boom, it's gone. Okay, now we have the rest of the computation to deal with. I'm skipping a bunch of steps here, but basically it's going to evaluate this sub-expression first. To do that, it first focuses on the 2 and sees that the 2 is already evaluated; then it focuses on the A and sees that the A is not yet evaluated. And that's what's happened here: the A is at the front, followed by the context with a hole in it, marking that when you do evaluate that A, you should put the result back in that hole. But now we have an identifier at the front of the K cell, matching X. So once again we ask: what is the store location associated with that A? This is the store location.
Then we look in the store right there, and we get this 6. In the next step, as per this rule, the A is replaced with the 6 — and there's the 6. Then basically we plug it back into the hole here, which is something K does for you, and now we have B equals 2 times 6 plus 5, which simplifies to B equals 17. Then we apply the variable-assignment rule again to update this location in the store with 17. Okay, so that was an example execution in K, just to step through it; the main thing to think about is this "followed by" arrow, and I wanted to make sure people are at least familiar with K syntax. Now I'm going to switch topics completely and talk about the Ethereum Virtual Machine, to give people an idea of the challenges there. First I'll cover what a blockchain is — just the basics of what I need for this talk. A blockchain is an append-only ledger of transactions submitted by users. The transactions usually just transfer some sort of value: I might say I want to pay party X some money, and party X might want to pay party Y, and both those transactions get recorded in a sequence that eventually records that party X has this much money and party Y has that much money. So, for example, Bitcoin can do simple transfers of value and some simple logical things that are hard-coded into Bitcoin. Miners then select the next pool — the next block of transactions — to include on the blockchain, and that's how consensus is achieved. I'm not going to talk about consensus algorithms here; it's an interesting topic in its own right, but it's not necessary for what we're talking about.
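Stepping back to the K walkthrough for a moment, the environment/store discipline can be sketched as a tiny interpreter. This is a hedged Python illustration only — the tuple program encoding and the `run`/`eval_exp` names are invented for this sketch, and real K rules look nothing like this; but the two dictionaries play exactly the roles of the `<env>` and `<store>` cells, with lookup and assignment as the two rules discussed above.

```python
def run(stmts):
    env = {}    # <env> cell:   variable name -> store location
    store = {}  # <store> cell: store location -> value

    def eval_exp(e):
        if isinstance(e, int):
            return e                      # integers are already values
        if isinstance(e, str):
            return store[env[e]]          # lookup rule: X => STORE[ENV[X]]
        op, a, b = e
        return eval_exp(a) * eval_exp(b) if op == "*" else eval_exp(a) + eval_exp(b)

    for stmt in stmts:
        if stmt[0] == "return":
            return eval_exp(stmt[1])
        _, x, e = stmt                    # assignment rule: consume the statement
        if x not in env:                  # (the "negative rule": allocate a fresh
            env[x] = len(env)             # location if X is unbound)
        store[env[x]] = eval_exp(e)       # ...and update the store

# a := 3 * 2 ; b := 2 * a + 5 ; return b
prog = [(":=", "a", ("*", 3, 2)),
        (":=", "b", ("+", ("*", 2, "a"), 5)),
        ("return", "b")]
print(run(prog))  # 17
```

Here Python's call stack stands in for K's "followed by" continuation with its hole: the recursion into `eval_exp("a")` is the focusing step from the walkthrough, and returning the value is plugging the hole.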
What does Ethereum add on top of Bitcoin? (Most people are more familiar with Bitcoin, or have at least heard of it.) First of all, it's a different currency, so there's a different exchange rate and a different market. Accounts also have an associated storage and code, which they can modify and read in a transaction. So when an account initiates a transaction, it's able to say: I want to change my storage location three to have this value instead of that one. Other accounts cannot modify or read storage locations, other than inspecting them externally — everyone has to know all parts of the blockchain state, so it's all public. I can look at an account and say, oh, I can indeed see that its storage location three has the value four; but if I execute a transaction, unless I'm the owner of that account, my program can't directly read that value. So there's a kind of permissions scheme going on there. The same goes for the code: only the account itself can read its code to execute it, so you have to have that account's permissions to execute its code. Transactions can also have associated programs written in EVM bytecode — EVM being the Ethereum Virtual Machine — and if you're a cool kid, you'll call these programs smart contracts, or just contracts if you get sick of saying "smart" all the time. Because we now have these chunks of code associated with Ethereum transactions, the miners have to execute the transaction code to calculate the new world state afterwards. So I might send a transaction that says, in my storage, update this value to this new value, and the miner needs to check whether the update actually happened, and then it needs to keep that state around, because it might affect the results of a future transaction.
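The stateful, permissioned execution just described can be sketched like this. It's a hypothetical toy model with invented names (`Account`, `apply_tx`), not Ethereum's actual state layout: only a transaction from the owning account may write that account's storage, anyone can inspect the result, and the miner has to carry the updated state forward because later transactions depend on it.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
        self.storage = {}   # publicly visible, but writable only by the owner

def apply_tx(world, sender, writes):
    # A transaction may only modify the *sender's* own storage locations.
    for slot, value in writes.items():
        world[sender].storage[slot] = value

world = {"alice": Account(100), "bob": Account(50)}
apply_tx(world, "alice", {3: 4})
print(world["alice"].storage[3])   # anyone can inspect: 4
# The miner keeps the updated world state around, since this later
# transaction's effect depends on the result of the earlier one:
apply_tx(world, "alice", {3: world["alice"].storage[3] + 1})
print(world["alice"].storage[3])   # 5
```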
So there's this stateful component, where the miners have to hold on to all this extra state to decide whether future transactions are valid or not. That's a big difference between Ethereum and a system like Bitcoin: these programs have a stateful component to them when they're executing, okay? Here's an example EVM program. EVM is a very low-level, stack-based bytecode over 256-bit words. This program sums the numbers from 0 to 10, basically. I picked it because it already demonstrates a lot of the pains of EVM itself. First of all, just looking at it, I don't see that it sums the numbers from 0 to 10 in any way, shape, or form — but it does. Here I'm saying: at memory location 0, store the value 0; this is the running sum. At memory location 32, store the value 10. You might ask, why not store it at location 1 instead of 32? It's because EVM words are 256 bits wide, but the opcode MSTORE, and its dual opcode MLOAD, index memory by bytes instead of by words. This actually ends up being a major pain; I'll talk about it a little later. Here's the marker for the beginning of the loop head, and this is basically the condition of the loop: if the condition is false, we jump to the end of the program, and this is the marker for the end of the program. Then here we load the current sum and the current counter, add them together, and store the result back in the sum. Then we load the current counter, decrement it by 1, and store it again. And then here we jump back up to the top of the loop.
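The sum-to-10 program above can be mimicked with a toy stack machine. This is a hedged sketch, not real EVM: the opcode names match EVM, and memory is byte-addressed (MSTORE/MLOAD move full 32-byte words), but instructions here are list entries rather than encoded bytes, and the flat gas charges (gas is covered in a moment in the talk) are simplified stand-ins for the real schedule.

```python
GAS = {"PUSH": 3, "MSTORE": 3, "MLOAD": 3, "ADD": 3, "SUB": 3,
       "ISZERO": 3, "JUMPI": 10, "JUMP": 8, "JUMPDEST": 1, "STOP": 0}

def run(code, gas):
    stack, mem, pc = [], bytearray(64), 0
    while True:
        op = code[pc][0] if isinstance(code[pc], tuple) else code[pc]
        gas -= GAS[op]
        if gas < 0:
            raise Exception("out of gas")      # miners can't be looped forever
        if op == "PUSH":
            stack.append(code[pc][1])
        elif op == "MSTORE":                   # a 256-bit word fills 32 BYTES
            addr, val = stack.pop(), stack.pop()
            mem[addr:addr + 32] = val.to_bytes(32, "big")
        elif op == "MLOAD":
            addr = stack.pop()
            stack.append(int.from_bytes(mem[addr:addr + 32], "big"))
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "SUB":
            a, b = stack.pop(), stack.pop()
            stack.append(a - b)
        elif op == "ISZERO":
            stack.append(1 if stack.pop() == 0 else 0)
        elif op == "JUMPI":                    # jump target is a bare number,
            dest, cond = stack.pop(), stack.pop()
            pc = dest - 1 if cond else pc      # not a label
        elif op == "JUMP":
            pc = stack.pop() - 1
        elif op == "STOP":
            return int.from_bytes(mem[0:32], "big"), gas
        pc += 1

code = [("PUSH", 0), ("PUSH", 0), "MSTORE",    # mem[0]  := 0   (running sum)
        ("PUSH", 10), ("PUSH", 32), "MSTORE",  # mem[32] := 10  (counter)
        "JUMPDEST",                            # position 6: loop head
        ("PUSH", 32), "MLOAD", "ISZERO",       # done when the counter is 0:
        ("PUSH", 27), "JUMPI",                 # jump to the STOP at position 27
        ("PUSH", 32), "MLOAD",
        ("PUSH", 0), "MLOAD", "ADD",
        ("PUSH", 0), "MSTORE",                 # mem[0] := mem[0] + mem[32]
        ("PUSH", 1), ("PUSH", 32), "MLOAD", "SUB",
        ("PUSH", 32), "MSTORE",                # mem[32] := mem[32] - 1
        ("PUSH", 6), "JUMP",                   # back to the loop head
        "STOP"]

print(run(code, 1000))  # (55, 229): the final sum, and the gas left over
```

Note that the two MSTOREs land at offsets 0 and 32 precisely because each word occupies 32 bytes of the byte-addressed memory; and in real EVM the jump destinations are byte offsets into the encoded code, which is why the talk's program pushes 10 rather than an instruction index like the 6 used here.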
So notice here, we're not saying "jump" followed by some label like "loop head", as in a sensible IR with labeled jumps. Instead we push the value 10, and you somehow just have to know that the loop head is at opcode position 10, based on the width of all the opcodes before it. You might think at first that it's at opcode position 6, because there are six instructions before it — 1, 2, 3, 4, 5, 6 — but that's not the case, because these opcodes aren't all necessarily one byte wide. That's one of the nuisances of the EVM: you have to calculate these jump positions manually, or with tooling. It's really a nuisance. Then there's this other component of execution: if you submit a transaction with an infinite loop — you could just say, while true, add 1 — then the miners would just sit there trying to execute it and figure out the state update, because they need to know the state update to know whether future transactions are valid. So we have to have a way to prevent this; it's about preventing denial-of-service attacks. The solution is that each opcode costs some amount of gas to execute — ADD costs 3 gas, or something like that. Gas is convertible with Ether at an exchange rate decided upon at transaction time, basically. Any unspent gas is refunded back to the original payer of the transaction, and any spent gas goes to the miner, so it incentivizes the miners to pick up transactions, execute them, and actually include them in the blockchain. A couple of notes here, because you might think: okay, gas, sure, we'll just make a gas model and assign arbitrary values. But it's actually really important that the gas cost charges according to the compute resources actually used — on the physical silicon, how much energy, time, and memory went into doing the computation.
It's very important that gas actually charge for this; otherwise you open up attack vectors where you submit a computation that doesn't cost much gas, because you were clever about how you structured it, but ends up using a ton of memory resources — and that would be a way to DoS the miners, for instance. Another thing is that new hardware is always becoming available, so tuning gas costs is an ongoing challenge — something you have to be able to do continually. What happens in practice is that hard forks roll out (a hard fork just means changes to the network) which update the gas costs of various opcodes, as people find out they were under-priced or over-priced. So gas is something people tend to dismiss at first, but it's a very important ingredient of the whole executable-blockchain paradigm. Okay, inter-contract execution is another sticky point of the EVM that maybe could have been designed a little better. We want contracts to be able to call other contracts, because we want library code and things like that, but the way it happens in EVM allows for re-entrancy. Basically, I can call another contract, and that contract can call back into my contract, which might trigger a modification of my state; then it returns control flow to the contract I called, which returns control flow back to me. I might assume that my state wasn't updated while I was making the call into the other contract, when in fact the other contract triggered an update of my own state. This is called re-entrancy, and a lot of funds have been lost to re-entrancy attacks, where people crafted clever transactions that trigger a re-entrancy cycle and end up draining funds from contracts. The famous one is the DAO attack.
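The re-entrancy cycle just described can be sketched in a few lines. This is a toy Python model with invented names, not EVM call semantics: the "bank" makes its external call *before* updating its own books, and the attacker's receive hook calls straight back in while the stale balance is still on record — the same shape as the DAO attack.

```python
class Bank:
    def __init__(self):
        self.balances = {"attacker": 10}  # what the bank thinks it owes
        self.vault = 100                  # funds the bank actually holds

    def withdraw(self, caller):
        amount = self.balances[caller.name]
        if amount > 0 and self.vault >= amount:
            caller.receive(amount)            # external call FIRST...
            self.vault -= amount
            self.balances[caller.name] = 0    # ...bookkeeping only AFTER

class Attacker:
    name = "attacker"
    def __init__(self, bank):
        self.bank = bank
        self.depth = 0

    def receive(self, amount):
        # Control is back in our hands before the bank zeroed our balance,
        # so the check in withdraw still passes: re-enter a few more times.
        if self.depth < 5:
            self.depth += 1
            self.bank.withdraw(self)

bank = Bank()
bank.withdraw(Attacker(bank))
print(bank.vault)  # 40 -- an honest single withdrawal would have left 90
```

Swapping the two halves of `withdraw` — update the books, then make the external call — is the classic "checks-effects-interactions" fix: the re-entrant call then sees a zero balance and stops.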
Basically, when you call another contract, the payload you give it is just a raw string of bytes. I'm going to call it "call data" here, because that's what it's called in KEVM. External to the EVM, everyone has agreed upon what's called the Ethereum ABI — application binary interface, I want to say. The Ethereum ABI specifies calling conventions — how to interpret the call data correctly — as well as some high-level types and their mapping to EVM words. So it's a document specifying how contracts will interpret the call data they're called with, because that isn't specified at the level of EVM itself; there are pros and cons to specifying it above rather than in the EVM. So, to emphasize the nuisances again. First is unstructured control flow: you can dynamically calculate jump destinations, which is a nightmare for static analysis engines that want to infer things about a program from its control flow. Second is the 256-bit words. This one is hard to judge, because it is useful for crypto libraries, which often return things in units of 256 bits, but it's really hard to map directly onto hardware, for instance — so it's hard to decide whether that's a necessary nuisance or not.
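The ABI convention just described can be illustrated as follows. This is a simplified sketch: per the ABI, call data is a 4-byte function selector followed by each static argument padded to a 32-byte word. Real selectors are the first four bytes of the keccak-256 hash of the function signature; keccak isn't in Python's standard library, so the well-known selector for `transfer(address,uint256)` is hard-coded here, and only simple numeric arguments are handled.

```python
# First 4 bytes of keccak256("transfer(address,uint256)"), hard-coded:
TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")

def encode_call(selector, *args):
    # Each static argument occupies one 256-bit EVM word, big-endian.
    return selector + b"".join(a.to_bytes(32, "big") for a in args)

calldata = encode_call(TRANSFER_SELECTOR,
                       0xCAFE,   # the recipient address, here just a number
                       1000)     # the uint256 token amount
print(calldata.hex())  # 4-byte selector followed by two 32-byte words
```

Nothing at the EVM level forces the callee to parse these 68 bytes this way — the contract's code just agrees to, which is exactly the fragility discussed next.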
What I think is definitely an unnecessary nuisance is the 8-bit word array used for local memory that I mentioned before. Remember, in this program the first thing we had to store was at location zero and the second at location 32, because when we store a 256-bit word it takes up 32 bytes' worth of storage — that's how MSTORE stores it. So if we're doing symbolic reasoning and we hit an MSTORE, we basically have to take the symbolic expression, split it into 32 symbolic sub-expressions, and store each of them; and then you could even shift over by some non-32-byte amount, read back in, and end up with a huge, messy symbolic expression describing your current state. So this 8-bit word array thing can really kill symbolic reasoning. Next, no built-in calling conventions: the ABI is declared external to EVM, like I said. This could be good or bad. It's bad because everyone has to agree externally on what the ABI is, and it's kind of fragile — people's contracts might implement the ABI incorrectly, and that could open you up to security risks. But it might be good because you get some notion of upgradeability: you can say "I am ABI 1.0" or "ABI 1.1". So I'm not really sure; maybe some of it could be done at the level of the EVM itself instead of all of it being done externally. And then the last one here is this eval capability: there's a sequence of opcodes you can call in EVM which can load arbitrary data as a program and then execute it, and this, once again, is a nightmare for both static analysis and symbolic reasoning, because you basically have to assume anything could happen. So these are some nuisances of the EVM that it would be nice to evolve away from, and all of these nuisances are related to security. Some of them
are related to security issues indirectly, like this one, because it makes doing various analyses on the program difficult — and you need formal analyses on programs to address security issues. Others are more directly security issues, because you can just execute arbitrary code. But most of it has to do with actually enabling analyses at the bytecode level. Okay, so now I'm going to talk about KEVM and what it's used for. Once again, KEVM is the K specification of the EVM: we've specified all the gory details of EVM in K. You can go to our repository, listed at the beginning of the talk, if you're interested in seeing the actual definition itself; I figured it probably wasn't worth the time to explain the definition piecemeal here. This table is pulled from the paper: we pass the VM tests and the general state tests of the official Ethereum test suite, which is on GitHub, and we're about an order of magnitude slower than cpp-ethereum, a native implementation of the Ethereum client. These numbers are somewhat old, so they may have improved by now, because several performance improvements have landed since; but these are the numbers at the time the paper was written. In case you're interested, you can look at our paper to see more about how these numbers were collected. There are plenty of analysis engines out there for EVM bytecode that don't even bother executing the tests, but it was important to us to have this correctness benchmark that we pass: we execute the tests and make sure our specification is correct. One thing we have is some light anti-pattern detection. EVM has this designated INVALID opcode; initially it was just thrown in because they figured maybe an invalid opcode would be needed for something or other. Basically, if it's ever encountered, execution halts and all the gas is given to the miner — you don't get your gas back if INVALID is hit. But people have
been using it in the high-level languages (which I abbreviate as HLL here) to encode assert-false sorts of statements. In your Solidity code — Solidity is a programming language that compiles down to EVM — you can put an assert-false inside some conditional, which tells the Solidity compiler: this should never happen, and if it does happen, something has gone horribly wrong, so throw INVALID and get the heck out. Throwing INVALID also reverts any state updates that happened during that execution. So we can just use K — because it has a symbolic execution engine — to symbolically execute the program and see whether it can end in a state where INVALID was thrown. This is a really simple technique, and it works decently. Other tools have done similar things with the INVALID opcode; for example, the Solidity compiler will try a similar analysis by taking its own internal semantics of Solidity, building an SMT-LIB query, and asking Z3: is it feasible that I could reach this INVALID opcode? So it's a nice technique that people are leveraging. More of what you can do with K, as opposed to other formalizations, is full program verification. I'm not going to go into great detail here, but if you're interested you can look up this paper, Ștefănescu et al. 2014, for how K's verification engine works; there are also some papers on matching logic you can look up, which is the more recent formalism for this work. Runtime Verification is one of the companies largely driving the efforts behind K, and they offer audits as a service, basically. The typical process is: first, you start off specifying the high-level business logic in English. Oftentimes developers haven't even done this step — they just start hacking away at code. So we'll write down a high-level English description of what we think
their business logic is, we go to them, and they'll say, no, this isn't quite right; we refine it and go back to them until they say, yeah, it looks good — and then, boom, they have a nice textual description of what their contract is supposed to do. Then we take that high-level business logic and refine it to an actual mathematical definition (this slide probably shouldn't say "of logic"; it's just a mathematical definition of their contract), and once again we go to the customer and ask: is this correct, does this look good? They say, no, there are some bugs right here, and we say, okay, we'll go back and edit both the English version and the mathematical one to fix that. Eventually, once we've gotten through that process, we refine it to a set of K reachability claims, and once again we take the reachability claims to the customer and ask: does this look like what you're actually trying to prove about your contract? Once they confirm that indeed it is, we just start fixing bugs in the contract and in the specification — because there can be little bugs in the specification too — until the K prover is satisfied, and then we send all of that to the customer. So that's how the iterative verification-as-a-service process goes, which is a lot more hands-on than some of the other tools being offered out there, but because of that it catches a lot more. There are also independent groups using KEVM to verify their smart contract stack — DappHub, for example; we help them out, they submit bug fixes to K, and we help them with their work when we can. Slowly, more people are using the K prover independently. Okay. The reachability logic prover takes an operational semantics as input — in our case, a K definition specifies that operational semantics — and no axiomatic semantics is required. Basically, it just needs the operational semantics, and it effectively
turns it into an axiomatic semantics. Reachability logic is a generalization of Hoare logic: if you have a Hoare triple here — pre, code, post — you can turn it directly into a reachability claim, where the code has its variables (or some subset of them) turned into logical variables, and we use the matching-logic conjunction here — matching logic's "and" — to limit to instances of this code that satisfy the precondition. Then we ask: does every instance of this code satisfying the precondition reach an instance that satisfies the postcondition? I've written it out in English here: any instance of the code which satisfies pre either does not terminate — we say everything is true of non-terminating paths — or it reaches an instance of the empty program epsilon which satisfies the postcondition. So here I'm claiming it's a generalization of Hoare logic. One way it's a generalization is that we don't need epsilon to be the empty program; in reachability logic we can just as easily make epsilon an intermediate program state. It's also not really fair to say it's a generalization of Hoare logic, because reachability logic is a logic in its own right: it has its own inference system, independent of the operational semantics you plug in, orthogonal to it. Hoare logic is more of a design pattern — you take some programming language and you build the Hoare logic for that language. So it's not really fair to call it a generalization so much as, kind of, the right way to do it. Reachability claims are fed to the K prover as just normal K rules, because this arrow is basically the same as the rewrite arrow; but instead of being interpreted as axioms, the way the operational semantics is interpreted, they're interpreted as proof goals. So it will use the axioms — the operational semantics — as well as the inference system of reachability logic, which has, I
think, seven rules, to try to prove these formulas. Then a couple of things I want to mention relating to security: functional correctness is directly specifiable as a set of reachability claims, and there's an adage — at least I've seen it a lot in the Linux and open-source communities — that security bugs are often just normal bugs in your program. So if you specify enough functional-correctness properties, you can recover a lot of high-level security properties. For smart contract verification, because every program is terminating, pretty much every property we could want can be specified as a reachability property — we don't even have to think about the non-terminating cases, where everything is vacuously true. Yeah, so that's most of what I'm going to say about the reachability logic prover. Here is the ERC-20 case study; it's in the verified-smart-contracts repo — go back to the beginning of the presentation and you'll see it there. ERC-20 is kind of the "hello world" of Ethereum smart contracts. It's a very simple contract: in your storage you have a mapping from addresses to values, where the addresses are interpreted as people's accounts and the values as how much of your token they have. It's a pretty versatile sort of contract — you can make little sub-economies with your own rules in them, because you can dictate how this key-value store is allowed to be updated by the people who own the addresses; you can write whatever logic you want for that. But there's a core set of methods that pretty much everyone agrees an ERC-20 has to implement, including the ability to transfer some value from one account to another, the ability to buy some tokens using Ether, things like this. So I'm putting up some uses here. You can codify ownership distribution of a company — it's like tokenizing equity. This would be like, you
know if you think about an IPO or initial public offering you can have an ICO and initial coin offering which have caught a lot of press recently but it doesn't have to you don't have to actually do an ICO you can just use it to kind of codify you know who owns how much of a company similar sort of things with like pink slips and deeds for cars and houses you can codify a Ponzi scheme directly and you know tell people to buy into it and tell them it's a Ponzi scheme and you'll watch people buy into it but yeah so I mean I put that there as tongue in cheek just because you know of course you can codify a Ponzi scheme because it's just a logical set of rules about you know how the contract should operate basically but either way it's kind of all built on this simple ERC 20 concept so there were some problems we ran at ERC 20 was the first example we tackled with verification there were some problems we ran into regarding those nuisances of the EVM that I was talking about earlier so this is how we tackled those problems first we had to define a bunch of state abstractions directly in K itself it was nice that we could do it directly in K what was not nice was how you know long it took to develop them but now we have them and now anyone can use them so two examples of that that are particularly useful is this nth byte of abstraction and basically this lets us not actually chop up 256 bit words when we store it into memory we just kind of hold the 256 bit word itself so we don't have to chop it up into a bunch of different symbolic expressions but then if an individual byte is accessed from that 256 bit words we kind of return an expression that says you know this is the byte that you mean but otherwise we just hold it in memory so we we can kind of optimize for the very common case where you're not actually offsetting and reading an offset of the thing but then still do the correct thing in the case when you do the offset so this reduces kind of the size of the 
symbolic expressions that we use and then we also have this ABI call data abstraction which lets you specify instead of saying you know I want to call into this I want to do a verification verify a contract where I'm calling into it with this call data which is just a raw byte string and you know you have to kind of craft it by hand which would be really annoying instead you can use directly the function name and the signature and directly the typed arguments and it will kind of desugar that into exactly the bytes that you need to make that call properly using our knowledge of the ABI encoding and then another thing that we had to kind of lean on was we modularized how you can do specifications in K this took a while to develop but now it seems fairly robust but basically we can reuse the same specs for different implementations and this is kind of interesting because you know we then were able to go to we started out with an ERC-20 implemented in Solidity which is a high level programming language and you know spent a bunch of time verifying it and then we were like okay let's try Viper1 which is a different high level language and it generated different EVM byte code and it turns out they had different behaviors go figure because they have different compilers and they have different people writing the code so that was kind of interesting so we now have five different implementations of the ERC-20 verified all with their own different behaviors we just kind of stopped at five because we were like okay it's time to move on to other contracts that are more useful but you can kind of look at that and basically they all can reuse parts of the specification that are the same as Delta for the parts that aren't yeah so these are a lot of the kind of efforts we put in I mean there's more ongoing efforts into making using the verification engine easier but you know at the end of the day if you're doing full formal verification of your contracts it's going to be difficult 
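To make these ideas a bit more concrete, here is a small illustrative sketch in Python — hypothetical helper names, not actual KEVM or K code — modeling three things discussed above: the ERC-20 balance mapping with a transfer operation, the nth-byte view of a 256-bit word, and ABI-style call-data construction from a selector and typed arguments.

```python
def transfer(balances: dict, sender: str, recipient: str, value: int) -> bool:
    """Core ERC-20 move: debit sender, credit recipient, reject overdrafts."""
    if value < 0 or balances.get(sender, 0) < value:
        return False
    balances[sender] -= value
    balances[recipient] = balances.get(recipient, 0) + value
    return True

def nth_byte(word: int, n: int) -> int:
    """Byte n (0 = most significant) of a 256-bit big-endian word.

    Mirrors the idea of keeping the word whole in memory and only computing
    a byte-extraction expression when a single byte is actually read.
    """
    assert 0 <= word < 2**256 and 0 <= n < 32
    return (word >> (8 * (31 - n))) & 0xFF

def encode_call(selector: bytes, *args: int) -> bytes:
    """ABI-style call data: a 4-byte function selector followed by each
    argument left-padded to a 32-byte word (enough for uint256/address)."""
    assert len(selector) == 4
    return selector + b"".join(a.to_bytes(32, "big") for a in args)

# Example: transfer(address,uint256). The real selector is the first four
# bytes of keccak256 of the signature; it is hardcoded here for illustration.
TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")

ledger = {"alice": 100}
assert transfer(ledger, "alice", "bob", 30)
assert ledger == {"alice": 70, "bob": 30}

word = int.from_bytes(bytes(range(32)), "big")
assert nth_byte(word, 5) == 5

calldata = encode_call(TRANSFER_SELECTOR, 0xB0B, 30)
assert len(calldata) == 4 + 2 * 32
```

The real abstractions in KEVM operate on symbolic expressions rather than concrete integers, but the shapes are the same: the word stays intact until a byte is demanded, and the call data is derived from a signature plus typed arguments rather than crafted by hand.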
And that concludes my talk, so thanks for listening. This slide is the high-level conclusion, and here are some sponsors who have helped us with this project, including IOHK, who gave a very generous gift to both the university and the company to advance this work.