So, I'm here to talk about what we're calling the QuantStamp Assurance Protocol. I guess I should introduce myself too. Yes, I'm Steven Stewart, CTO and co-founder of QuantStamp. Prior to that, I was a computer scientist, and I guess I'm a PhD dropout. I don't like saying it, but it seems to be the cool thing to say. I noticed that Remco from 0x, in one profile of him, is described as a PhD dropout. I prefer to think I'm going to go back and finish it, but QuantStamp got too busy for me, so last April I became a PhD dropout. I was studying at the University of Waterloo, working with Professor Derek Rayside, using graphics processors to accelerate the operations of something called a SAT solver. It's a tool with many applications, but one application in software engineering is the formal verification of software. Of course, the word "formal" implies mathematics, mathematical rigor, and "verification" means checking that certain properties hold and being able to prove that they do. Richard Ma and I created QuantStamp, I can't keep track of the time, how long was it? Back in 2017. We had in common an interest in the idea of securing code. Richard's background was in high-frequency trading, and he was quite familiar with all the testing and rigor that goes into that, because if your strategy has a bug, it can have a catastrophic effect. So the two of us both had this interest, and he had also contributed to the DAO. We have these slides that we always show, but I feel like they're getting dated now. One second here. Everyone remembers the DAO. By the way, these slides are Jan Gorzny's; I've crossed his name out. So I will probably stumble a little bit through them because I don't normally present them. Jan is a blockchain researcher, and he would normally be doing this. So there's lots of motivation for caring about security; I think most people already know about that.
And given the situation where, once you deploy your code, it's immutable, then as noted here, when you're thinking about contract security, you usually think about pre-deployment options: your conventional kinds of testing, auditing (at QuantStamp we actually do code audits as a service), formal verification, bug bounties. I always have this sort of conflict with the word "security." Sometimes we're talking about bugs: there can be a bug in your code, but it's not necessarily a security issue. We use these words very loosely at times, even though I think terms like security do require precise definitions. I've taken a look at some of the literature, and I see a lot of different definitions of computer security. Anyhow, my background is more in formal verification and parallel computing, and it took me some time to get used to the branding of QuantStamp as a security company. Are we doing security or are we doing software engineering? Turns out we're doing security, so my co-founder was right about that. And then there's this notion of continuous security, which begs the question: you deploy your code, so how can it be protected against new attacks? Maybe you've already done everything you could possibly do, and your code is absolutely perfect at that moment in time. Then something changes in the future, I don't know, the EVM changes, and someone discovers that you can now do a new reentrancy exploit. Things like that can happen. So QuantStamp is interested in this idea of continuous security, which internally we call monitoring: observing a smart contract while it's live and creating alerts if there's unusual behavior. There are some examples of what could be done there, but I'll skip through that because that's not really what I'm looking to talk about. Okay, so security assurance. Assurance, it starts with an A, like quality assurance.
This is the notion of somehow being able to assure, let's say, the owner or stakeholder of a smart contract that their code is secure and won't get hacked. So we have these two actors in the Assurance Protocol: one is the stakeholder and the other is the security expert. The stakeholder wants their code to be correct, because if something bad happens, they might lose funds, digital assets. The security expert wants to get paid for her skills, her expertise, and also has some funds available. Jan does a nice job of creating these visual slides. As you can see here, the stakeholder, just as it says, has some amount of money that he or she is willing to pay in order to be compensated if the contract is exploited. And then we have the security expert, who wants to audit the code and get paid for it. So this is the scenario we begin with. The security expert is so confident in his auditing skills that he says: you know what, I'm willing to stake some of my own funds, put my reputation and money on the line, and stake a certain amount to claim that the contract is correct. Why would I do that? And why would somebody else join in? This slide shows that it's not just the security expert; anybody in the world could say, well, ten security experts have gone in and said this contract looks good, maybe I'll go in too. So what is going on? We'll see momentarily, but I'm talking about the staking of funds and what would incentivize you to do that: the stakeholder pays for this. It's kind of like, what would you call it, a premium kind of payout. The stakeholder pays the stakers periodically, provided that the contract is not exploited, because if it's exploited, then there's something the security expert didn't find, and the stakers may lose their funds. So that's exactly what it says here. I'm a stakeholder and I say, okay, my funds, or somehow my contract, have been exploited.
More generically, whether it's funds or something else you might imagine, there is some policy, some set of conditions, that no longer holds, and as a result I want to claim these funds. As it says, from the security expert's perspective, these funds are lost if the contract is exploited. Okay, let me clarify. The security expert and someone else are staking funds, and they want to do that because they're getting paid by the stakeholder. But as soon as something bad happens to the stakeholder's contract, the stakeholder claims what the security expert and the others have staked. I hope that was clear. This timeline simply shows the sequence of events that would happen here: the contract is deployed, and potentially, as it says down here, a QSP network report is published. I just realized I didn't really tell you much about QuantStamp. I don't know how much you know, but we're building out the QSP network, which is a way of sourcing compute power to do formal verification, or automated code audits. So one potential step here is that an automated audit is done, which sort of ticks off that checkbox, and then you see the stakers deposit funds, and there are a bunch of details there. Actually, it gets quite complicated; we have extremely detailed diagrams on our internal wiki, too much to go into here. And I don't know what that slide is for. So the obvious question is: well, what is an exploit? This question is being posed because we want to know when the stakeholder can claim to have suffered damages and can claim what was staked. As it says here, it's up to the stakeholder to define the bad behavior; it's up to the stakeholder to watch her own back door. So basically the stakeholder, at the very beginning, defines the conditions. I believe we call this the policy.
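The premium-versus-claim mechanism just described can be sketched as a toy model. This is only an illustration of the idea, not the actual protocol: the function, the proportional premium split, and all the numbers are my assumptions.

```python
# Toy sketch of one settlement period in an assurance pool (illustrative
# only; the real protocol's rules and parameters differ). While no exploit
# occurs, the stakeholder's premium is split among stakers; on an exploit,
# the stakeholder claims the entire staked pool.

def settle_period(stakes, premium_total, exploited):
    """Return payouts for one period as {party: amount}."""
    total_staked = sum(stakes.values())
    if exploited:
        # Stakers lose their stakes; the stakeholder claims the pool.
        return {"stakeholder": total_staked}
    # Premium split in proportion to each staker's share of the pool
    # (an assumed rule, chosen here for simplicity).
    return {who: premium_total * amt / total_staked
            for who, amt in stakes.items()}

stakes = {"expert": 10, "other": 30}
print(settle_period(stakes, premium_total=4, exploited=False))
# {'expert': 1.0, 'other': 3.0}
print(settle_period(stakes, premium_total=4, exploited=True))
# {'stakeholder': 40}
```

Under this simplified rule, rewards scale with the size of the stake, and so do losses when the policy is violated.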
And basically, here's this function, and this is obviously a trivial example, but in the body of it, it basically says: if the balance goes to zero, that indicates the contract has been hacked. You can probably imagine scenarios where a balance going to zero doesn't mean it was hacked; maybe there was a deliberate reason it went to zero. This is just a toy example. Our team is actually working on creating a set of predefined policies that people can use, because you can probably imagine it can get fairly complicated. I was pushing for this idea. Alternatively, you could consult an oracle to verify that some event occurred; that's been identified as another possibility. I like the idea of being able to define the policy as a contract, as it were, defined upfront. Then the security experts, who are supposed to be experts at reviewing code, can read the policy, and if they're comfortable with it, they're willing to stake their own funds. You can also imagine: maybe I'm really clever and I write this obfuscated policy, make it really complicated, and actually I have my own back door and I'm trying to get the funds that are staked. There are lots of scenarios; we've talked through dozens of different scenarios that could happen. But I've always said, if you agree to the policy, you agree to the policy. Within the company I say "buyer beware"; I kind of pushed for this. We'll see how it works out. I guess this slide is just emphasizing the stakeholder with a smart contract, and the policy, and contract addresses and terms. I apologize, I don't quite know what this slide is, "contract addresses and terms"; sorry about that, I don't recall what Jan had in mind there. This is just another high-level representation of the same thing, a system perspective. We have a lot of these diagrams.
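The toy policy from the slide, "balance reaching zero counts as an exploit," amounts to a simple predicate over contract state. In the real protocol this would be a contract on-chain; the Python form below is only an illustration, and the function name and state shape are assumptions.

```python
# Toy policy predicate in the spirit of the slide's example: the
# stakeholder declares that a zero balance signals the contract was
# drained. As noted in the talk, a real policy would need to rule out
# legitimate reasons for the balance going to zero.

def policy_violated(contract_state):
    """Return True if the toy 'balance went to zero' policy is violated."""
    return contract_state["balance"] == 0

assert not policy_violated({"balance": 42})   # healthy contract
assert policy_violated({"balance": 0})        # treated as an exploit
```

The point is that the policy is an explicit, inspectable artifact: a staker can read this predicate and decide whether they are comfortable staking against it.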
You'll see that this slide is titled Staking, though arguably it should just be titled Assurance. This idea of security assurance arguably enables and helps to scale verification, and we could discuss that, by providing a marketplace for auditors, code experts, and others to stake collateral as a claim on a contract's security, and by avoiding fraudulent reports. Since checks are automatic, rational actors won't stake on insecure contracts, or on policies which can be gamed by stakeholders. I don't remember exactly what Jan was referring to here with reports, but I think this is about what I said earlier: you can kind of game this, you can even imagine a stakeholder gaming it. But a rational actor who studies the code and understands it is not going to stake collateral if they know there's some kind of backdoor or vulnerability. So, to summarize, looping back to the beginning: post-deployment, we can do some kind of continuous monitoring of a smart contract. I compare this to a few years ago, when I worked at a startup company called MapD, where we were using GPUs, in-memory GPU databases, to do really fast millisecond querying of gigabytes of data. We were talking to big fintech companies (I'm not sure if I can actually name them; it's been a few years, so I probably could) who were interested in fraud detection. It's kind of similar: basically looking at certain transactions that trigger alarms, certain patterns of behavior that would trigger a security incident. And then the staking, which again refers to the security assurance idea, is just another post-deployment approach to, not quite securing, but hopefully offering assurance, increasing confidence in the security of that contract.
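The fraud-detection analogy above, watching transactions for patterns that trigger alarms, can be sketched as a rule-based monitor. This is a hypothetical simplification; the rule names, state shape, and thresholds are my assumptions, not QuantStamp's actual monitoring system.

```python
# Hypothetical sketch of post-deployment monitoring: compare a contract's
# state before and after each block and report which alert rules fired.
# The rules shown (a sudden balance drop, an unexpected owner change)
# are illustrative examples of "unusual behavior."

def check_alerts(prev_state, curr_state, rules):
    """Return the names of all alert rules triggered by a state change."""
    return [name for name, rule in rules.items() if rule(prev_state, curr_state)]

rules = {
    # Fires if more than half the balance disappears in one step.
    "balance_drop": lambda p, c: c["balance"] < 0.5 * p["balance"],
    # Fires if ownership of the contract changes.
    "owner_changed": lambda p, c: c["owner"] != p["owner"],
}

prev = {"balance": 100, "owner": "0xabc"}
curr = {"balance": 10, "owner": "0xabc"}
print(check_alerts(prev, curr, rules))  # ['balance_drop']
```

A real monitor would watch on-chain events rather than a dictionary, but the shape is the same: declarative rules evaluated continuously against live state.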
And that's the presentation. Yes? I was wondering, if this existed, and I saw a very smart security expert put down, say, one ether on a contract, could I, a non-security-expert, then put down ten ether and get paid ten times more than the actual expert? That's a good question. I don't think so, the way we've designed this. I don't have all the details of the protocol, but I'm pretty sure it's designed to offer greater benefit to actual security experts and to incentivize them to participate. There's what's called a TCR, a token-curated registry, that will maintain the list of, I guess, accredited security experts. I don't have the exact detailed answer to your question, but I can find it. Is there a minimum staking period, or can the security expert withdraw the stake at any time? Sorry, what was that again? Can the security expert withdraw his stake at any time, or is there a minimum staking period? So, about withdrawal: there are minimum periods. Actually, we have been developing a user interface, I almost forgot, I still have these to show you. You'll see things in here; I realize the text is small for you, but this is defining what we call a pool. The pool, of course, refers to the pool of staked funds for assuring the contract. And then there are all these different details that define how the payment works, the payment periods, all expressed in terms of numbers of blocks. So there's a minimum staking time; yeah, there's quite a bit there. This is supposed to be the stakeholder's view, and then there's the staker's view as well. This is a prototype at the moment. So if you're a staker, you can stake your funds, you can withdraw.
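A pool of the kind shown in the UI can be pictured roughly as a record of timing parameters expressed in block counts. The field names and values below are assumptions for illustration, not the actual protocol parameters.

```python
# Illustrative pool definition with timing parameters in block counts,
# as described for the prototype UI. All names and numbers are assumed.
from dataclasses import dataclass

@dataclass
class AssurancePool:
    contract_addr: str
    premium_per_period: int   # paid by the stakeholder each period
    period_blocks: int        # length of one payment period, in blocks
    min_stake_blocks: int     # minimum time a stake must remain locked

    def can_withdraw(self, staked_at_block, current_block):
        """A staker may withdraw only after the minimum staking period."""
        return current_block - staked_at_block >= self.min_stake_blocks

pool = AssurancePool("0xdeadbeef", premium_per_period=5,
                     period_blocks=1000, min_stake_blocks=5000)
print(pool.can_withdraw(staked_at_block=0, current_block=4000))  # False
print(pool.can_withdraw(staked_at_block=0, current_block=6000))  # True
```

This also explains the faded withdraw button in the prototype: before the minimum staking period has elapsed, the withdrawal check simply fails.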
And you see in this, I think, the withdraw control, but it looks to me like it's faded, because at that moment they're not eligible to withdraw. So there are a lot of details that have gone into this. I'm afraid I don't have them all committed to memory, but we do handle that idea. And one more question, if you allow: what do you expect the ratio between stake and reward to be? Stake and what? Stake and reward. You get a reward when you stake, right? So what do you think the ratio will be; when do you get your money back, basically? That's a great question too, and I don't know the answer. What we're doing is running simulations. Sebastian, one of our senior researchers, is actually in Munich; I was hoping he could come, because he knows this inside out. He's currently building an agent-based reinforcement learning simulation with an objective function to maximize the end balance for the stakers. We're developing that right now, and we're also doing a QuantStamp community-involved test trial of our current prototype. This is going to help us get more precise answers to your question. I hope that makes sense; we're still very much a work in progress. Do we have more questions? The way you define whether there was an exploit or not: if you try to encode this in a smart contract, it seems like it will be a very, very complex smart contract, and I'm not even sure it's possible to check every situation. Even for every situation you can think of, you will create a smart contract that verifies it, and it seems like this new smart contract, which verifies whether there was an exploit or not, also needs an audit. It could be even more complex. Who will audit it? Well, if I'm willing to stake my own funds as a security expert, for example, that's on me. Buyer beware, I tell our engineers.
Yeah, as a security expert, you're right. But as the owner of the product, or a user of the product: the security experts actually do the audit for the users of the product, and those users can't be sure the product is safe just because the security experts believe it is. The security experts are only safe if nobody can find an exploit that would make them lose money. Incentivizing experts to put in time, put in their own funds, and hopefully get paid is a way to create greater confidence. That's why we use the word assurance: greater confidence in the security of the smart contract. I'm not saying this is perfect. And obviously the policy could be very complicated, or it could be as simple as: if the balance goes below some threshold, then some condition holds. It could be that simple or it could be very complicated; that's possible as well. And I agree, you would probably want to have professionals audit the policy as well, because maybe you have a nefarious, malicious security expert who sees a flaw in the policy and doesn't tell you, because they actually want to exploit it. Yeah, exactly. Good question. So do you believe this system is better than paying security experts directly? Oh, I don't have the data to indicate that. This needs to be tested and actually used in practice to get an answer to that question. And also, who is supposed to write this smart contract that verifies whether there was an exploit? Who is supposed to write it or use it? So, this complicated contract: who is supposed to write it? Someone who owns the product? Somebody who knows how to write smart contracts; potentially whoever wrote the smart contract being secured. I'm not sure, actually, where you're going with the question. Yeah, I mean, whether you provide the service of writing this kind of contract.
Who provides the service? No, we don't offer writing smart contracts as a service. We don't do that. Okay, thanks. We do a lot of things, but we don't write smart contracts. Perhaps a silly question that follows from that: could there then not be a flaw in the agreement itself? Yeah, there could be. Buyer beware. We spend a lot of time thinking about ways we could game this, and there are definitely tricky ways. The policy is a very important part; you obviously have to get it right. But hey, you could use the security assurance protocol to stake on the policy itself. So this is going to be your product? It is, yeah. It will be in a test state for the foreseeable future, and we hope to see it become a product that we share with the community. So it won't be something that you sell, because it's just going to be something open? No, it's not a product we'll be selling. It will be designed to use our token, interacting with our automated audits, although it doesn't have to be; someone could certainly fork the code and use it in a different way. But one of our aims is to build utility for our token, for our token holders. It's a very important responsibility that we have, so there are various R&D projects that we work on to that end. This is one of the more mature projects, but still early. Anybody else? Yeah, I had one more small question. Imagine one security expert has already staked, and then a second comes along and finds a flaw. Do you plan to penalize the first security expert already at that point, because they didn't find the flaw, or will you wait until the contract is exploited? Punishing somehow... I'll have to think about that one. Anybody else? Well, I really appreciate your questions, and feel free to reach out if you have any more questions or ideas to help us out: quantstamp.com. Thanks a lot. Thank you.