I work at Optimism, focusing on security in our protocol. I was previously an auditor and co-founded Diligence with Gonçalo, and I've had the good fortune of working as a client with these three gentlemen, though not yet with Certora, and I've had an opportunity to get to know them all really well. So I really feel in my element and I'm excited to have this conversation. I'll let them introduce themselves quickly: give us your names, tell us about your firms and what makes you special.

I can go first. Hey guys, and thank you, Rajiv, if you're watching. We co-founded ConsenSys Diligence almost six years ago now, me and John, who's now at Optimism, unfortunately. ConsenSys Diligence is a crypto-native security firm that has done security services for the entirety of its life, and it also has a product side you might have seen: MythX, more recently Diligence Fuzzing, and a bunch of smaller tools that are all open source and that you can check out on our website.

Hi everyone, my name is Chandra and I am from Certora. Certora is an automated auditing company, you can think of it that way. We use formal verification techniques to prove certain properties about your smart contracts. I'm personally more on the R&D side, and one of the things we're focusing on these days is how to have more trust in the specifications you write, because at the end of the day that's really what we use as the ground truth for verifying your smart contracts. So if you have any questions about formal verification, I'd love to hear them.

Hi everyone, my name is Mehdi, co-founder and director of Sigma Prime. We're an information security consultancy focused on blockchain technology, doing a lot of smart contract auditing, as you can imagine, but very comfortable diving deeper into other layers of the stack. As John mentioned: layer 2 systems, new L1s, and whatnot.
We're also the founders and maintainers of Lighthouse, the Rust implementation of the Ethereum consensus protocol. Some of you here, I'm sure, are running validators; some of you may be running Lighthouse as well. We had the Merge a few weeks ago, a massive milestone for everyone, and a massive milestone for Sigma Prime.

Hi everybody, I'm Jonathan Alexander, CTO at OpenZeppelin. We are a security advisor and security auditor; we also work on OpenZeppelin Contracts and OpenZeppelin Defender, and we are core development contributors to Forta.

And hi everybody, I'm Nick Selby, with Trail of Bits. We are an audit and research company based in New York, but our people are all around the world. In addition to audits, we have a research group that mainly works on pushing the edges of a bunch of different technologies, including blockchain and where things are going next. And we make some pretty popular open source tools like Slither and Echidna, which several of you are probably familiar with. Thanks, I'm glad to be here.

Awesome, thanks everyone. The first question is maybe a bit of a softball just to get us going. I'll pose it to Mehdi first and then let the conversation flow. It's been three years since the last DevCon. What do you see as the biggest changes in that time? Maybe it's something you're excited about, something you're worried about. We'd love to hear it.

Interesting. I think what I've been noticing is that the quality of the projects we've been reviewing has somewhat improved over the past three years. There are caveats, obviously. We still get the occasional DeFi project with lacking access control and the typical unsafe external calls and re-entrancies, but I'd like to think that, thanks to some of the tooling that's become available (Trail of Bits has been doing great work on that front), developers now have a lot of ability to pick up the low-hanging fruit.
Contracts are getting more and more complex, in the DeFi space particularly. Some of the lending protocols we've been reviewing are pretty hectic. And the amount of money getting stolen on a weekly basis these days is probably greater than it was three years ago, but that's probably because there are just so many more people getting into the field and a lot of new projects being deployed. Obviously our time is constrained; we can't, unfortunately, service everyone all the time. So goods and bads, I guess.

We've seen a lot of projects come and go too, and a lot of money come and go. I fully agree with Mehdi, the quality has definitely improved, and I think the bear market kind of helped with that.

Okay, anybody have something different to say than "better quality"?

I have a mixed bag. Three years ago was a simpler time. Things were a lot easier to look at, a lot more self-contained, and as you just said, the maze of dependencies, the complexity, has kind of exploded at the same time that there's more and more value, so the stakes are much, much higher. On the other hand, you're right, the tooling has gotten a lot better, but it's also that all of us here are working with firms that have gotten used to the risks. We're used to the threats, we're inured to the patterns, we can see them a lot more clearly; everybody here has a lot more experience in finding things. And on the other other hand: three years ago I remember describing nation-state threats to board members of a pretty large crypto company, specifically talking about North Korea, and they kind of looked at me with blank stares, like, "well, so what?" I think that over the past three years, that nation-state involvement has gone from a kind of risk to a definite and definable threat.

Chandra?
Well, this is my first DevCon, so I wasn't at the one three years ago, but what I've noticed is that this is a very fast-moving field; there are new terms, new buzzwords, new concepts that keep getting thrown around a lot. The main thing I would say is that it's important to remember that, at the end of the day, companies like Certora, formal verification companies, or any company building tools for verifying smart contracts rely on very fundamental concepts. Even though this field moves very fast, the fundamentals are still the same, and those ideas go really far. I'm actually excited: I think we should be able to scale some of these techniques pretty well to some of the more recent concepts coming up in this field.

Nice, okay. I'll give you the next question, but let's keep it moving, because we're done with softballs. Jonathan: what responsibility do auditors bear when code they've reviewed has been exploited? Since that one's targeted, you can take a sip of water.

Well, I think everybody up here takes this work so seriously that the first order of responsibility, of course, is dealing with the issue that's found. I'm not going to say we've had situations where things we've missed have been found, but customers we work with may have a situation arise, and if you are a trusted security advisor to them, you're going to rally to help them: discussing potential fixes, reviewing fixes, even talking through public communications. We view these relationships as partnerships, and that's what we're going to try to do.

In the specific case where something is missed in an audit, and we've had situations where we'll do an audit and then, thankfully pre-release, an issue is discovered that was missed, maybe because another set of eyes was on it or because of internal testing, we always view our responsibility as doing a very thorough postmortem on what happened and what was found. We have our team analyze itself, get other people on our team and the customer involved, so that we can all learn from it, explain it to ourselves, and improve our processes going forward. I think that's a very common approach.

So we've heard from Jonathan, and he has been open about this, that a top-tier auditor can miss something, and you're lucky if it gets caught before it hits production; that can happen to anyone. So let's put it out there: has anybody on the stage actually had something they've audited be exploited in a significant way? I'm curious.

Not really, but we've had issues disclosed after the code was in production. As we've established and said many, many times, we're not insurance companies; that's the baseline. But there is an ethical and moral code that we uphold, and if something happens, it should be analyzed on a case-by-case basis, because if the problem was created by us specifically, we might react one way, whereas if it was just a miss, we might react another way. At the end of the day we'll try our best, but we're not obligated to anything, nor can we be, because that would actually destroy the entire relationship.

I'll go, because I'm just going to jump onto that. I like what you
said about the moral responsibility; I think that's really important. I don't think we have a different process; we have a pretty stringent process to determine whether something was missed. But much more important is: what happened, how can we help, and how can we reach out proactively? They're going to be in incident response mode, their hair is going to be on fire, there's going to be media involvement, they're in a world of hurt. A lot of us at Trail of Bits have worked in the incident response field, so we're very accustomed to understanding what they need to do right now to stop the burning, stop the leaking, and get things done. Our first reaction is to refresh ourselves on what we know about the project and then reach out to offer as much assistance as we can. I'm very pleased to say that the rekt leaderboard does not have our name on it today; that may change, but as of right now it doesn't. This is a process we take very seriously, because it can happen. There are no guarantees in audits. We always do our very best, but sticking with the customer in the bad times as well as the good times is the most important thing.

Sorry, while we're at it, can you give us some insight into what that process looks like? The process of choosing whether to help the customer in one of those situations.

On helping the customer, let me be clear, because I made this same mistake at Stanford: I don't necessarily mean free. There are certain things we'll do for nothing, and there are things we can do as auditors and experts in the field if more services are required, like fix reviews. But the first thing we want to do is reach out and say: hey, we saw that this happened; do you want to talk to us about what you're doing right now?

You said that pretty well. Can we agree that "audits" is probably one of the worst terms to describe what we actually do? Do we all agree? Okay, cool. Like "cyber", we just inherited it. I know, it's terrible; I feel bad every time I use the word "auditing" or "audit". These are really time-boxed security assessments, let's all agree on that. As John was saying, these things are not bulletproof; bugs will be missed. We allocate a certain amount of time to do our best to find the vulnerabilities in your code, but guess what: if you give us double the time, chances are we'll find more bugs. So let's make sure we're all clear on what we do up here in terms of security reviews and security assessments.

By the way, that's not just CYA. You used the word "partnership". People wait a long time for audits; it's a very important part of going live with a product, especially a product where, if things go bad, a lot of money can go out the door. Customers have their own priorities, budgets, and realities, and usually we end up negotiating not about money but about time, about the number of engineers who can be looking at the code. While we can't guarantee that a three-week, two-engineer audit will find all the bugs, we can guarantee that it will find all the bugs that can be found by two engineers working for three weeks. And that's a different sort of guarantee.

Okay, I think Mehdi summed that up well with the semantics of what it is we're doing and how it's defined, so let's say time-boxed security reviews. Now, whatever word we choose, do we need a set of standards for this? I'll
give it to Chandra to take a crack at it.

Standards for auditors, you mean?

That's a good place to take it as well, if you like. I was thinking more about what an audit is, what needs to happen during the course of a review.

Well, I can perhaps speak from a more automated-auditing point of view, though I think some of the ideas also apply to human-based auditing. The hardest thing is coming up with the specification: even when you're auditing, what are you actually looking for? That's one of the main questions you should think about very carefully. One of the things that makes auditing difficult is that it's hard to be very precise, and if you're not precise, you can believe you have a proof, or some kind of guarantee that a bug isn't there. It's very easy to feel like you audited really well even when you don't have the specification correct in your head. A related idea: in formal verification there's the notion of a trusted computing base, and it maps to auditing too. What parts of the code did you audit, and what parts did you not? You can only say, with some confidence, that the code you did audit doesn't have certain vulnerabilities; about the parts you didn't audit, you can't say anything. So it's very important to be explicit about what the TCB is and what exactly you verified, checked, or audited.

Anyone else?

I'm a little frightened of the word "standard" in what we do, because it sometimes comes from a place of hoping that we'll simplify what needs to be covered down to some very simple set of rules, so that anyone could cover it. I do think we need common language, and Mehdi just helped us with some of our common language and understanding, so that we're working together. But if you're a team engaging auditors, beyond having common language about what will and won't be done, this is also about people, people with experience. Getting to know who the actual auditors are, not just the team or the label but the specific individuals and their experience, matters, because those are the people you'll be working with. What's probably more important than standards is agreement on who is going to provide your security review: who those people are, what their skills are, and what their experience is.

So I'm going to take the other interpretation of the standards question. I'll steal the phrasing from Rickard, who I was talking to the other day: who audits the auditors? As a project, and I experience this now as somebody who has audited before and understands the industry far better than most, it's still scary trying to evaluate an auditor. For the code I have, that I want looked at, is this the right audit firm, and do they have the right people available? There are so many moving parts, it's crazy. So back to "who audits the auditors": I don't want to rehash holding auditors accountable and the skin-in-the-game thing, we could go round and round about that, but how can we help non-auditors evaluate the auditors they're working with? Should there be some third-party watchdog ranking you all?

A good plug here would be to talk about EthTrust. I think we all participated in that standard in one form or another, and I
think it's a dangerous proposition, because the proposition with EthTrust is that there are levels of how secure your code base is, a kind of risk assessment of what your code base will look like after you have done certain things. This is part of the EEA, the Enterprise Ethereum Alliance, and I think it was Tom Lindeman who started it. I do think it's a dangerous proposition, but it might actually be helpful if you are not deeply embedded in the security industry. It is useful, but it should be taken with a grain of salt, because if you take level three, or formal verification, as a panacea, that will probably be a disservice to you. So I'm mixed on standards for what good security practice looks like.

This is a scary one, but I'll go ahead. In terms of security work, compared with other niches of technology, the work these teams do is very public, so in some sense we're watched very closely. Our reputations are on the line all the time, and honestly, I know all of us up here lose sleep over it. OpenZeppelin works on contracts too, and we've had issues, so our reputations are very public, and we're all glad we're not on the rekt leaderboard. I think it's perfectly healthy to have conversations about how we could provide more transparency and more information, but I think we are actually very well watched right now. There still is the question of the right match for you as a project, and that's an important consideration.

Yeah, and I like that you said you lose sleep over it. Everybody in this room loses sleep over either the code they're running or the code they're auditing, so we should all go easier on ourselves. I think it'd be fun to get some questions from the audience, so if you have a question, raise your hand. Meanwhile, while the mics are getting handed out, we're going to try something: an overrated/underrated lightning round. No people, no firms, just concepts. Where should we start? Nick: plain-English specs, overrated or underrated?

Underrated, twice. I was actually just talking with Feist, who is here and heads our blockchain team, and he was saying: you can pick whatever tool you want, but if you're not able to put down in English what the invariants are, what you're trying to accomplish, you're stuck; once you can do that, you're 75 or 80% of the way through. It also speaks to a number of things beyond security review, to your own understanding, being able to declare: this is what this thing does. You can't put a value on that.

Gonçalo: overrated or underrated, contract upgradability?

Upgradability is a bug. Okay, not always. We went through this; there was a longer conversation about it at the DeFi Security Summit, which I think is available on YouTube, so I could rant on and on, and there are also a bunch of articles about it on the internet. But generally speaking, you are just injecting another attack vector into your code base.

All right. Chandra: overrated or underrated, reading the code very carefully?

It's really underrated. I think we should read code. You can't just write rules; you definitely need to understand the code before you start writing your rules. Formal verification is not a silver bullet.

It's definitely not a silver bullet? But really, how do you know what kind of guarantees you're getting from formal verification?

That's a very good question. It kind of goes back to what I was saying
before. It's important to think carefully about what you're proving, because your spec is really all you have. When you say "we have a formal guarantee", you should be very careful making that kind of claim, and I think it's dangerous to make it, because the spec itself could be wrong. Even if the spec is reasonable, there could be a bug in the formal verification tool, and that's also bad. Just because there is a specific property you verified your code against does not mean the code is bulletproof and you can trust it with your life.

Okay, Mehdi, you're up. Alternate virtual machine designs, so not the EVM: overrated or underrated?

Overrated. I don't know, I've been spending a lot of time on the EVM, so I'm getting familiar with it, and it's got its quirks.

Stockholm syndrome, probably?

Yeah, probably. Look, we've definitely done a bunch of security assessments targeting different execution environments, don't get me wrong, but obviously most developers are operating with Solidity or Vyper at the EVM level. It's got its quirks, but it's kind of working. I'd be excited to see the upcoming upgrades, particularly EOF, the EVM Object Format, which might change a lot of things with regard to the security guarantees and assumptions we make about the EVM. But yeah, I quite like it by now.

All right, Jonathan: insurance against smart contract vulnerabilities, overrated or underrated?

I'm going to say underrated, because I don't think it's widely adopted and used at this point. That's based on where we're at: our risk tolerance is really high, our skin in the game is really high, and our risk levels are not well understood, so it's hard, if not impossible, for a lot of us to get insurance. I think that will change, and there will be hedges and insurance. It is happening, it's not like it isn't, but it will happen more, and hopefully some of the work that some of us do up here will help everybody understand the risk better, so that insurance becomes obtainable for those who want it.

I'll give one opinion, since I haven't given many. Having somebody price risk who is not an auditor matters: auditors don't want to price risk, they like looking for bugs, and being an actuary is a totally different thing. There should be actuaries, and maybe they're the ones who audit the auditors; they have an incentive to. What I'm hoping, and we're doing some work with actuaries and with insurance companies on this, is to make sure that insurance in this industry does not go the route of insurance in quote-unquote cyber security. I thought it was quite hilarious when, a week ago, Lloyd's of London got hit with a cyber incident: they can't set actuarial terms if they can't even secure their own systems. What we're really hoping is that the people doing security reviews can set those standards, so that the insurance companies entering this space are actually passionate about what they do, getting into it not just because it's a new adjacent market, but because this is really innovative stuff and they want to make it safer. We're working with several people like that, and I'm very hopeful. So I'm with you: underrated.

Interesting, okay. Is the audience ready? Does anybody have a mic and a question? Yeah? All right, let's see if I
can hear you, and I'll repeat the question if needed. Oh, somebody's set up. Okay, go ahead, please.

Hey, so I was wondering: with all the monitoring tools that are coming out, Forta for example, and with private mempools, is monitoring really overrated?

I like that you're using the format. So the question is about monitoring: is monitoring, mempool monitoring specifically, overrated or underrated?

Yeah. For example, there are products out there claiming they can detect hacks and maybe secure your smart contracts, that kind of thing, but a hacker could use a private mempool to send those transactions through, leaving no chance to even react.

Maybe let's talk after; I'm not sure that's quite right. But what I will say is that I don't think anybody is claiming we've solved monitoring in the ecosystem, and nobody is claiming we can stop threats. What I will say is that there's a lot of agreement that security is not just a time-boxed security review; if that's all you do, good luck. There are things you should be doing upstream, such as formal verification and fuzzing and various other techniques, and the time-boxed security review is only part of it. There are things you should do pre-release, pre-deployment, and post-deployment, and monitoring is part of that. There's value to end-to-end security, and I think that's what we're starting to appreciate.

I want to sneak in a question quickly while we move the mic over. Alex, you're up next, don't worry. Do your clients request public reports, and when you write reports, who do you write them for?

Yes. I was telling one of my team members the other day that what we sell, effectively, is PDFs. Joking, obviously, but yes, it's critical: the output of the work we do is a security assessment report, which contains your standard executive summary, detailed descriptions of all the vulnerabilities identified, and recommendations. Who do we write it for? Excellent question: we write it for our clients. Now, a problem we see quite often in our space is readers, users of the protocols we review, going through our reports and not necessarily understanding, again, the concept of a time-boxed security assessment. In a lot of cases the scope we actually cover is a limited part of the entire protocol; this happens quite often, particularly with very complex systems. Unfortunately, some users may take our security assessment reports and say: look, this protocol is absolutely safe, because this auditing firm only found a couple of mediums, lows, and informationals, and they've all been resolved.

Because they passed your audit, right?

Exactly, that's right: they passed the audit, big check mark. None of us here provide any big green check mark in our audit reports; there are plenty of other firms that do that, unfortunately. So yes, the audience of these security assessment reports is a tricky one.

Let me sneak in another question: if a client asks you to remove or rephrase something, are you writing it for the client?

I'm sure this happens quite often. I'll take it. A client can argue about the severity of a bug, and that's absolutely fine; we're more than happy to have that conversation. "Okay, why do you think this is not a critical?" "Look, I've got off-chain systems in place to prevent this from happening, which would actually drop the likelihood of exploitation." Okay, fair, I'm happy to drop down the likelihood, which would effectively impact the severity of
the bug. But removing a bug from a report, if it's valid? Hell no. That goes against the trusted advisor role we play.

And on how to read our reports, I think all of our reports: it isn't about the check mark. What I want to see from a customer is that they publish our original report, which we will not change, so that you, the public, can look at it and see the problems they faced; then you can look at the fix review report, which is separate, or an appendix, and watch the progression as they take the advice, as they understand not just how to fix that bug but how to prevent bugs like it from happening at all. Some companies are afraid to do that, because people might think they're not good at security. I look at it the exact opposite way: a company that is proud to say "these are the challenges we faced and here are the innovative solutions we came up with to address them" is where I want to look, because they're actually taking the advice and doing what they should do.

Chandra, is there a difference in the way a report needs to be presented when it's heavy on formal verification?

I don't think so. We have very similar reports: like Mehdi said, we have the severity of each bug, and we explain exactly what property we wrote and why it corresponds to the bug. The one additional thing we very much emphasize is: be careful, this is not a silver bullet. But in terms of the reports we write, there's no difference, to be honest.

Okay, Mike, thank you for waiting.

Disclosure: I work with Jonathan at OpenZeppelin, but I didn't tell him about the question ahead of time, so there's no front-running here. If you could wave your hand and have every single developer in the space, but most especially your clients, automatically follow one best practice without you having to tell them ever again, what would it be?

Yeah, it was a bit echoey, so: if you could magically be sure that every client adhered to one best practice, what would it be? Whose field is this?

All right, I've got one: do not change commits mid-review. Please. Thank you.

Should I start formal verification at the very beginning and use it continuously throughout the project, or only at the very end?

That's such a good question. The question is: should you start formal verification towards the end or towards the beginning of your development? You should start as early as you can; that's based on our experience, and I think that's the right way to approach formal verification. The most important thing is that it really helps you write your code in the most simple, modular way, because that's what is best suited for formal verification, and it automatically prevents you from making certain mistakes. No matter what kind of verification technique you're using, the earlier you start, the better. Along the same lines, writing your specs out in English, and gradually making them more formal, is also something where the earlier you do it, the better. Once you've already implemented your code, trying to come up with an invariant, or trying to write down the pre- and postconditions, gets really hard.

I like the "shift left" security meme, which is basically this idea: if you think of your process as moving from left to right, then writing your code first and only afterwards writing your specs and tests is doing it on the right-hand side. So the more
So I have a question for you. I think Jonathan and Nick were touching on it: crypto is a very narrative-driven industry in general. If we zoom out a little and get outside of the security echo chambers, what are some narratives that you want to see, and what are some narratives floating around right now, in terms of security, that you want to correct? Did you catch that? I'll try: you said we're in an echo chamber of security conversation, and we need... I'm struggling. I was going to say: if we assume the security community within crypto is at the table, and there's a broader developer community outside, building the tools, who are susceptible to certain narratives, what are some narratives floating around, about security, about audits, about how people see security in general, that you want to take the chance to correct? All right, I'll repeat what I heard for everyone, and correct me if I misheard: bad narratives in our industry that should be corrected, hopefully quickly and early on. That's all I got. What are you angry about? I'm angry about a lot of the raises that have happened in the security industry recently. I think that money and capitalism are creating legitimacy for people who have not demonstrated proficiency, so this is a narrative that really grinds my gears, for example. I'll give someone else a chance. All right, since Gonzalo started with things that make us angry, I'll do one too. I think that our desire for cost efficiency and time efficiency is not in our best interests.
I mean gas savings, and speed of transfer from a layer two to a layer one through the bridge that was just developed yesterday, and our lack of acceptance of latency on anything, which could otherwise buy us more security verification. I think these things work against us. I understand why we're doing them and how they help initially, but I hope the narrative changes so that we aren't as worried about gas savings, we aren't as worried about speed, and we're a little more worried about security and safety for the users.

Hello everyone. My question is: is auditing good enough, or would you consider alternatives, like open arenas or bounty vaults, like Hats Finance for example, or maybe deploying two medium-tier auditors, or just one top-tier auditor per contract, considering timelines? I'm not sure I fully understood the question, so please correct me if I'm wrong, but I think you asked: if auditing is good enough, then why do we need bounties? Let's make it more general. Some new alternatives to audits have emerged: giant bounties, and Code4rena and Sherlock running these auditing competitions, which are a bit like a time-boxed bounty program. How do those substitute for, or complement, auditing? My answer goes back to what I said a little earlier: a bounty can give you this one bug, which is interesting, but it's not really as interesting as how you are working. An audit is a holistic view of all of the code, by engineers who get as deeply and intimately involved with the code as your internal people are, but who see it through a different lens: the lens of people who have literally watched everything that could go wrong, go wrong, and who are able to see mistakes in everything from documentation to your overall maturity.
It's a different set of things. If you want to find specific bugs and fix those, that's great; there are other things outside the general scope of what we do in a security review. I'm not suggesting that an audit or security review is the be-all and end-all, but it's also different from a lot of what I'm seeing out there. As far as automation goes, we use automation to find the low-hanging fruit, or to point us in the right direction, so that humans can look at it and make educated decisions about what we think; there's a lot more to it than "run this tool, see what you get, and hand over the report". Anyone else? Quickly, we're running low on time. Sorry, no, I meant on the stage, if anybody wanted to add something. All right, I'm going to be selfish too. In the time we have, I really want to plant this idea as much as possible: it's not the auditor's job. They're not selling you security, in my opinion; they're selling you insight into your ability to write secure code. You need to take responsibility for writing secure code, and not act as if you just get to write code that does cool stuff, hand it to the auditor, and they will make it safe for you. Focus on being able to write secure code, with whatever process and whatever tools you think that requires, and then you give the auditor your code so that you can understand how close you are to being able to write secure code. Amen. It took me a long time to get there, so it's a bit of an epiphany I'm trying to share. Unfortunately we see what John was describing way too often, and it'd be great if we could all collectively shift towards a different definition of what our work here means. Thank you.

OK, there was another question, so let's have it. When Foundry first came out, I was surprised by the fact that it included fuzzing.
So my question is: when does fuzzing make sense on these code bases, which are generally open source and small, and how do you approach fuzzing in your auditing? Fuzzing makes a lot of sense all of the time. Going back to the shift-security-left narrative: you should use all the tools, literally. You should write specs; if you have specs, fuzzing will work better. So yes, just do it all the time. Use Foundry, and use all the other tools: use Echidna, use Harvey. By the way, I forgot to say, I've been announcing this the entire week, but since this is the panel: we at Diligence will be open-sourcing all our tools, just like Trail of Bits. We are a bit behind schedule, but better late than never. Even Harvey? Yes, all the code bases. To add to what Gonzalo said, I think using tools is good, so fuzzing makes sense to me. Again, it's not a guarantee, but it's just good software engineering: you write code, and you want to write tests, and you want to do fuzzing, and you want to write specs. These are all good software-engineering practices, and they carry over to this domain as well. One comment about what you both just made: the ability to set forth a spec and then fuzz around it is really important. That first part is what Josselin and Gustavo are going to be doing tomorrow; there's a two-hour workshop on how you declare these things, how you actually set out and describe invariants, and how you can use those to make your fuzzing more effective, so I would really recommend that you go to that. To be fair, I know we're on a timer; let's give every speaker 30 seconds to... sorry. All right, we started the questions way early, so we've got tons of time. I should have read the manual.
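The "write a spec, then fuzz around it" idea above can be sketched very simply. This is a hand-rolled toy fuzzer in Python, purely illustrative: the `Vault` model and its invariant are hypothetical, and real tools like Echidna or Foundry fuzz the compiled contract with far smarter input generation, but the shape of the loop is the same:

```python
# Illustrative only: a minimal hand-rolled invariant fuzzer in the
# spirit of Echidna / Foundry invariant testing. The "contract" here
# is a toy Python model, not real on-chain code.
import random

class Vault:
    """Toy deposit/withdraw model with one stated invariant."""
    def __init__(self):
        self.balances = {}
        self.total = 0  # tracked sum of all user balances

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user, amount):
        if self.balances.get(user, 0) >= amount:
            self.balances[user] -= amount
            self.total -= amount

    def invariant(self):
        # The spec: the tracked total always equals the sum of balances.
        return self.total == sum(self.balances.values())

def fuzz(seed=0, runs=1000):
    """Drive random call sequences and check the invariant after each one."""
    rng = random.Random(seed)
    vault = Vault()
    for _ in range(runs):
        user = rng.choice(["alice", "bob", "carol"])
        amount = rng.randint(0, 100)
        rng.choice([vault.deposit, vault.withdraw])(user, amount)
        if not vault.invariant():
            return False  # counterexample found: the spec is violated
    return True

assert fuzz()  # the invariant survives 1000 random call sequences
```

Notice that the fuzzer is only as good as the invariant it checks, which is exactly the panel's point: the spec comes first, the tool comes second.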
Did... Brian, I know Brian had his hand up at one point. OK, go ahead, and I'll repeat it through the mic. Is there a way to make the output of audits machine-readable? That's interesting, because we actually... that's later, right? I was going to go there; that's really interesting and I think we're going to take that up, so hit us up afterwards. I think it's going to depend a lot on the writing style of the security firm you engage with, but it is something I'd like to think about a bit more. Perhaps we could parse some of those reports into a machine-readable format; that shouldn't be too hard to do. We write our reports in LaTeX, so it should be pretty trivial for us. At Certora we have actually been working on a related thing. I don't know if you've heard of tools like Doxygen; NatSpec is another similar one. The nice thing is that if you have a formal spec, it's a little easier to convert it to a structured format, because you can annotate your code and then automatically generate some kind of formatted report from that. So I think it's a good idea.

Hey, Kevin over here. Hey friends. Having come from the security side, and now being a protocol engineer and founder, I'm wondering about this line you tread as an auditor: being very slow and methodical, heavy testing, the core protocol must be super secure, versus now wanting to move a bit faster on, say, integration contracts or something that's not in my core protocol. How much time and money am I spending on testing and auditing, versus moving fast and seeing if things work? I have my own opinion, but I'm interested to hear what's on the stage.
I'll say it quickly: the most secure code is the code that never hits mainnet, so just keep testing. I also think it depends on how much money you raised, or how much money you have. It's a very practical thing: you can spend a lot of time and resources writing tests, but at the end of the day, if you run out of money and all you have is the tests, you're probably fucked. OK, cool.

This might come off as somewhat pedantic, but I'm curious, looking at the different ways auditing firms express severity: impact, and then difficulty or likelihood. I'll be honest, I preferred likelihood, since difficulty is low when likelihood is high. So, for example, high likelihood and high impact would equal high severity. Have you seen clients get confused by these differences? And without being overly strict while the industry is still developing, I'm also curious how we might classify things: Trail of Bits has a nice classification, while SWC seems outdated and not maintained as well as it could be going forward. Are there things we can do as an industry to improve and start to, I hate to say solidify, but structure our information architecture a little more, so that clients receiving multiple audit reports can understand them with less confusion? I think it's a great point. For us at Sigma Prime, high likelihood plus high impact equals critical, and I think we all use similar kinds of risk-rating matrices. It would probably be beneficial to standardize how we rate severity; it's typically likelihood of exploitation and impact of exploitation, but there are different levels. We've got four of them at Sigma Prime: informational, low, medium, high, and critical. That's actually five. But yes, I think it's a great idea. There have been a lot of initiatives over the past few years on the ETH security channel to try to standardize some of this; it's never happened, I don't really know why, but I think it would be quite beneficial.
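The likelihood-times-impact scheme described above is easy to pin down as a lookup table. This is a generic, illustrative sketch in Python; the exact matrix varies from firm to firm, and this particular table is an assumption for illustration, not any firm's official methodology:

```python
# Illustrative only: one common way audit firms derive a finding's
# severity from its likelihood and impact. The specific cell values
# here are a generic example, not any particular firm's matrix.

SEVERITY = {
    ("high", "high"):     "critical",
    ("high", "medium"):   "high",
    ("medium", "high"):   "high",
    ("medium", "medium"): "medium",
    ("high", "low"):      "medium",
    ("low", "high"):      "medium",
    ("medium", "low"):    "low",
    ("low", "medium"):    "low",
    ("low", "low"):       "informational",
}

def classify(likelihood: str, impact: str) -> str:
    """Map (likelihood, impact) onto the five-level scale mentioned above:
    informational, low, medium, high, critical."""
    return SEVERITY[(likelihood, impact)]

assert classify("high", "high") == "critical"
assert classify("low", "low") == "informational"
```

A shared, published table like this is essentially what standardization across firms would mean: the same pair of inputs yielding the same label in every report.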
I'm going to give you a bit of a left-field question. There have been a few regulatory proposals that include software-security best practices. What sorts of standards driven by regulation do you think could be damaging to the space, and which do you think could be good? The typical scenario: a regulator comes out of nowhere, maybe a bit naive; what kind of damage could they do by misunderstanding? I don't know if this goes to the intent of your question about the structure of code, but I think we're all concerned about regulations that would impose requirements on protocols, for example to be able to block certain parties. So a lot of our concerns are around regulations that impose requirements like "your contracts must be upgradeable", for whatever reason; those things are always damaging. If you're talking about regulations at the code level, I think there's no way they can be good. On the other hand, having worked in security before, in the cloud space, the SOC 2 standard actually has a lot of really good aspects covering the security-focused processes of your software-development lifecycle. Those are good practices, and if teams were transparent about the processes they follow, and were held to a certain standard, I think that's healthy. All these things have downsides, though, in terms of the effort that goes into auditing and proving them. The lifecycle of regulation is really tough: getting regulators and government officials to understand the key areas is hard; they have a goal of what they want to accomplish, but usually they're rather far behind the technology.
I think it's up to us. We certainly do this, and I bet everybody else up here does too: spend a lot of time talking to regulators about what is possible and what isn't. We spend a lot of time talking about the intent, and trying to come up with more open suggestions to industry rather than prescriptive ones, because if you say "thou shalt do the following", it's always going to be out of date by the time it passes, and it becomes very, very difficult. The best, most practical thing we can do is educate the people who would regulate us. Thanks for clarifying, Brian; that makes a lot more sense. To give you maybe an exotic perspective: I was catching up with some regulators in Australia a few weeks ago, and this is exactly what they were asking for. They said, "what should we be requesting from projects in terms of security practices and secure development, and what should we be regulating in the space?" I tried to convince them to go down the educational path, so this is spot on. It's certainly happening, at least in Australia: these people are starting these conversations and starting to realize that you can't enforce these things on projects, but you can at least publish a set of guidelines, point your citizens to them, and get them to demand these standards, a set of minimum requirements, from the projects they interact with. Thank you.