So my name is Arnaud Le Hors, for those who don't know me. I'm part of the open technology group at IBM. I've been working in open source and open standards pretty much all my life at this point. I'm one of the dinosaurs of Hyperledger; I've been involved in Hyperledger since the very beginning. I'm a contributor to Hyperledger Fabric, and I've served on the technical steering committee all these years, part of that time as the chair. More recently, about a year ago, I started working also in another project of the Linux Foundation called OpenSSF, which you may have heard of, which has to do with security. And although I'm not a security expert, it led me to look into the security side of the software industry, especially in the space of open source. So I thought it would be interesting to have a look at this within Hyperledger. A while ago, I brought that up in the technical steering committee, and I said, there are best practices being developed and promoted by the Open Source Security Foundation, OpenSSF, and I think we ought to look into this. Hey, Brian, welcome. Please have a seat. So we got a security task force started in the TSC, and we started looking into the security spectrum associated with blockchain open source in the context of Hyperledger. And it was interesting because we ended up having different types of discussions, some of which have to do with how the projects themselves run and operate, and some of which have to do with best practices. Another aspect, obviously, is that especially in the space of blockchain, we're very sensitive to security. So we also had several discussions of different aspects of security and blockchain technology.
So today, I thought it would be interesting to have different people come up, fairly unscripted. They'll come one after the other for a few minutes and talk about different aspects of the security space for blockchain. So without further ado, let me call my colleague Angelo. Please, Angelo, join me. Give a quick introduction. Hello, everybody. I'm Angelo De Caro. I'm a research staff member at IBM Research Zürich. I'm one of the architects of Hyperledger Fabric, I'm a cryptographer by background, and I'm also on the technical steering committee. And currently I'm working on CBDC, central bank digital currency. Yep, thank you. And unlike me, he's a true expert in security, not just one who plays the security expert at times. So Angelo, tell me a little bit. You've been involved in blockchain for a long time, with a specific focus on cryptography, security, and privacy. What are the things you're working on nowadays related to those topics specifically? Yeah, definitely. As I said, I'm working on central bank digital currency, and privacy is one of the key features for a central bank digital currency. You must have a ledger that protects the information about the users who are transacting, even against the central bank. So it's very crucial. In order to do that, we have to deploy technologies like zero knowledge and multi-party computation, to protect the keys in such a way that they are not in a single place, so we can distribute this load to other parties. And I think blockchain finally gave us the possibility to deploy all these technologies at a wide scale. And CBDC, I think, is really a core use case for this technology. Zero knowledge, absolutely. And indeed, I want to invite everybody: Wednesday afternoon, we'll have an entire workshop dedicated to CBDC. We will have the Banque de France presenting their experience with the technology.
And we will also have a focus on the Fabric Token SDK, which is the one delivering these technologies to Fabric and other blockchains as well. OK, I was going to ask you what people should look into if they're interested in this. There's the Fabric SDK, the Token SDK, and... Definitely the Fabric Token SDK is the entry point for all these technologies. It's a Hyperledger Labs project, as you know, and it's among the 10 most active Hyperledger Labs projects, so definitely. Does that depend on Fabric? Right now, so it started as an SDK for Fabric, but actually now we've extended it to be able to deal with other blockchains. So you write your application once and then you can run it against multiple blockchains. That's the beauty of it. And you don't have to worry about the zero knowledge aspect, because it can be very scary at the beginning if you don't understand the technology. The developer doesn't need to know what zero knowledge is; the framework will take care of everything. So Fabric is a bit of a misnomer in this case, because the Fabric Token SDK no longer depends on Fabric. That's what you're saying. And wasn't the Fabric Smart Client also involved in this? Yeah, yeah, actually. So definitely that's another... I think everything started exactly because Fabric is a beast, very complex to deal with. I know because I'm one of the designers. It was very powerful at the core, but usability-wise it was very complicated. So the Smart Client is another Hyperledger Labs project that wants to bring usability to the client side, and we leverage it also for the token application. The idea is that a domain expert, really just a domain expert, should be able to write these applications without having to worry about the cryptographic aspects, which are really scary even for cryptographers, to be honest, because all these protocols working together are not simple. Not simple to handle.
So having a framework that does the right thing, just what you want, yeah. So this is one of the aspects, yeah. And since you talk about cryptography, and unfortunately the term crypto has now been hijacked by cryptocurrency: when I talk about crypto sometimes, I really mean cryptography, and people say, you're talking about the blockchain thing, and I'm like, no, I'm talking about cryptography, which is a bit unfortunate. But, you know, detaching yourself from Fabric and all this good stuff happening in the space of cryptography, one thing I ask and people keep asking is: what happens to cryptography when quantum comes into the picture? This is the first thing I asked one of his colleagues when I met her years ago. I was like, what happens to crypto? And she's like, no problem. So what's the deal, what's the situation? Yeah, very good, thanks. Indeed, as you probably know, NIST finally decided, and to be honest, I'm very proud that some of the algorithms that IBM proposed got accepted, so they will become the standards and they will offer post-quantum security, at least for signatures and encryption. If we translate this to blockchains, it means we are covered for signatures, and we are already covered for hash functions. For the hash functions themselves, we just have to choose a hash function with a bigger output: instead of using 256 bits, we use double that, and we are good to go with the current hash functions. So hashes and signatures are fine, but unfortunately that's not really enough for the more advanced applications. Like, we still don't know how to do anonymous credentials very efficiently. This is something that appears in self-sovereign identity, but it matters a lot also in CBDC, because you want to protect the anonymity of the transactors, right? And anonymous credentials are the primitive that guarantees this.
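The point about doubling the hash output can be made concrete. This is a hedged, illustrative Python sketch (not from the talk): Grover's algorithm gives a quadratic speedup on brute-force search, roughly halving a hash function's effective bit-security, so moving from a 256-bit to a 512-bit digest restores the pre-quantum margin.

```python
import hashlib

# Grover's algorithm roughly halves the effective bit-security of a
# brute-force preimage search. A common mitigation is to double the
# hash output size, e.g. SHA-256 -> SHA-512.
msg = b"hyperledger block payload"

d256 = hashlib.sha256(msg).digest()
d512 = hashlib.sha512(msg).digest()

bits_256 = len(d256) * 8  # 256-bit output: ~128 bits against a quantum attacker
bits_512 = len(d512) * 8  # 512-bit output: ~256 bits even against Grover
```

The scheme itself does not change, only the parameter choice, which is why the speaker says current hash functions are "good to go".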
Zero knowledge itself can also break with quantum computers, the old-fashioned zero knowledge. We have schemes that are post-quantum, but they are not there yet on efficiency. But research, also at IBM, is moving fast, so we hope to have a replacement for that part too. So is that it, are we done? Well, apart from the cryptographic schemes, there are also other aspects that must be taken into consideration. OK, now we have new keys and new algorithms, but these keys must be understood by our stacks, right? Take the TLS/SSL stack, the X.509 certificate stack: they now have to understand these keys. So we'll have to see all these stacks updated first, and then will come the updates to the blockchains. Luckily, as you know, for Fabric we have pluggability of the cryptographic algorithms. We started since the beginning with this kind of crypto agility, so it's easy to plug in and change your algorithms. But yeah, the stacks are not there yet. But do you expect the pluggability to actually work? Because, no offense, but there are many aspects of Fabric that were designed to be pluggable, and when we actually came to try to change the pieces, we realized the APIs didn't really cut it, and we had to rework them. No, no, for this it's fine, because the APIs are very simple. Hash function, nothing changes; signature, it's very simple, nothing changes. And so this is all quantum-resistant cryptography, but people are also working on quantum crypto, like crypto using quantum mechanics itself. Oh, this, there are algorithms to communicate using quantum physics, quantum mechanics directly, yeah, and that's already in use. I think they already use it in production. You already have satellite communication using quantum mechanics, so essentially it cannot be tampered with.
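The crypto agility point (swap the algorithm, keep the call sites) can be sketched as a minimal interface. This is an illustrative Python sketch, not Fabric's actual pluggable crypto API, and HMAC here is just a convenient stand-in for a real signature scheme (the hypothetical SHA-512 variant plays the role of a bigger, "post-quantum-sized" drop-in).

```python
from abc import ABC, abstractmethod
import hashlib
import hmac

class Signer(ABC):
    """Crypto agility: application code depends on this interface,
    never on a specific algorithm, so schemes can be swapped later."""
    @abstractmethod
    def sign(self, msg: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, msg: bytes, sig: bytes) -> bool: ...

class HmacSha256Signer(Signer):
    """Stand-in for today's classical scheme."""
    def __init__(self, key: bytes):
        self.key = key

    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self.key, msg, hashlib.sha256).digest()

    def verify(self, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(msg), sig)

class HmacSha512Signer(HmacSha256Signer):
    """Drop-in replacement with a larger output; nothing else changes."""
    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self.key, msg, hashlib.sha512).digest()

def notarize(signer: Signer, tx: bytes) -> bytes:
    # Application code never names the algorithm.
    return signer.sign(tx)
```

This is why "hash function, nothing changes; signature, nothing changes": the interface (bytes in, bytes out) survives the algorithm swap.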
So you will see, you will recognize when someone tries to tamper with the message. So that's pretty advanced already. So you still have plenty of work to do. Yeah, let's go, let's go. And I'm gonna be out of a job. To be honest, you probably know, of course, that we also have a quantum computer at IBM, too. Yeah, that's the first. So that's cool, it's a gigantic fridge, but it's beautiful, yeah. All right, well, thank you very much. I'm taking back the microphone. Thanks for your participation. And now I'm gonna call Danno, if you would join me. Danno is one of the maintainers of the Besu project in Hyperledger. And by the way, they're both also TSC members with me. So Danno, welcome. Hello. How are you doing? Good. Good to have you. So you are in a different part of the Hyperledger ecosystem. Why don't you tell us a little bit about what's going on? What are the kinds of things you're thinking about when it comes to security on your side of the house? Yeah, so as Arnaud mentioned, I'm working on Hyperledger Besu, and it's a mainnet Ethereum client. One of the very distinct things we have compared to a lot of the other projects at Hyperledger is that we need to worry about smart contract security, smart contract changes, and the risk factors that go into that for people who develop on Ethereum mainnet. The problems are very, very diverse, and some of them very problematic. One particular smart contract bug, dating back about five years, often known as just the DAO: there was a bug in the way that funds were being transferred. The contract would commit the transfer and then call another contract to do some other effects. And somebody figured out how to make that second call fail, so it wouldn't detect and revert the first transfer. In effect, they could slowly leak money out of it.
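The DAO-style bug described above is essentially a reentrancy problem: the external call happens before the state update, so the callee can re-enter and withdraw again. A toy Python model (hypothetical names, not real contract code) shows why the ordering matters, and why the "checks-effects-interactions" pattern fixes it.

```python
class VulnerableVault:
    """Toy model of the DAO-style bug: pay out *before* updating state."""
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, receive):
        amount = self.balances.get(who, 0)
        if amount > 0:
            receive(amount)          # external call first (the bug)
            self.balances[who] = 0   # state update second

class SafeVault(VulnerableVault):
    """Checks-effects-interactions: update state before calling out."""
    def withdraw(self, who, receive):
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.balances[who] = 0   # effect committed first
            receive(amount)          # external call last

def drain(vault, who, depth=3):
    """An attacker callback that re-enters withdraw before it returns."""
    stolen = []
    def receive(amount):
        stolen.append(amount)
        if len(stolen) < depth:
            vault.withdraw(who, receive)  # re-enter: balance not yet zeroed
    vault.withdraw(who, receive)
    return sum(stolen)
```

Against `VulnerableVault`, a 100-unit deposit pays out three times; against `SafeVault`, the re-entrant call sees a zero balance and gets nothing extra.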
And that is the event that really kicked off a lot of the notion of smart contract auditing and these best practices. Like, you never give them the money until you have the tasks you need done first. I mean, that's like sales 101: you give your contractor the money and they disappear, what can you do? And the same sorts of situations show up in smart contracts. So one of the recent developments in the past few weeks is that the Enterprise Ethereum Alliance released a spec called EthTrust. It's a collection of some of the best practices they've discovered, that a lot of the auditing firms use as a baseline for the things you need to look into as you write your smart contracts. Now, it looks like there are 157 different points, which sounds like a huge number to worry about. But almost all of them are immediately ameliorated if you use the most current version of the compiler, because a lot of them say: if you use such-and-such a compiler version, don't use this construct, because it's bugged and produces wrong code. Which is part of living in a constantly moving ecosystem: you gotta stay up with your tools and stay current. Now the simple answer might be: why don't you just always use the most current tools whenever you compile anything? That's what I'm used to in the Java world: if there's a bug in the Java VM, you just upgrade your Java VM. But because of the compatibility requirements of some of the contract libraries you're working with, sometimes they are locked into a specific version of the Solidity compiler. And there are issues with supply chain attacks and provenance, which is why they need to use a specific version of the contract, and it's hard to upgrade.
So in this specification, they'll at least go through and enumerate: if you're using this version of Solidity, you need to look out for this particular problem, and this is how you would fix it to make sure you're not impacted. There are also higher-level issues, like the one I was talking about, where you make sure the reverts are atomic before you make calls to other contracts, so you don't accidentally leak out money. And there are all sorts of others. Well, standard practices. And there are three levels: the base level of don't do dumb things, and the higher levels that involve actually getting humans to audit it and look at it, and having good, standard code formatting and linting. And as a developer, some of these security recommendations just make me all twitterpated, because it's like, oh my goodness, code linting and keeping your code clear is actually a security impact. Now I know how to get my manager to change those things: this is needed for security. This actually is not specific to the blockchain space, right? I mean, this is something OpenSSF talks a lot about. There's a serious lack of education in the space of safe, secure programming. And of course, it's very detrimental to the blockchain space when it's smart contracts. There are regularly attacks, right? The news is full of stories about attacks where people have managed to drain accounts because they exploited some bug, some vulnerability in a smart contract. So is it your opinion that people are recognizing this is a serious problem, and that they need to lock down the technology, get the proper education, and get the right tools to really do a better job? Yeah, I think they need the proper education and the right tools. Because if you look at all these hacks, you know, it's the number of repeat attacks versus novel first attacks. There are a lot more repeat attacks.
It's not often that you'll see a novel new attack. Probably the most novel one I've seen recently was Nomad, and it was some issue with the way they were doing the math with their Merkle tree proofs. And it showed up in regular old code inspection, like, you didn't check for null here. It was some very simple thing that they had persuaded themselves wasn't a big deal, but it turned out that it was. They even found it in audit, and they ranked it as low; even the auditor said, yeah, that's just an annoying lint thing. But that's how one of the exploits got through. But that's the exception, you know, that's a new attack, a novel attack. A lot of these attacks that are coming through are just rehashing old attacks. So that's one of the values the auditors provide: they have the catalog of all these old attacks, and they'll run it against your system and make sure it doesn't work. I mean, stepping away from smart contracts into actual basic code: when we had the security audit a couple of years ago, they noticed that we had a GraphQL endpoint. So one of the things they did is they just went to their GraphQL library, ran their set of standard attacks, and they actually found something that both Geth and Besu were vulnerable to if you exposed GraphQL. It had a very simple fix to keep a loop from going on forever: you either need to make your GraphQL schema a directed acyclic graph, or you need to have some sort of weighting to make sure it doesn't loop forever and produce ridiculously large amounts of data. That's one of the things these auditing firms do: they have a much larger catalog of smart contract attacks than you can keep in your head.
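The fix alluded to above, bounding query recursion (or weighting query cost), can be sketched with a simple depth check. This is an illustrative Python sketch, not the actual Geth/Besu patch: queries are modeled as nested dicts, and a self-referential schema (block, parent, parent, ...) is what lets a tiny query expand into an enormous response.

```python
def check_depth(selection, limit, depth=1):
    """Reject a (toy, dict-shaped) GraphQL selection set nested deeper
    than `limit` before executing it, so a cyclic schema can't be used
    to produce unbounded responses."""
    if depth > limit:
        raise ValueError("query too deep")
    for sub in selection.values():
        if isinstance(sub, dict):
            check_depth(sub, limit, depth + 1)

# A normal query vs. an abusive self-referential one.
shallow = {"block": {"number": {}}}
deep = {"block": {"parent": {"parent": {"parent": {"parent": {}}}}}}
```

Real servers typically combine a depth cap like this with per-field cost weights, which is the "weighting" option mentioned above.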
And at the lowest level of the audits they do, that's what they do: they just turn the Gatling gun at your system and fire away everything they can. After they've done that, if you're still standing, that's when they'll get really excited and say, ooh, I gotta get creative to figure out what's wrong. So from a project point of view, you're talking about Besu, and that's one of the things I was talking about: I'm trying to bring in some of the best practices developed by the OpenSSF. How do you think Hyperledger handles that, and especially the Besu project, when it comes to things like vulnerability disclosure and so on? So this is something we're trying to get our hands around. One of the things a lot of the projects have been doing is using GitHub advisories. We have an email address that all security vulnerabilities are supposed to be funneled through. We have a committee of people who look at the vulnerabilities and filter them to the right person to get the right response. Some of these reports are not vulnerabilities; they're misconfigurations of some service that just needs to be set up properly. Some people are just fishing for dollars. But on occasion, you'll get an absolute gem that comes through. There's one that I'm working on: a vulnerability announcement that's gonna come out in the next little bit from Besu. It's been patched. But while it was unpatched, if you knew what to do, you could bring those networks to a halt. So beyond that, what we're doing in Hyperledger is making sure that the vulnerabilities that get reported get routed to the right people, and if they're being ignored, I've noticed that Ry Jones is really good at knocking on the door and saying, are you gonna do anything about this? Absolutely.
Yeah, because responsible disclosure is an agreement between two parties: I'm gonna disclose your vulnerability to you, and I'm gonna hold off telling the world for 30, 60, 90 days while you fix it. And if you're doing nothing with it for 30 days... these are security researchers; finding a vulnerability is a feather in their cap. They're not gonna sit on it for a year while they wait for you to get your project plan together. If they don't get what they need after 30, 60, 90 days, they're gonna release it, because they did their part of the game: they talked to you, they gave you your time. And for a vulnerability like this, if only a few people know about it and are exploiting it, that's not the best situation. Either nobody needs to know about it or everyone needs to know about it, and the responsible part of that disclosure is: once you've let people know about the vulnerability, and you've given them sufficient time, and they're doing nothing, you have an ethical duty to tell the world. All right, well, thank you very much for your participation. I'll take back the microphone. And that's a great segue to Brian Behlendorf. We have the honor of having Brian here, who doesn't need any introduction, I think, but you can still introduce yourself. One of the things we were just talking about is responsible disclosure, and OpenSSF, as part of its work, has published a couple of guides on that point. The first one was to help open source projects adopt proper policies on how to handle disclosures. And we just recently published another guide which is kind of the opposite: for finders of vulnerabilities, how to interact properly with an open source project. So Brian, please, welcome. Hi, hello everyone. I think I know about half of you in this room at least, if not more. Good to see many of you again.
I stepped off a plane about 60 minutes ago from San Francisco, so I thought I was looking a little greasy or whatever, but... You're doing great. Okay, great. And yeah, it's really great to be with my community again, the community I spent five years in, until September, when I stepped over to go lead the OpenSSF, still within the Linux Foundation. September last year. September last, wait. Was it September, right? Right, so actually it was more like October last year. But anyway, yeah, so we'll be announcing the finders' guide for CVD tomorrow, which is intended to help everybody who's crawling through code looking for stuff. But OpenSSF is like a circus: there are lots of different things going on, most of which are actually not software. We have a few software projects like Sigstore, which is a signing authority, inspired a little bit by the lightweight key distribution mechanisms used by Let's Encrypt, to try to sign artifacts through the entirety of the software supply chain. We have a specification called SLSA, and some software to implement it, which is intended to provide degrees of confidence and levels of attestation throughout a software supply chain. But a lot of what we do is guides and educational materials, training courses that have been put up. One of those is about a 20-hour course on writing secure software. It's a compendium of the kinds of gotchas and common mistakes that software developers make, such as: don't trust user-provided input, ever, at all. Don't use it for database queries, right? Hello, little Bobby Tables, for anyone who knows the XKCD comic. But most importantly, don't parse it for format strings, which is part of the mistake the Log4j developers made. And if anything, the Log4j incident, which I prefer to call Log4Shell, because Log4j is the name of the community and Log4Shell is the name of the compromise, and it's unfair to tarnish the developers with that mistake...
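The "don't parse untrusted input for format strings" rule has a close analogue in Python: `str.format` on an attacker-supplied template can walk object attributes and exfiltrate data, much like Log4Shell's lookup expansion. This is a hedged, illustrative sketch; all the names here are made up for the demonstration.

```python
SECRET_CONFIG = {"api_key": "hunter2"}  # pretend server-side secret

class Greeter:
    def __init__(self, name):
        self.name = name

g = Greeter("alice")

# BAD: attacker controls the *template*. str.format field access can
# walk: bound method -> __globals__ -> module dict -> the secret.
evil = "{0.__init__.__globals__[SECRET_CONFIG][api_key]}"
leaked = evil.format(g)

# GOOD: the template is a trusted constant; user input is only data,
# so the malicious string is rendered literally and harmlessly.
safe = "hello, {name}".format(name=evil)
```

The lesson is the same as with Log4j: untrusted input must only ever be data, never something your logging or templating layer interprets.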
But it was so symptomatic of a lot of systemic issues in the open source community. And in a way it served as a galvanizing force for us in OpenSSF to go, wow, there are a lot of different angles on how things are broken out there, and maybe we can have an impact on that. So you've started talking about different things being done at OpenSSF, but stepping back a little bit, what is the mission of OpenSSF? Maybe not everybody is familiar with it. So the mission of OpenSSF is to improve the state of security, and the default state of security, in open source software and across the software supply chain. The difference between open source software and the entirety of the software world is getting smaller and smaller. About 90%, it's claimed, of the average stack of code, whether that's in a phone or on a server or in a car, is pre-existing open source code. And when something like Log4j hits, it becomes a vulnerability that can be exploited not just on Apple's iTunes website, but also on systems that run nuclear facilities and water treatment plants; apparently door badging systems were also vulnerable to Log4j. And it's something that not only the Log4j developers care about, and the downstream redistributors and the end users, but also folks like the National Security Council in the United States and other national governments, who kind of asked: is this the ordinary state of things (you know, kind of a leading question), or could you do something better? And in a way, the OpenSSF has become the answer to: how do we as a community organize to try to do something better? Yeah, absolutely. And I very much like your statement about open source, because a lot of people say OpenSSF is all about securing open source software. And I say, well, it's really about software in general, because indeed, there is no software out there anymore that doesn't use some open source.
And as, you know, Danno was touching on, oftentimes the problems come from your dependencies. We've seen it, and a lot of companies will admit that in the case of Log4j, for instance, the first challenge was: do we even use this? Where do we use it in our products? People often don't even know what they have in their product. And if you use any modern programming language, there's some packaging system with an import mechanism that will allow you, often dynamically, to just pull in a whole ecosystem of packages maintained by different people, and you don't necessarily know who they are. And this is a real problem: we don't even know what's in our product. ls -lR piped to grep doesn't work anymore, if it ever really did, for stuff that's compiled into jar files and the like. So getting there after the fact is really hard. And that's one of the reasons why one of the 10 different things we're hoping to have an impact on specifically calls out SBOMs, software bills of materials. It's not magic pixie dust, it's not a silver bullet, whatever metaphor you wanna use, but it is a key part of trying to understand what enterprises are running, and getting a picture of the risk of the software embodied inside of that. It's kind of like the ingredients on the back of a ketchup bottle: if you're allergic to paprika, you know that there is paprika in there, right? That's what SBOMs are intended to address. And so at OpenSSF, as you were saying, there's a lot of educational material, there are guides, but there are also tools that people can use, right? So, we've developed a software system called Scorecard, which attempts to analyze repositories and look for certain practices within those repositories. Things like: it looks through your testing scripts, are you doing fuzz testing? Now, what it can't really tell is whether you're meaningfully using fuzz testing, right?
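The "do we even use this?" question is exactly what an SBOM answers. A hedged sketch in Python: the document structure below is a hypothetical, CycloneDX-flavored minimal SBOM (real SBOMs are produced by tooling, not written by hand), and the query shows how an incident responder would check for an affected component.

```python
# Hypothetical minimal CycloneDX-style SBOM for one product.
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.13.0"},
    ],
}

def affected(sbom, name, bad_versions):
    """Answer 'do we use this, and is our version impacted?' from the SBOM."""
    return [c for c in sbom["components"]
            if c["name"] == name and c["version"] in bad_versions]

# On Log4Shell day, this query replaces grepping through jar files.
hits = affected(sbom, "log4j-core", {"2.14.1", "2.15.0"})
```

With SBOMs collected per product, the first day of an incident becomes a lookup rather than an archaeology project.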
Are you actually processing the very noisy results you'll get from fuzz testing? Have you actually adapted it to your inputs, and are you looking in the right places? But it can at least tell that your build system is invoking OSS-Fuzz or one of the other fuzzing tools, and if so, it raises your score a little bit. There are all sorts of other automated things you can check. There's another thing called dependency pinning, where what's considered a good practice, especially in certain ecosystems like NPM, is to fix a certain version of a dependency, so that a rogue update, caused by somebody who's stolen some credentials, doesn't take your website down on your next build, the way that happened to a lot of websites that used the colors.js module that got hijacked once. Or no, I think that one was actually turned into a protest, if I recall correctly. Anyway, that's kind of a complement to the Best Practices Badge, which is not an automated tool; it's more of a questionnaire for projects to fill out. And I think most Hyperledger projects have filled out the Best Practices Badge, and some of them are passing and some are still on their way to passing. It's not a perfect picture, but that's part of why I've been trying to... (I have my own microphone, so you don't have to worry about it.) And I've been trying to bring that up into Hyperledger, so that we are also good citizens in that space. Because a lot of this really is going to be about getting people in all these different projects, and there are so many open source projects out there, right, to do the right thing. Because it's not like OpenSSF alone is going to go fix all the problems all over the place. We just can't scale to that level, right?
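The dependency pinning point can be sketched with a toy version resolver. This is an illustrative Python sketch, and the package versions are hypothetical: a caret range floats to the newest compatible release, so a hijacked release gets pulled in automatically on the next build, while an exact pin does not.

```python
def resolve(spec, available):
    """Toy semver-ish resolver: '^1.4.0' floats within the same major
    version to the newest release; a bare '1.4.0' is an exact pin."""
    if spec.startswith("^"):
        major = spec[1:].split(".")[0]
        candidates = [v for v in available if v.split(".")[0] == major]
        return max(candidates, key=lambda v: tuple(map(int, v.split("."))))
    return spec  # exact pin: always the stated version

# Illustrative scenario: an attacker (or a protesting maintainer)
# publishes a rogue 1.4.44 of a widely used package.
published = ["1.4.0", "1.4.1", "1.4.44"]
floating = resolve("^1.4.0", published)  # silently picks up 1.4.44
pinned = resolve("1.4.0", published)     # stays on the vetted release
```

Pinning trades automatic bug fixes for predictability; the usual compromise is pinning plus a review step (lockfiles, update bots) rather than open-ended ranges.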
So it really has to have that kind of snowball effect, where we try to give people the right tools and the right education so they are sensitized to the problem, and they can go on and spread the good word and hopefully start doing the right thing. Right. I mean, one useful frame to think about this is: how do any of you decide what software to trust? Do you look for numbers of GitHub stars? Hopefully you at least look for a development team that's responding to comments made by users and cutting a release maybe at least once a quarter or something like that. But do you look for projects that have been third-party audited, right? I know that's something certain projects like Hyperledger take as a priority before something gets to a 1.0 release, right? But most open source projects actually don't even have the resources to pay for third-party audits. So one of the things we are asking ourselves, and putting some plans around, is: how might you help inform people when they're going out and looking for a Java logging framework or a distributed ledger framework, right? How do you compare apples to apples? Not only with the automated tooling like Scorecard, which has now been used to score a million different repos (I should have mentioned that), and the Best Practices Badge, which I think has hit 15,000 projects that have taken this kind of self-attested questionnaire, but also with other kinds of attributes, right? Security attributes, looked at in one place. Maybe even think about something like a credit score for open source projects, based on whatever objective data we can get about the integrity and the likely security. You can never guarantee bug-free code or have automated ways to find all bugs, but you could at least help inform people on what is likely to be the next Log4j, or what's the thing they simply want to look at a little more closely.
Yeah, I mean, on the question you were asking earlier, how do you trust certain software, a certain project: I can say that I was talking to some big agency in the US not that long ago, and they were telling us, oh, we do this analysis and we look at the GitHub profiles of people. And I was like, but do you know who's behind those profiles? And well, they didn't know. So this is the kind of stuff people do. Which is not unlike how it was, I mean, not to play the old-timer card, but when you and I (we're a similar age) first started using open source software, getting to know the people behind a project was a key part of trusting it. And that scales to a certain level, right? It scales to a couple dozen major things that you're using, but not to the thousands of dependencies that are now included inside of something like Kubernetes, which everybody then uses. So part of what we need to figure out is how we take these social cues that we look for, and the data we use to form a gut instinct, and turn that into something quantifiable. Something that serves not only to help the people consuming open source code, but also to help the developers understand: what can I do, both to appear to be improving the trustworthiness of my code, and to actually improve it? All right, well, we're out of time, but thank you very much. And by the way, for people who are interested, you can look it up: there is an OpenSSF Day tomorrow for those who are also registered for the Open Source Summit, right? Yes, lots of competing stuff is going on this week, so I didn't want to compete with you, but OpenSSF Day Europe is taking place tomorrow, nine to five. If you have a badge for OSS EU, then you can come to that. But also, all the videos from the one we did in Austin are on YouTube as well.
We just try to be very public about everything that's going on at OpenSSF, so please, at your leisure, come and join the party. Thank you. Thank you, Brian. And so to finish this, we're going to invite Hart Montgomery, who has a very interesting profile, being kind of in between all of this, because he's the CTO of Hyperledger and his background is as a security researcher. Hey, Arnaud, thanks for having me. So how do you see all these things? Maybe first on the security side, what we were talking about with Angelo and Dano: what do you think is going on and what's important? Well, I think there's a lot of stuff, and Angelo and Dano both made a lot of good points. Is there anything in particular you'd like me to address? No, but you have this enviable position of being in Hyperledger's CTO seat, where you kind of see all the different projects. Are there things that you see that maybe people should be more aware of or pay more attention to across the board, or that they are doing well? Do you see common threads? What's your view on this? Yeah, I mean, I think a common thread is that in many cases we aren't organized enough. We could have better protocols and better methods for, say, how we handle a CVE, right? We haven't really codified that process, and all projects sort of do that differently. Some do it better than others. But if we put these things down on paper, they might work a lot better. Same for the bug bounty program. I see Dave out there, who responds to a lot of those emails, but again, we'd like to broaden that. So maybe people don't know about the bug bounty program we have, so why don't you say a word about that? So we have a bug bounty program that's primarily focused on fabric. It offers various rewards for certain types of bugs, but we'd like to increase the rewards. We'd like to see more action on that. We have budget, and we're not spending all of our bug bounty budget.
So we'd like to find the equilibrium where we spend what we want to. You know, we can't do the Ethereum million-dollar critical bug, but you know. And historically it's been a bit of a challenge, right? We get a lot of things that are not very relevant, and it still takes time; you need to invest time into processing the submissions. That's what Dano said, right? He said that most of the time it wasn't very good, but occasionally you got a gem, and those gems made it all worth it. It's a good price to pay to get it, yeah. And so on the cryptography side, I mean, you've been working on Ursa, which is one of the Hyperledger projects. What's the status there? With Ursa in general? Ursa. Yeah, so Ursa is still widely used by Indy and Aries. It's also used by Iroha, and the Iroha team is adding some maintainers to Ursa to keep the interfaces where they want them. So yeah, in the beginning we sort of had a dream to modularize cryptography throughout Hyperledger. That's been sort of difficult. What's the difficulty? You know, it's always one of those things. It's like anything else where it's a short-term cost for a long-term gain, right? When you architected fabric, right, you all put in, you know, there are a lot of hard-coded calls in fabric. And, you know, as Angelo says, the goal, the dream, is modularity, but at some point you do have to deliver a product, right? And sometimes these things get shortcut, right? So there's always a balance, but it's hard, and it takes a lot of effort up front for a long-term gain. It can be hard to get people to see that. I see Angelo not being so... The programming language also becomes an issue, right?
That's been a challenge throughout: when we want to try to have a module like this that's used by everybody, all the different projects use different programming languages, and it makes it hard for something like Ursa to say, well, we're going to use this language and... Yeah, for Ursa in particular, it's mostly written in Rust, and calling Rust code from Go is still not in a great state, so that's unfortunate, but... So that's one of the barriers, right? Yeah, that's definitely one of the barriers. It's a barrier to Ursa; it's not necessarily a barrier to modularity, so... I can say, being part of the TSC, we get these reports from the projects, and quarter after quarter, when Ursa reports on its status, they have this call-out: come on, please tell us, what do we need to do to make it more useful to you? And unfortunately, it's a bit of a call into the void, right? Yeah, I mean, but in an ideal world, I would love to use fabric with BLS signatures. You have to tell Angelo. Oh, Angelo knows. You know, stuff like that, and that wouldn't necessarily require Ursa, right? Yeah, but, yep, please. Is there anything that wouldn't be improved by BLS signatures? You need to repeat the question. So, Dano asked: is there anything that wouldn't be improved by BLS signatures? So, if signature verification time is the bottleneck in your blockchain system, then that's a problem, because BLS verification requires a pairing, which is substantially more costly than ECDSA verification. But that's really the only downside. BLS signatures are smaller, and they allow more exciting functionality, right? Like, you essentially get threshold gates for free. So you get free threshold signatures, multi-signatures, all that stuff. There would be a way to avoid this: if we had an interface over an in-memory channel, an inter-process channel (that's actually the basis of the GIMP), what you do is you have a client-server kind of protocol, and you connect to the...
Yeah, you need to speak into the microphone, because people are going to listen to the recording and get frustrated that they couldn't hear the answer. No. I just want to say that one of the bases for crypto agility is to have this client-server approach. So, when you ask for a cryptographic primitive, you essentially connect to a server process in memory, so it should be very fast; the server will generate the cryptographic object and send it back. That would avoid any problem with languages, because the other process can be written in any language. They just have to both use the shared memory, and then that should solve the problem, right? Yeah, and that's a perfect solution too, because you can also switch protocols as well as languages, right? So, yeah, Angelo is describing basically the dream architecture. So, is Ursa going to attack all this? Well, this is not only... this is a both-sides thing, right? This is more of a, you know... And you could even do this in Go. I believe there are BLS implementations in Go. They're not Ursa, but they're still reasonable, so. And is Ursa used outside of Hyperledger? Yes. So that's successful from that point of view. I mean, all of these things, we never really know how often they're used, so. That's part of the open source status of things. But, okay, I think we're out of time, unfortunately, but thank you. Is there anything else, as CTO, you want to tell people? No, thank you for hosting. This is a very interesting discussion. Yes. Well, thank you for being here. Thanks, everybody in the audience, and thanks to Angelo, thanks to Dano, thanks to other people like Dave, who does a lot, a lot of work for security. All right, thank you. Let's go have a drink. Thank you.
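[Editor's note] The BLS trade-off discussed above (a costly pairing on verification, but aggregation "for free") can be written out. This is the standard BLS construction at a high level, with group and hash-to-curve details omitted: for generator $g$, secret key $sk$, and public key $pk = g^{sk}$, a signature on message $m$ is

$$\sigma = H(m)^{sk}, \qquad \text{verified by checking } e(\sigma, g) \stackrel{?}{=} e(H(m), pk),$$

where $e$ is the pairing, the operation that makes verification more expensive than ECDSA. Aggregation is just multiplication: given signatures $\sigma_i$ on messages $m_i$ under keys $pk_i$,

$$\sigma_{\mathrm{agg}} = \prod_i \sigma_i, \qquad e(\sigma_{\mathrm{agg}}, g) \stackrel{?}{=} \prod_i e(H(m_i), pk_i).$$

Because combining signatures is a single group multiplication, multi-signatures and threshold signatures fall out of the scheme essentially for free, as noted in the discussion.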