Good afternoon everyone, and thank you for being here today. It's a pleasure and an honour to present some of the work that was done earlier this year around the security code review of the Ursa library. My name is Pierre Roberge, I'm the CEO and President of the Digital Identity Laboratory of Canada, and presenting with me is Hart Montgomery, the CTO of Hyperledger. We will keep some time at the end for questions; we'll ask that you hold your questions until then, but we would love to hear them and be able to exchange.

So, the agenda for today. We'll do a brief introduction of who the Digital Identity Lab is: what's our business, what's our interest in this. We'll do a bit of level setting about Hyperledger and about Ursa, and then we'll go into the details: what's the genesis of the project, what was the engagement, the scope, what did we do? We'll talk about what we found, and at the end we'll go into more detail about some of the specific findings and some of the mitigation plans that were put in place.

In addition to today's presentation, which is recorded, there's also a blog available; I have a URL here. It's oriented more toward business people who want to know about the process, what was done and why we did it. There's also the full report. I don't remember how many pages it is; I think it might be a 40- or 50-page report. Everything we're going to talk about today is documented in the report, and here it is. We figured we would give you a big QR code, so if you want to snap this, you can go straight to the report. It's publicly available and meant to be a public good: the work was made possible by sponsors, and the idea was always to contribute back to the community. So here it is. Very good.

So who is the Digital Identity Laboratory of Canada? The Lab is a fairly new organization; we're just over two years old. We're a not-for-profit organization that exists to promote the adoption of digital identity. We don't do that by building more code, and we don't do that by selling solutions; we do it by providing arm's-length, technology-neutral assessment, and work around assessment, conformity and interoperability. We're based in Canada, but we have clients elsewhere: clients in the United States, clients in Europe, and we work with both public sector and private sector organizations. For those of you who are not identity geeks: identity is contextual, and identity evolves over time. In Canada our foundational identity is provided by the public sector, by the province where we're born, but the private sector also shapes our identity: am I properly insured for my car, am I a director of my company, and so on and so forth. So there is real influence from both the public and private sectors, and the Lab works with these organizations as a third, disinterested party, looking at the technology and the implementations, and making sure we have conformity and interoperability, so that we don't end up with 50 wallets on our mobile phones when digital ID is deployed at large scale. And just to repeat it: we're not an incubator; we do not build technology or sell technology.

I will turn it over now; I felt it would be more appropriate for Hart to talk about Hyperledger and Ursa, and maybe briefly Indy. Sure, absolutely, thanks Pierre, and hi everyone, I'm Hart. Many of you are probably familiar with the projects Aries, Indy and Ursa.
This is the Hyperledger digital identity stack. Indy is a distributed ledger; Aries (well, this isn't exactly right, but I'll call it the wallet layer) sits on top; and Ursa is a cryptographic library. All of these were forked out from the Indy project over time to give a more modular implementation. Should I hand it back to you, Pierre?

The only thing I will add to this slide (no coffee here, no disaster, okay) is that Ursa is the shared library between Hyperledger projects: we wanted all of the security and cryptographic implementations to be made in one place, reducing implementation risk, accelerating deployment, and so on and so forth. There are a number of reasons to do that, and that's where Ursa brings a lot of its value.

So what's the genesis of this project; why have we done this work? It really came from digital identity in Canada. We hear globally, and we've heard from a number of speakers today, that digital identity and the use of blockchain are happening around the globe, and it's happening in a big way in Canada. We have public and private sector organizations looking at deploying digital identity solutions at scale. I mentioned foundational identity, the fact that I'm Pierre Roberge and I'm a real person in Canada: it's the province or territory where you're born that gives you your foundational identity, and if you're an immigrant to the country, it's the federal government. And we also have the private sector, or academia, that will say I have a diploma, or I'm an engineer, or I'm properly insured, and so on and so forth. So that's where it came from.

As those organizations are looking to go into production, there's a due diligence process that needs to happen, and this is really the genesis of why we've done this. There was no arm's-length, independent public review of the software library. It's open source, and we all know it's there, but if nobody goes and looks, that's not that useful. If you think about some of the sponsors for the project (there were three provinces and one payment network), they have significant security policies and risk mitigation in place to ensure that before they go into production, the t's are crossed and the i's are dotted. So it was important for these organizations, and for making sure the project doesn't get stuck at the last minute, to have that security due diligence performed by a neutral party.

The spirit in which we did the work was trust but verify. You might have heard that phrase; it comes from Privacy by Design. Privacy by Design was created by Ann Cavoukian, a Canadian, and it is anchored in the majority of the digital ID deployments that we see in Canada. Trust but verify means we were not expecting to find a backdoor, we were not expecting to find a big problem, but it's not sufficient to just say "it's there, it should be okay." So we went in and had a look at the code, had a look at whether best practices were visible, and whether they were implemented properly, consistently, and so on.

One of the reasons that Hyperledger was adopted by public sector organizations in Canada is this idea of working in the open. The government is not working in a closed room where you have no clue what's happening; they don't want anybody to be able to say they're spying on their citizens.
They want to work in the open; the code is available. That was one of the great values of using Hyperledger technology. And in that same spirit, right at the beginning, when the work was started, the idea was to give the report back. It's a public good for the citizens of Canada, but in this idea of supporting the community and supporting deployment in other industries and other countries, it was important for our sponsors to make it public, and we're really happy to do that with you today.

Just briefly: why a security audit, and why is that important? For some of you it will be obvious, but maybe not for everybody. We do a security audit to mitigate risk ahead of time. The idea is really to find out whether there is a risk of fraud, disruption or losses, and we do that security audit before going into production. We try to have as clean a house as possible before we go. We might still find something in production, something might happen after, and there are other types of security tests we can perform on live systems, but the security audit is really meant as a check before we go. It is not a punishment from the business side to the technology side (it takes time: you need to produce documentation, you need to produce evidence); the spirit of this is really business continuity. We want to launch a service, we want a successful launch, and we want to be in production in three months, six months, five years from now. It's particularly important for digital identity. If we think about government, there's significant reputational risk: a government cannot launch something and pull it two weeks later because, whoops, we missed something and didn't do our homework. So it's particularly important, and that is one of the reasons the security audit was performed.

So what was the scope of this review? Because there are many ways we could have done this. I think about it really in two blocks. First, there was a security review. It's a shared library, so how are the entry points to the library done? How can we call those entry points, and how is the API structured? We looked at coding standards: are there coding standards, and were they applied? It's a community library, so are we all doing the work in the same, or at least a similar, manner, and is it consistently implemented? There was a review for coding and language issues. It happens: when you're used to working in the code, sometimes you just don't see a problem, and a fresh pair of eyes coming from the outside sometimes helps to identify it. We looked for logical flaws. And, as somebody mentioned earlier, there's the use of, and dependency on, third-party libraries and packages; that made the news last December. So let's learn from that: let's look at whether there are dependencies on third-party packages in the library, what kind of risk that creates, and whether there are known risks associated with them in the Ursa library. So that's security.

On the crypto side, we wanted to look at the cryptographic implementation. We wanted to look at the entropy: was it properly implemented? The random number generator, and so on.
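To give a flavour of that entropy check (this is an editor's illustration, not code from Ursa or from the review): key material should be drawn from the operating system's CSPRNG, never from a seedable or statistical generator. A minimal Rust sketch, assuming the `rand` crate; the helper name is hypothetical.

```rust
// Illustration only: the kind of entropy hygiene the review checked for.
// Assumes the `rand` crate (e.g. version 0.8).
use rand::rngs::OsRng;
use rand::RngCore;

// Hypothetical helper: fill a seed buffer from the OS CSPRNG.
fn generate_seed() -> [u8; 32] {
    let mut seed = [0u8; 32];
    // OsRng delegates to the operating system's CSPRNG (via getrandom),
    // which is what you want for key material.
    OsRng.fill_bytes(&mut seed);
    seed
}

fn main() {
    let seed = generate_seed();
    println!("{} bytes of CSPRNG output", seed.len());
}
```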
We also wanted to look at best practices: key lengths, curves, etc. The scope of the review was not to confirm the mathematical fundamentals of the crypto. The library uses known cryptographic algorithms; what we did was review how each algorithm is supposed to be implemented and whether the code properly implements that logic. So we are not saying whether a BBS+ signature is really solid, and we are not making a statement about the value of the crypto. We are saying that the implementation of the crypto follows the description of the standard. That's an important nuance. Very good.

So, the engagement and the scope. It was a four-month project (it took longer to paper the deal, just so you know, but it was a four-month project). We started with the scope definition and putting the team in place; I've described those now. Then it was a question of which version of the code we would use; that was pretty quick. Then came the examination, and from the examination we moved to validation and remediation. We did not do that in isolation: the Digital Identity Lab team worked with the Hyperledger team, and we also worked with the Ursa team, and I want to take ten seconds to thank them. Hart was part of that, and Mike Lodder, Cam Parra and Nathan George all participated in the review, the validation, and the remediation planning. So we do an examination, we find something, and we make sure we have agreement on the finding, on its severity and its scope. After that we can go into remediation: what is the plan, how will we address this issue, and when will we address it, if there is something to address? And then there's reporting, which is why we're here today. There has been a written report, but it's not sufficient to keep this to ourselves. We've heard a number of times today about working in the open and providing transparency; today is another example of being transparent about the great work that's being done here.

So, kind of good news: there were some findings. The worst scenario is that there are no findings at all, because then people start questioning whether the work was done properly. There were some minor deviations identified, and Hart will go into the details. So, good news, there were some findings; better news, we have remediation plans for all of those findings. We had funding to do the security review, but we didn't have funding to address bugs and issues independently. So the other worst-case scenario is that we find something significant and we don't have people able to address the problem. We ended up in the sweet spot: minor issues, with remediation plans, or actual fixes, for all of them, and Hart will talk about that.

Bottom line: the review concluded that it is sound to use the Ursa library in a production setting. There are always some good practices and good security hygiene that need to be applied to any project, and these were highlighted in the report, but there was no boogeyman and there was no backdoor.
There was no nightmare identified in the Ursa library; as expected, I should say. Now we're going to do a deeper dive on some of those findings.

Awesome, thanks a lot Pierre. And I guess now I will just stand at arm's length; sorry about that, everyone. So I'll go into a little bit more detail about some of the findings of the audit. This is interesting not only for Ursa but also in general, because it reveals a lot of the difficult trade-offs you have to make when thinking about security.

So the first point I have is that there are a lot of trade-offs, and security trade-offs can be tricky. Is everybody here familiar with digital signatures? Yeah, great. So suppose we have a signature scheme where users post some kind of verification keys on the blockchain, and I have two algorithm sets here. In algorithm set one, I give you a verification key validation algorithm (this "blockchain verify") and then a verify-signature algorithm. The idea is that you're supposed to call the key validation algorithm to check the key, and then you can verify the signature. In algorithm set two, we only give you one verify algorithm: it checks the key and then checks the signature, forcing you to do both at the same time. Does this make sense?

There are advantages and disadvantages to both approaches. Algorithm set one exposes all the functions. That's a minus, because users have to remember to call the key validation function, or else there could be potential issues. On the other hand, a smart user can avoid repeated calls to the key validation: if you're using the same public key over and over again, you don't necessarily want to have to validate it over and over again. Algorithm set two forces the verification key validation, so users don't have to remember to call it; but if you call that verify function on the same public key again and again and again, it's very inefficient.

This was a large class of things that came up in the security review as minor issues: Ursa often used the approach on the left. The point is that trade-offs like this are tough. It's often said that the approach on the left gives users a lot of rope to potentially hang themselves, but it also gives smart users a chance to write more efficient software. The security auditors often recommended the approach on the right, and we as a community went through many of these issues; in many cases we said, well, we actually want to use the left approach, because the efficiency losses from the right approach were too big.

One example: the security audit told us that the implementation does not check that a signature is an element of a prime-order subgroup. Basically, this is checking that something is well formed. But do we need to check it every time? Or do we check it once, let the programmer check it once, and then not waste efficiency continually checking it again? This was a theme.
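To make the two shapes concrete, here is a minimal Rust sketch of the trade-off. These are illustrative toy types, not Ursa's actual API; the names (`validate`, `verify`, `verify_checked`) and the stub checks are hypothetical stand-ins for real, expensive operations such as subgroup membership tests.

```rust
#[derive(Debug)]
pub enum VerifyError {
    InvalidKey,
    BadSignature,
}

pub struct PublicKey {
    bytes: Vec<u8>, // a real scheme would hold a curve point here
}

pub struct Signature {
    bytes: Vec<u8>,
}

impl PublicKey {
    /// Algorithm set one, call A: the expensive key validation
    /// (think "is this point in the prime-order subgroup?").
    pub fn validate(&self) -> Result<(), VerifyError> {
        if self.bytes.is_empty() {
            Err(VerifyError::InvalidKey)
        } else {
            Ok(())
        }
    }

    /// Algorithm set one, call B: signature verification only.
    /// Fast, but assumes the caller has already validated the key.
    pub fn verify(&self, msg: &[u8], sig: &Signature) -> Result<(), VerifyError> {
        if sig.bytes.len() == msg.len() {
            Ok(())
        } else {
            Err(VerifyError::BadSignature)
        }
    }

    /// Algorithm set two: one call that always does both.
    /// Safer by default, but repeats the key check on every verification.
    pub fn verify_checked(&self, msg: &[u8], sig: &Signature) -> Result<(), VerifyError> {
        self.validate()?;
        self.verify(msg, sig)
    }
}

fn main() {
    let pk = PublicKey { bytes: vec![1, 2, 3] };
    let sig = Signature { bytes: vec![0; 5] };

    // Set one: validate once, then verify many times cheaply.
    pk.validate().expect("key should be well formed");
    for _ in 0..3 {
        let _ = pk.verify(b"hello", &sig);
    }

    // Set two: one safe call, paying the key check every time.
    let _ = pk.verify_checked(b"hello", &sig);
}
```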
Again, a public key check: the BBS public key validator did not check the subgroup every time. So the question is, again, do you want to force this on the programmer, or do you want to give the programmer the freedom to decide whether they need to do it or not? And once again, the same kind of check for proofs. As the feedback says, this was considered when we were actually designing the system; we considered it, and we didn't implement it due to performance concerns, but it's something we could think about in the future. It's also been discussed that we could have multiple function calls: a safe function call, and an unsafe function call for people who want more performance. And yet again, the same idea.

Another really big thing for security is to keep your libraries updated. You want to make sure you're always using the latest stable release of everything, and it turns out we were a slight version behind on OpenSSL. So, implementation caution: always maintain the most recent versions of your libraries; you have to stay on top of things. I think Ursa was about a month behind on this one. Just stay on top of things.

And finally, carefully consider your threat model, and make sure that threat model is articulated to everyone. Who here is familiar with the concept of threat modeling? Most people? Awesome. So not only do you need to define your threat model, you also need to think about attacks that are in some users' threat models but not in others. What if I have a system where my threat model doesn't include an attack, and Pierre has a system where he needs to be secure against it, and what if the fix is computationally expensive? This is something that happened with malicious keys in Identity Mixer. The basic idea is that the spec of Identity Mixer did not originally call for a certain type of public key verification. So if you had an issuer that was malicious at key generation, they could presumably break unlinkability. Now, some people have said that if your issuer is malicious at key generation, that's out of scope; that's not a reasonable thing to consider, because why are you getting credentials from people you think were malicious at the time of key generation? Others have said we should really handle everything, and we should give people the opportunity to handle this if they want. So we're in the process of building a public key proof to fix this. It is quite expensive, but then again, it will be optional; it will definitely be in that left-hand model we considered.
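To give a flavour of what such a key-correctness proof can look like, here is a hedged, minimal sketch of a Schnorr-style proof of possession: the issuer proves knowledge of the secret key behind a published public key. It is written over the Ristretto group with the `curve25519-dalek` (3.x) and `sha2` (0.9) crates purely for illustration; the actual Identity Mixer fix lives in a different algebraic setting, and all names here are hypothetical.

```rust
// Sketch only: Schnorr proof of possession of a secret key.
// Assumed crates: curve25519-dalek = "3", sha2 = "0.9",
// rand_core = { version = "0.5", features = ["getrandom"] }.
use curve25519_dalek::constants::RISTRETTO_BASEPOINT_POINT as G;
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
use rand_core::OsRng;
use sha2::Sha512;

pub struct KeyProof {
    commitment: RistrettoPoint, // R = r * G
    response: Scalar,           // s = r + c * sk
}

// Fiat-Shamir challenge c = H(G || pk || R), mapped to a scalar.
fn challenge(pk: &RistrettoPoint, commitment: &RistrettoPoint) -> Scalar {
    let mut bytes = Vec::new();
    bytes.extend_from_slice(G.compress().as_bytes());
    bytes.extend_from_slice(pk.compress().as_bytes());
    bytes.extend_from_slice(commitment.compress().as_bytes());
    Scalar::hash_from_bytes::<Sha512>(&bytes)
}

// Issuer side: prove knowledge of sk such that pk = sk * G.
pub fn prove(sk: &Scalar, pk: &RistrettoPoint) -> KeyProof {
    let r = Scalar::random(&mut OsRng);
    let commitment = r * G;
    let c = challenge(pk, &commitment);
    KeyProof { commitment, response: r + c * sk }
}

// Verifier side: check s * G == R + c * pk.
pub fn verify(pk: &RistrettoPoint, proof: &KeyProof) -> bool {
    let c = challenge(pk, &proof.commitment);
    proof.response * G == proof.commitment + c * pk
}

fn main() {
    let sk = Scalar::random(&mut OsRng);
    let pk = sk * G;
    let proof = prove(&sk, &pk);
    assert!(verify(&pk, &proof));
    println!("issuer key proof verifies");
}
```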
So yeah, I'll give it back to Pierre. That was the bulk of the defects raised in the security review.

Okay, all right. We talked about threat models; I'm going to go back here for a second. Anyway, we need to look at the threat model, but we also need to look at what we're dealing with here. The Ursa library is just that: a library. It's not a finished product. So when we do the code review and we want to make sure the library is properly implemented, we also need to look at how it's being used and how it's being consumed in other applications, because that is where you're able to make those trade-offs between performance and security. And particularly on that last point, we also need to think about community, acceptance and deployment: a bad actor with ill intent could negatively impact others, and that is the reason we need to highlight these items and be aware of them. It's part of your risk mitigation. Should it prevent you from going into production? Your organization's risk profile will dictate that, but it's important to be aware.

I want to take the time to identify the contributors to the report, because it's one thing to publish a report, and it's usually a good thing to know who's behind it. Neil Kettle, Matthew Barker, Justin Gage, Lish Rory Francis, Bruce Daly and myself were the primary team that worked on the report. If you have any questions about Ursa, this is the Discord server; there's a QR code you can scan to connect and ask your questions after today. We'll be taking your questions now, but that's how to reach us afterwards if you want to connect. And that's the report here. We finished a bit faster than we expected, which is great: it gives some time for questions from everybody.

So, the question I have is this: you specified that on the left-hand side two separate functions operate separately, and on the right-hand side they are combined together, right? But the right-hand-side function could take a flag with a default value, so that both things are executed together by default, and if somebody wants to override that, they can pass the flag, so the check would not happen and only the other function would run. So the best of both worlds could actually be provided.

Yeah, something like that is definitely a good idea. For most of our use cases, or at least for the people I talked to about use cases, we couldn't find anyone who didn't lose a lot of efficiency by doing the right-hand side, so we weren't really seeing anyone who wanted that, and it was hard to motivate people to build something like that when there wasn't a lot of need or desire for it. But it's a great point, and in the ideal world you would have something like that: a safe default, with a flag where you can say, hey, don't actually do this check, I've done this check before. Great question.
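For what it's worth, here is a sketch of how the questioner's suggestion could look in Rust. Rust has no default arguments, so an enum whose `Default` is the safe choice plays the role of the flag. This reuses the toy `PublicKey`, `Signature` and `VerifyError` types from the earlier sketch, and again it is hypothetical, not Ursa's API.

```rust
/// Whether the combined entry point should re-validate the public key.
pub enum KeyCheck {
    /// Safe default: run the expensive well-formedness/subgroup check.
    Enforce,
    /// Opt-out for callers who have already validated this exact key.
    Skip,
}

impl Default for KeyCheck {
    fn default() -> Self {
        KeyCheck::Enforce
    }
}

/// One entry point; the flag defaults to the safe behavior.
pub fn verify_with(
    pk: &PublicKey,
    msg: &[u8],
    sig: &Signature,
    check: KeyCheck,
) -> Result<(), VerifyError> {
    if matches!(check, KeyCheck::Enforce) {
        pk.validate()?; // the costly check from algorithm set one
    }
    pk.verify(msg, sig)
}

// Usage: verify_with(&pk, msg, &sig, KeyCheck::default()) takes the safe path;
// verify_with(&pk, msg, &sig, KeyCheck::Skip) is the explicit fast path.
```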
So, what does Ursa use OpenSSL for? A handful of primitives is the short answer. I could start trying to name them, but then I would probably inevitably get one wrong, and there are some that are used only for testing; does that count? Sorry for the lack of a totally complete answer. If you wanted to, you could dig down through everything.

Any more questions? Yes: my question is, for the review, besides the specific issues that were found, were there any best-practice issues found, with things like pull requests and managing code?

That's a great question. The security review focused entirely on the code, and not on things like coding practices. That is something we've actually focused on in Hyperledger as a whole: secure open source coding practices. We could have a whole talk on this, and you're probably familiar with the OpenSSF, a whole Linux Foundation project that is entirely focused on open source security best practices. I believe they have a workshop day on Wednesday, and they're giving a ton of talks if you're sticking around for OSSEU, so I would definitely look into that. I talked a little bit about this yesterday in the keynote, if you were at the Member Summit, and it will come up more: Arnaud Le Hors is leading a security discussion later today, and he will be interrogating Brian Behlendorf, who now heads the OpenSSF, so he can answer all of your questions about that. The TL;DR is that, yes, in Ursa and in all Hyperledger projects, we try to follow the best open source coding and software practices.

So yes, there's the library, and there's how you use the library: because you compile it, is the machine where you're compiling it clean, and is the system where you're running it clean? Some of those items (I'm not sure if they're best practices so much as things to be aware of, but I guess they are best practices) have been identified in the report, and you can see the details in there. Release management and all of this stuff is a very important topic. Did that answer your question?

Are there any other questions? If not, thank you all for your time, and please feel free to talk to us afterwards if you have more questions or comments. Thank you all for coming.