Today, we're talking about the state of coordinated vulnerability disclosure in Canada's federal government. As mentioned, I work at the Ryerson Leadership Lab and the Cybersecurity Policy Exchange, which are a set of think tanks at Ryerson University. We're also very lucky to have a professor from the University of Ottawa's Faculty of Law here with us today to answer questions and offer insight. We're standing on the shoulders of giants: there are many people whose work we're building off of, and I'm very honored just to be here in this space. I love hackers. I'm kind of a hacker myself in some ways, and it's just really great to be here. We'll toggle back and forth as we share our knowledge with you today. So I'd like to start with a little story, hoping my slides work. Yes, it's a tale of good faith Dutch hackers. In March 2008, university researchers in the Netherlands realized that the RFID smart cards being used in government buildings around the globe, and being rolled out for use in transit across the Netherlands, relied on a random number generator for encryption that was not random at all. I included this little meme of an old man saying, "hey, we need a random number generator," and the guy in the corner says, "seven." I feel like this is exactly how that random number generator was decided upon by the people who made that smart card. So the hackers decided to inform the chipmaker NXP of the flaws that they found, as well as the Dutch government agencies involved.
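To illustrate why a predictable "random" number generator is fatal for encryption, here's a minimal Python sketch. This is not the actual card's cipher, just a toy stream cipher, and the constant seed of 7 is a nod to the meme: anyone who knows or guesses how the generator is seeded can regenerate the entire keystream and decrypt without ever seeing a key.

```python
import random

def weak_keystream(n, seed=7):
    """A 'random' keystream that is fully predictable: the seed is a
    hard-coded constant, so anyone who knows (or guesses) it can
    regenerate the exact same bytes."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def xor_cipher(data, keystream):
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream))

secret = b"card balance: $50"
ciphertext = xor_cipher(secret, weak_keystream(len(secret)))

# An attacker who reverse-engineers the generator recovers the
# plaintext without any key material:
recovered = xor_cipher(ciphertext, weak_keystream(len(ciphertext)))
assert recovered == secret
```

The point is not the XOR construction itself but the seeding: once the "randomness" is predictable, every security property built on top of it collapses.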
They said they wanted to release those findings at an academic conference later that year, and they wanted to give NXP six months to patch those flaws, which, interestingly enough, the security intelligence agency in the Netherlands found reasonable. But in June 2008, NXP decided to sue the researchers. What did they want? A restraining order against the research publication, despite the university researchers giving them six months to patch. What happened was that in July, the Dutch court rejected the request to suppress the information, which is a really significant case. How did the court come to that decision? They decided that the algorithm in question was not copyrighted and was never going to be made public by the researchers, and they didn't see evidence of criminal intent on the part of the researchers. Therefore, the justification for limiting these researchers' freedom of expression hadn't met the threshold needed. Why is this significant? Well, we know that when you find vulnerabilities and disclose them, you can face significant legal repercussions for doing so. This is true across North America. We all know the Computer Fraud and Abuse Act, how overbroad it is, how it reaches into the corners of hacker research and has a chilling effect on that research. So the question we wanted to answer is: what is the Canadian government doing about this? This is such a significant topic, and it builds off my own work on bug bounty programs. What we have found in our work is that, just as you can approach security with a mindset of obscurity, believing that because you hide something it's therefore secure, policy in Canada takes a similar approach.
So in Canada, what we've uncovered is that policymakers think that hiding how they handle things makes them secure. By and large, we see three problems. First, Canadian federal government agencies don't appear to have guidelines or processes for vulnerability disclosure. So if you find a vulnerability in a website of Immigration Canada, or a flaw in some public-facing system of the government, there is no clear way to disclose it. Second, hackers can still face significant legal risk in Canada when they discover and disclose those flaws. Third, there isn't transparency as to how vulnerability information is treated. You can disclose your vulnerabilities to an agency that we'll tell you about in a few minutes, and it appears right now that those vulnerabilities can be held for potentially offensive purposes rather than being patched, which is a significant issue. So I'll pass it over to my colleague. Thank you, Yuan. Let's clarify what we mean when we say coordinated vulnerability disclosure. Vulnerability disclosure is when information about a vulnerability is provided to a party that's likely unaware of it. There are varying options when it comes to disclosure. Non-disclosure is when all vulnerability information is kept private between the reporter and the system provider, so the vulnerability is never made public. Public disclosure can be done fully or partially, where some or all of the information known about the vulnerability is disclosed publicly before a patch has been made to fix it. What makes coordinated vulnerability disclosure different from just vulnerability disclosure is that it allows a middle ground to be reached between non-disclosure and full disclosure, where the finder and the system provider come to an agreement that the system provider will provide a patch.
Until that patch is deployed, or until an agreed-upon time has passed, the finder will keep the vulnerability information private. Once the vulnerability has been addressed, the finder can publish information regarding the vulnerability that they found. Coordinated vulnerability disclosure, or CVD, encompasses principles such as reducing harm, presuming benevolence of security researchers, avoiding surprises on behalf of all parties, incentivizing ethical desired behavior, improving disclosure processes, and recognizing that vulnerability disclosure is a wicked problem: it isn't something with straightforward and perfect solutions, because it's a multifaceted issue. Now, you've probably heard us repeatedly say the term finder or discloser, and that's one of the key actors when it comes to CVD: the person who is reporting the vulnerability that they found. There's also the system provider, who creates or maintains the product that is vulnerable; the deployer, who's in charge of deploying the patch or handling other remediation efforts; and the coordinator, who facilitates the response process. In the interest of time, I'm not going to delve into each of the CVD phases, but as you can see here, there are different stages of the CVD process, from discovery up until public disclosure. In terms of the risks of implementing CVD programs: first and foremost, the organization needs to already have strong cybersecurity infrastructure. Otherwise, they'll be inundated with many duplicate reports, which can in turn make them unable to provide consistent and adequate communication with all of the reporters, subsequently leading to frustrations. There's also the risk that information will leak to the public before a patch has been deployed, and the risk of the vulnerability being exploited.
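The core CVD agreement described above, keep the flaw private until it's patched or an agreed window elapses, can be sketched in a few lines of Python. This is a hypothetical model for illustration, not any official framework; the phase names follow the stages mentioned in the talk, and the 180-day default is just an echo of the six-month window from the Dutch case.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto

class Phase(Enum):
    """Stages of the CVD process, from discovery to public disclosure."""
    DISCOVERY = auto()
    REPORTING = auto()
    VALIDATION = auto()
    REMEDIATION = auto()
    PUBLIC_DISCLOSURE = auto()

@dataclass
class CvdCase:
    reported_on: date
    embargo_days: int = 180        # illustrative: the six-month window from the Dutch case
    patched: bool = False
    phase: Phase = Phase.REPORTING

    def may_publish(self, today: date) -> bool:
        """The finder may publish once the flaw is patched OR the agreed
        embargo window has elapsed, whichever comes first."""
        embargo_over = today >= self.reported_on + timedelta(days=self.embargo_days)
        return self.patched or embargo_over
```

For example, a case reported on 2008-03-01 with the default window could not be published in June 2008, but could be by September 2008, or immediately once `patched` is set.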
There's also the risk that organizations and companies could use their CVD policies as hype or marketing to simply improve their reputations, rather than having an interest in actually improving their vulnerability disclosure processes. There's also a whole host of benefits to CVD, including the fact that software is always going to contain flaws which may be missed during the development and testing phases. CVD policies also provide legal clarification, they help build hacker goodwill and trust, and they make the triage and repair processes much clearer for all parties involved. Okay, over to you, Yuan. Thanks so much, Steph. Yeah, there's so much good that can come out of coordinated vulnerability disclosure programs. And there's another thing I might add to the risks related to vulnerability disclosure, one that comes from the external environment: the labor implications of paying people to find flaws. You treat hackers as workers, but maybe they don't have the rights of workers. That's something the report I've worked on, coming out this year, looks at in more depth. In terms of vulnerability disclosure generally, though, we wanted to identify some of the best practices that are emerging. We uncovered in our research that the Netherlands and the US seem to be leading the way in this area. What we've discovered is that the Dutch approach to coordinated vulnerability disclosure is really marked by guidelines rather than legally binding regulation or laws, and by the need to consider good faith, meaning the intention of the hacker in disclosure.
What we've found is that the National Cyber Security Centre in the Netherlands, which operates out of the Ministry of Justice there, encourages but does not require that organizations, and presumably agencies, have coordinated vulnerability disclosure policies. You'll see in a second how this stands in contrast to the case in the US. The National Cyber Security Centre also acts as an intermediary if an organization's response is inadequate. So let's say you disclose a flaw you find in Microsoft systems, say it's found in Outlook, and the government uses Outlook for email. If Microsoft doesn't respond in a way that is appropriate, then the National Cyber Security Centre in the Netherlands could act as an intermediary to help make sure that vulnerability is patched. What's also really important is that through these guidelines, and through a policy letter, they've clarified the expectations and obligations for all parties. If you check out this coordinated vulnerability disclosure guideline, which we've analyzed in the course of our research, you'll see that the National Cyber Security Centre in the Netherlands actually sets out what they expect of hackers. They say: don't do social engineering; don't engage in distributed denial-of-service attacks. On the other hand, they also say what they will be obligated to do because of their promises to hackers. They promise to respond within certain amounts of time. They promise to patch the flaw. They promise to work with the hacker who discloses the flaw to decide when and how to publish the information that's been disclosed. In terms of law, before the prosecutor in the Netherlands lays criminal charges, they must consider intention, proportionality, and what's called the subsidiarity principle.
So they consider what the hacker intended to do. They consider the impacts of the disclosure in comparison to the harm, for example. And they also consider whether the hacker disclosed to the most immediately affected institution or entity without disclosing to a broader group of people; basically, did they disclose only to the people who are affected by the system? Let's contrast this with the US federal approach, where what we see, in contrast to the Dutch approach, really is regulation. It's more heavy-handed. There are requirements around having vulnerability disclosure policies, and there's somewhat of a reduction of legal risk. But as you can see, it's very different from what is occurring in the Netherlands when it comes to vulnerability disclosure. As of March 1st, all federal agencies have been required to run their own vulnerability disclosure programs and policies, which is hugely significant. Since 2016, federal agencies in the US had started implementing their own vulnerability disclosure policies on an ad hoc basis, including paid programs. But the fact that all federal agencies are now required by the Cybersecurity and Infrastructure Security Agency to have those VDPs is really significant. Questions have also been raised around whether these agencies have adequate personnel and resources to handle that, but this is the US approach to vulnerability disclosure at the federal level. In 2017, as well, the US Department of Justice stated that the existence of a vulnerability disclosure policy substantially reduces the likelihood of being found liable under the Computer Fraud and Abuse Act.
There was also a guideline released in 2016 saying that if you circumvented the security measures of a set of copyrighted materials while engaging in hacking, you could potentially be exempt from the applicability of the DMCA, the Digital Millennium Copyright Act. But it remains very true, and here I build on the work of Amit Elazari, that significant legal risks remain in the US context. I also want to show you what we've uncovered in our work on how law applies to hacking in Canada. We're building on the research of others who've written about this topic as well. I don't want to go through each row of this slide; feel free to screenshot it, and I encourage you to keep your eyes peeled for a report we'll be publishing in the next month or so on this topic, which will lay this out in more depth. But as you can see, when you engage in hacking, including security or penetration testing, this can invoke aspects of the Criminal Code related to unauthorized use of a computer, fraud, mischief, or willfully destroying or damaging property; one case actually labeled what had happened as causing chaos. There's also willful interception of private communications, and other activity involving the data on a computer can trigger provisions around unauthorized use of computer data. If you engage in social engineering, that could be seen as identity theft or fraud, and possessing, or even using, devices made for hacking can be seen as a contravention of the Criminal Code. And then you also have the Copyright Act, which is applicable when you are circumventing security measures, including decryption, for copyrighted materials.
We haven't yet fully finished researching the implications of privacy laws, civil law, including data protection law, and even elections law as they pertain to hacking in Canada, all of which make this activity quite legally risky. Finally, for this part of the talk on legal risks, I want to build on the work of the University of Ottawa law professor here with us, who found that there are 40-plus laws across Canada that protect whistleblowers. The reason I bring this up is that you look at Edward Snowden and you think: what law could have protected him in the acts that he engaged in? He was coming from the inside, not of the government itself but of a contractor, and you'd think that whistleblower protection law would apply. The TLDR is that whistleblower protection law doesn't really apply to a person who's going to disclose a flaw found in federal government systems. There are a few reasons. Only a few of these laws can protect security researchers in the first place, and several criteria have to be met. The person would have to disclose an issue that violates a law, which is a pretty high threshold. It's not the potential that a law is violated; it's not even that a person's data could be stolen; it's that a law has to have been violated or would be violated. The person also has to be an employee or contractor of the organization, which limits who can disclose from the outside. And the disclosure has to be made to a higher-level officer or a specific governmental agency, which means you can't disclose to the public, except when there's an immediate risk to public health, safety, or the environment, and the time constraints involved prevent the use of regular internal mechanisms for disclosure. Also notable is that data protection law, or privacy law, depending on how you label it, can provide some measure of whistleblower protection for breaches of data.
But this is only the case in certain provinces, excluding Quebec, and that's a really significant limitation as well. I'll pass it over to Steph for this next little bit. Thanks, Yuan. In terms of Canada's approach to coordinated vulnerability disclosure, there's currently no clear policy framework in Canada regarding security research. At present, if you find any vulnerabilities in systems owned by the Canadian federal government, there is no straightforward path for you to disclose them, nor any clarity as to how the vulnerability would be remedied. Typically, computer emergency readiness teams and computer security incident response teams, otherwise known as CERTs and CSIRTs, are the groups that help handle cybersecurity incidents and vulnerabilities. Canada's official national CERT used to be separate from the government, but it has now been absorbed by the Canadian Centre for Cyber Security, aka the CCCS, which is managed by the Communications Security Establishment. Now, the CCCS's website allows you to report cyber incidents, but not necessarily vulnerabilities. Looking at the definition of cyber incident, it appears to only involve wrongdoing or the potential for wrongdoing. Because the discovery of a vulnerability is merely the discovery of a condition which may give rise to a cyber incident, it can be interpreted that the CCCS does not actually facilitate the disclosure of vulnerabilities per se. Something interesting is that after revisiting archived versions of the CCCS's website, we discovered that they made a major update just last week. Before this update, there was simply a contact page with a general email address for sending reports. Now the website's reporting mechanism is a bit more streamlined: it guides people to different reporting processes based on the type of actor that they are. Even so, the fundamental aspects of coordinated vulnerability disclosure are missing.
The CCCS still does not promise that vulnerabilities reported to them will be disclosed to the impacted agencies or system providers, nor that they will work on mitigating those vulnerabilities. This is a concern because Canada's handling of vulnerabilities is quite secretive, especially compared to its global peers like the US, UK, and Australia. Unlike those countries, Canada's process for deciding whether to disclose or withhold vulnerability information for national defensive and offensive use is not known to the public. This procedure is called different things: in the US, they call it the vulnerabilities equities process; in the UK, they call it the equities process. Regardless, all that we know is that the CSE has a framework for this decision-making process that involves experts from the CCCS, but otherwise very little is known about how they decide whether to disclose or withhold vulnerability information. Moving on, here are our tips for vulnerability discovery and disclosure. Please take note that this is a gray area, so what we are saying is not legal advice. When discovering a vulnerability, disclose that information to the organization or system provider that owns or manages the software, and/or the government agency that is involved with the product. Also understand the rules of the game, taking into account what types of activity are allowed and what the legal responsibilities are. Do absolutely only what is needed to demonstrate the vulnerability's existence. And lastly, do not publicly disclose the vulnerability information until after the agreed-upon timeframe or after the vulnerability has been remediated. Turning it back to you, Yuan. Yeah, thank you. You've highlighted the deficits in Canada's approach to vulnerability disclosure. It's nice to have some tips, but again, we cannot provide legal advice, for obvious reasons, and we have to say that.
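The tips above can be summed up as a hypothetical checklist of what a disclosure report might contain. Every field name here is an illustrative assumption, not any agency's actual format, and the 90-day figure is only a commonly cited baseline, not a rule.

```python
# Hypothetical sketch of a disclosure report, reflecting the tips above:
# report to the owner or responsible agency, stay in scope, demonstrate
# rather than exploit, and agree on a publication timeline.
# All field names and values are illustrative, not an official format.
report = {
    "recipient": "security contact of the system owner or responsible agency",
    "system": "the public-facing system where the flaw was found",
    "description": "what the vulnerability is and its potential impact",
    "reproduction_steps": "the minimum needed to demonstrate existence",
    "data_accessed": "none beyond what was required to confirm the flaw",
    "proposed_embargo_days": 90,  # negotiable; publish only after patch or deadline
    "finder_contact": "how to reach you to coordinate remediation",
}
```

Keeping a record like this of exactly what you did and when also matters for the legal gray area discussed earlier: it documents that you did only what was needed to demonstrate the flaw.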
We also want to give you a sneak peek at what we're recommending for the government of Canada. For context on why we're doing this research: this is a topic I'm extremely interested in, and we were able to get some funding from the Department of National Defence, which was also interested in this topic, particularly vulnerability disclosure and how the government handles it. What that means is that we're going to be making recommendations to the government of Canada. Of course, these recommendations might fall on deaf ears, but we'll be making them nonetheless, and we hope that these changes are made. First of all, we think that Canada's federal agencies need vulnerability disclosure procedures that follow best practices. Best practices include deciding who's eligible and making that clear. Some US vulnerability disclosure programs have actually limited who could participate based on citizenship and on whether participants had received security clearance. If that's the case, then it's extremely important to be transparent about it, and we'll be diving into whether that is the norm in our report. It's also really important to keep disclosed vulnerabilities separate from the quote-unquote equities management framework. What that means is that if you disclose a vulnerability to the Canadian federal government, it should not be the case that a branch of the intelligence agency is able to withhold that information for potential offensive use. There are a few reasons for that.
The most obvious is that if a person knows of a vulnerability and the government decides to withhold it, that actually puts government systems at more risk than if they were to patch it. This is the case in the US and in the UK: those are jurisdictions where they explicitly say that if you disclose vulnerabilities to them, those vulnerabilities won't be part of the framework under which a military or intelligence agency can withhold the information for its own purposes. We also believe it's really important to disclose information about the vulnerability to the public once it's been repaired, again for obvious reasons. This is an emerging best practice that we think Canada needs to learn from. It's also clear that Canada needs a legal policy framework for vulnerability disclosure, just as exists in the Netherlands and the US. It's really important that there be clarity around when the anti-hacking law applies in Canada, and when consent can place your activity outside the scope of liability, so that you aren't guilty of an offence simply because you did something that constitutes hacking activity. So here are those best practices, as mentioned. We think it's really important to be clear around eligibility and the submission and verification process. I've heard in the course of my research that you can submit a flaw, and the entity will say, oh, this is actually not a flaw, but then they decide to patch it anyway. If that's the case, well, organizations should not be doing that, and they need to be clear about how they decide to validate the flaws in the reports they receive. There also should be restrictions and clear expectations around what can be hacked on, in terms of scope, and what is allowable in terms of hacking activity.
There's a list of things you cannot do when you engage in vulnerability disclosure processes in the UK, the Netherlands, and the US: for example, you can't engage in social engineering, and you can't engage in DDoS attacks. It's also an emerging best practice to give credit and recognition to the people who disclose flaws, because that incentivizes them to do so. Of course, there are risks around creating a sort of market for flaws when you pay people, and that's something to consider along with the risks that come with it. And then, of course, public awareness is a really important best practice that has emerged in the course of our work. So please keep your eyes peeled for our report, and we're happy to take your feedback. You can reach us at the emails below, you can find us online, and we're excited to answer your questions in the Q&A period. Thank you for your time, everyone.