Our speaker will tell you about the status of coordinated vulnerability disclosure worldwide, with a special focus on the Netherlands. He has several years of experience working in this process, and he will tell you all about it. Jeroen van der Ham is a researcher at the National Cyber Security Centre in the Netherlands. He is organizing Science4SHA for this festival, so maybe we can give him a short round of applause just for that. His interests include ethics and coordinated vulnerability disclosure. And if you meet him for a beer, please ask him about the performance of the Dutch postal services leading up to this event; he has some great stories about that. But for now, please give it up for Jeroen, who is going to talk about coordinated vulnerability disclosure.

Thank you, thank you. So yes, my name is Jeroen van der Ham. I work for the government, and I'm actually here to help: to tell you how the government can help you. Yes, I'm really serious. So first off, who here has disclosed security vulnerabilities? There's a few hands there. And who's from the Netherlands? There's a difference between those two sets, and that affects you. I'll be talking about how we've been doing vulnerability disclosure policies in the Netherlands, and I also have some good news about international developments.

So first off, we all know that everybody has vulnerabilities. The important message that I'm trying to convey is that what matters most is how you're able to receive them and act on them, because that is what affects most of the security in the process. So we have vulnerability disclosure. The most common term for the process these days is coordinated vulnerability disclosure. The idea is that you work with vendors to fix vulnerabilities before they are disclosed. There are a couple of phases.
You report, there's triage, there's development of the fix, and then there's deployment of the fix. It's important for organizations to realize that there are these four phases, but it's also important for disclosers to realize it. And the coordinated disclosure itself is agreed before this whole process.

There has been disagreement, there has been debate, and there has been some agreement on disclosure of bugs over the last 30 years — we've been doing this for that long. First there was full disclosure, then there was discussion, and now we're at responsible, or coordinated, disclosure. We're seeing a trend towards "coordinated" because it puts less of a value judgment on the behavior of the discloser. And of course there's a lot of discussion about what kind of information goes to whom, and when. There's also a lot of discussion about how long to wait for the vendor to actually fix things. These are all still open questions, unfortunately, and I think the debate just has to evolve so that we can actually find answers.

Governments didn't pay much attention to vulnerability disclosure in those 30 years, until everything became a connected computer. In the last few years we're seeing even more developments, now in transportation, healthcare, and manufacturing: we're seeing self-driving cars, pacemakers that are vulnerable, lots of ICS equipment that is completely vulnerable. You're also seeing some kind of liability for software systems, or at least there is discussion about it. And as for examples: there have even been disclosures on toilets, I'm serious. There have been national and international efforts to discuss vulnerability disclosure and how it affects disclosers, and I'm happy to report that there have been many international initiatives and guidelines.
The first one, on the left, is from 2013: the NCSC in the Netherlands published a guideline for how to create and publish a vulnerability disclosure program — still called a responsible disclosure program at the time. It included a way for hackers to act, to do their research, to figure out what the vulnerability was, and to report it to the vendor or the affected organization in such a way that they would not be prosecuted, either by the organization or even by the government. There's also a standard published by the International Organization for Standardization, ISO 29147, and it's actually available for free — just at the beginning of last year it was made available for free. It doesn't say much about prosecution, but it at least describes the process of what you should do when you receive a vulnerability report. The other standard, ISO 30111, describes what the internal process should look like when you're fixing a security vulnerability. That one is unfortunately not free, but it's mostly aimed at organizations and not at disclosers. There was also an initiative at the Global Conference on CyberSpace, the GCCS, in 2015; we helped prepare a document there describing all the issues around vulnerability disclosure. The European Network and Information Security Agency, ENISA, also looked into these issues. They identified this as a very important topic, and they published a statement describing all of the ongoing efforts and saying that this is actually a good thing. So that's mostly European. Over on the other side of the pond there's the NTIA — the same department that was involved in governing the Internet, the IANA function — but another part of the NTIA was actually talking to people. They ran a multi-stakeholder process.
They invited security researchers, organizations, bug bounty companies, government — everybody — and they talked about how to do vulnerability disclosure and what the challenges were. They published four different documents on how to do vulnerability disclosure. They did research on what kinds of people security researchers are and what attitudes they have — a really excellent study; I have a graph at the end of the presentation to show you the results. They looked at companies that did vulnerability disclosure. And there was a fourth one — oh yes, they also described the more complex process of coordination between several different organizations that are affected by the same security vulnerability and have to coordinate so they all fix it at the same time. And just in time for my presentation here — you wouldn't expect it — the US Department of Justice published a framework document just last week on how to do vulnerability disclosure. They even included statements on how you can frame this: in the same way that the Netherlands created a sort of legal safe harbor so that companies can make sure disclosers are not prosecuted, the Department of Justice said, this is what you can put in your policy to make sure that security researchers are safe from prosecution. I think that is really an excellent development.

Since the introduction in 2013 in the Netherlands, it has been a huge success. This is just a small selection of companies in the Netherlands that have published a responsible disclosure guideline, and it ranges from security companies like Fox-IT and ISPs like XS4ALL to DIY stores, banks, municipalities, water companies, news blogs — any kind of organization is represented and actually has a responsible disclosure program.
These companies are actually very positive about this development. They even went so far as to come together themselves to express their enthusiasm for coordinated vulnerability disclosure: they wrote a manifesto saying "we, these companies, support the idea of vulnerability disclosure and want to promote it", and they got on stage during the EU presidency of the Netherlands last year to tell the world that they were supporting this idea. Meanwhile, we also have another big development. Vulnerability disclosure programs have become so successful and so popular that companies are springing up to fill the void and help other organizations do vulnerability disclosure. There's HackerOne, there's Bugcrowd, there's Synack, and there's Zerocopter, who are represented here today. These companies help other organizations set up bug bounty programs, or just simple vulnerability disclosure programs, and I think that is a very important development.

Let's see — oh, I wanted to point out: if you have questions at any point, I can take questions during the talk, and if you have questions about vulnerability disclosure policies, I'm happy to take them.

So I've been talking about a lot of positive developments. Unfortunately there have also been some negative developments, and they also came from the Netherlands — they actually started in a town called Wassenaar. They don't reside in the town of Wassenaar anymore; they're in Vienna these days. This is the Wassenaar Arrangement on export controls for conventional arms and dual-use goods and technologies. That's a whole mouthful — what does it mean? It's the way that 41 governments around the world talk to each other and agree on a list of goods that should be controlled, because these goods can be used both for civilian and for military purposes.
You can think of equipment that makes very small nano-scale chips, or even aluminium tubes that could be used both for construction and for making rockets, or chemicals that can be used to make explosives, that kind of thing. And we should know about this document, because it is what started the crypto wars in the 90s — and it's actually still there. So the process is: the members agree on a list of goods that are dangerous or dual-use. The idea is that if you make this kind of thing and you want to export it to a different country — even to countries that are themselves in the Wassenaar Arrangement — then you have to apply for a license. You go to the government saying "I want an export license", and six months later you get a license to export to a certain organization in another country. So for a certain client you get one license. Or, if you do your paperwork right, you can get a bigger license saying you can export to these kinds of organizations in this list of countries — more of a blanket agreement. In the 90s they introduced the crypto controls, which say: any symmetric algorithm with a key length in excess of 56 bits, or an asymmetric algorithm where the security of the algorithm is based on, among other things, factorization of integers in excess of 512 bits. 512 bits! If you do more than that, you have to apply for an export license, and the government has to judge whether to allow you to export it or not. What happened in the crypto wars is — well, this is actually still there; this is still current law — but they put in an exception saying that if it is open source, freely available, or published academic research, then you can export it, because anybody can get it anyway and controlling the export doesn't make sense. So we have crypto because everything is open source.
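Just to make those two thresholds concrete, here is my own illustrative sketch — the function name and structure are mine, not from the arrangement, and the real control lists have far more categories and exceptions:

```python
def needs_export_license(algorithm_class: str, key_bits: int) -> bool:
    """Illustrative sketch (not legal advice) of the 1990s-era Wassenaar
    crypto thresholds quoted above: symmetric keys over 56 bits, or
    integer-factorization-based asymmetric keys over 512 bits, required
    an export license (absent the open-source/public-research exception)."""
    if algorithm_class == "symmetric":
        return key_bits > 56        # e.g. 56-bit DES was fine, AES-128 was not
    if algorithm_class == "asymmetric-factorization":
        return key_bits > 512       # e.g. RSA-512 was fine, RSA-1024 was not
    raise ValueError(f"unknown algorithm class: {algorithm_class!r}")
```

So under these rules, exporting closed-source software with AES-128 or RSA-1024 would have required a license, which is exactly why the open-source exception mattered.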
If you have closed-source crypto, you have to apply for an export license. This is finally being reconsidered. Yes? Okay, I'll repeat the question. The question is: in which countries does this apply? You can look at the website wassenaar.org, because it's 41 countries and I don't know all of them. It's most Western countries. Russia and Ukraine are actually also in there; I don't think China is. Israel has a special status: they're not formally part of the agreement, but they do follow it, they do implement it. So it's most of the countries that you normally deal with.

So why is this relevant? Well, this is relevant for vulnerability disclosure because some time ago they introduced a clause covering "systems, equipment, and components specially designed or modified for the generation, operation, or delivery of, or communication with, intrusion software" — so anything that has to do with intrusion software — and also "technology for the development of intrusion software". Then what is intrusion software? That is software specially designed to avoid detection or to defeat protective countermeasures — so if you break security, it's probably intrusion software — of a computer or a network-capable device, and to extract data or to modify the standard execution path of a program. I have no idea what the "standard execution path" of a program is. If we were able to establish what the standard execution path of a regular program is, we'd be out of a job, because then the security problem would be solved. So in essence, this was put in by people who made fancy statements that sounded good and seemed like they were talking about security, when actually they were not that well informed. And they said hypervisors, debuggers, etc. — yeah, those might match this kind of definition, so they're excluded; and asset tracking and, for some reason, smart meters are excluded as well. I have no idea why. So how does this work?
This was actually introduced in 2013, and the background was the whole situation around the Arab Spring, where governments were using spyware to spy on the dissidents who were driving the Arab Spring. All of these countries said: they're using all kinds of software that we don't want them to use, and the only way we can stop them is through export control. So they went to the drawing board, didn't bother to consult with the security community, made legislation, and said: yeah, this sounds good, so we'll do this. So in 2013 they introduced the definition of intrusion software. The deal is that the EU implements this as regulation about a year after an agreement in the Wassenaar Arrangement, because all of the EU countries are part of the Wassenaar Arrangement. They don't want to have the discussion again, so usually this is just hammered into law. So on the 31st of December 2014 there was an update of the EU regulation, and it's still there. This means that vulnerability disclosure may actually be covered under this control on intrusion software, because of the notion of "technology" — I didn't explain that, but "technology for the development of intrusion software" means anything that you need for the development of intrusion software, including knowledge. So the transfer of knowledge — like me talking here to you, because you are of different nationalities — is actually export. And if I were to describe a security vulnerability... The previous talk was a description of how to own CPEs — modems — and it included a PoC. If you read the definition in a certain way, if you interpret it in a certain way, he would actually have needed an export license to present it here. That interpretation was not well received in the US, actually, because the US was a little bit late in implementing this.
In the summer of 2015, for reasons of some disagreement within the US government, the Department of Commerce made the most wildly strange interpretation of the Wassenaar rules. They included terms like "zero-day" and "hacking" in actual draft legislation — the whole worst-case interpretation of the Wassenaar Arrangement. And then all of the technology companies that did vulnerability disclosure — Facebook, Google, all of these US companies — went apeshit. There were thousands of responses to the consultation on the implementation of Wassenaar. What resulted was that Congress told the US government to go back to Wassenaar and say: you have to re-discuss this; we're not going to do this. It took a while — unfortunately the whole process at Wassenaar is very, very, very long — but it looks like this may be resolved at Wassenaar at the end of this year, and then it would be resolved in the EU at the end of next year. Meanwhile, the Dutch government holds the interpretation that vulnerability disclosure should not be covered by the notion of intrusion software. So in the Netherlands you're safe. In most European countries you're probably safe too, but it could still come back to bite you.

I've been talking about vulnerability disclosure a lot. The whole idea of vulnerability disclosure in the Netherlands is that there are a couple of building blocks for how you do this. On one hand you have the promises of the organization: the CVD policy. On the other hand you have responsibilities, or guidelines, for the behavior of the researcher or hacker. They have to work together, and if they do, you end up with an optimal process in which the hacker can safely notify an organization of a security vulnerability.
The organization can fix it, and in the end, once it's fixed, the researcher can publish and go out into the world presenting at conferences, et cetera. The Dutch Public Prosecution Service actually thought this was a very good idea. Since the introduction in 2013 there have been two court cases in which the idea of responsible disclosure, or coordinated vulnerability disclosure, has been invoked. In both cases the affected organization did not have a vulnerability disclosure program in place, but the judge still used the guideline as a measuring stick for the behavior of the discloser. I'll go through the cases — but before I do, I have to stress that there has been no case whatsoever of a company that had a vulnerability disclosure program in place going against a security researcher. We don't know of any. And the Public Prosecution Service is actually very happy with this, because it saves them a lot of work.

So, the cases. There are two. The first one is the Groene Hart Hospital in the Netherlands. They had a vulnerable FTP server with an easily brute-forceable administrator password: "groen2000". The hacker found the vulnerable system and informed a journalist, because he was afraid of reporting it directly to the organization. We have some protection of the press, so a journalist is considered a reasonable intermediary for disclosing this to a company. The journalist informed the hospital at 10 a.m. — on, I believe, a Saturday morning — and actually published the story at 3 p.m. that same afternoon. That was because the organization in question had had a crisis management training just a week before. So they got this telephone call from the journalist, they went into panic mode, they started their crisis management process — they had just had the training, so they knew what to do — and one of the first things they wanted to do was put out a press statement.
And of course the journalist wasn't happy about that, because it would ruin his scoop. So when he heard that the organization was going to issue a press statement, he just published his story at 3 p.m. That's why there was such a short disclosure timeline. After this happened, the hospital reported the case to the police. The hacker in this case had used a port scanning tool and had had access for at least two weeks, obfuscating that access by using a VPN. He probably retrieved the password hashes and brute-forced all of the other passwords, or at least tried to. He shared some of the credentials that he found with other people — he bragged about it on chat and IM, et cetera, and shared the credentials. To go even further, he installed malware on one of the servers to get persistent access, but he used his own IP for that, not the VPN — so he made a mistake. He downloaded multiple medical files from the hospital, including files of some famous Dutch people that he was looking for directly, downloading those files, looking at them, and maybe even sharing them. And he sent screenshots of those files to the journalist. Finally, in his defense, he stated that his actions were done in the public interest.

When the judge looked at this case, he said: yes, the Public Prosecution Service has the discretion to prosecute this. The way this works is that the Public Prosecution Service has issued a statement saying "we won't prosecute in these kinds of cases when people follow the responsible disclosure guideline". So one of the first findings the judge made was: yes, these actions blur the line of what is acceptable enough that you can actually prosecute and try to figure out whether he crossed the line or not. The judge then had to decide whether the actions of the reporter, the ethical hacker, were proportionate and subsidiary.
He emphasized in his verdict that revealing security vulnerabilities can be in the public interest, especially when we're talking about medical files. He also found that there was no other way for the hacker to demonstrate the vulnerability, which meant that even installing the malware was acceptable to show that the hospital had very weak security. But the hacker accessed and downloaded the data multiple times, and that is unacceptable and not necessary to show that you had access: you don't have to do it multiple times, you don't have to download the data of famous people, and you shouldn't share it with others. That's the whole idea of coordinated vulnerability disclosure. In the end, he was sentenced to 120 hours of community service — a reasonably light sentence for what he actually did. The judge said this was not in line with the ideas of the responsible disclosure policy.

The other case involved this famous hacker person, Henk Krol. He's not actually a hacker, although he has spoken at hacker conferences at some point. He's actually a politician, and he's a journalist. This case was also about a medical organization, Diagnostiek voor U, which dealt with psychological examinations of people. There's a reference there too, if you want to look it up on the internet. The discloser is a politician and a journalist — like I mentioned, this is Henk Krol. An acquaintance of his discovered a vulnerability — no, he discovered the login and password of a medical system. He just shoulder-surfed it, logged in, and looked at records and all kinds of other things; he did some reconnaissance. He printed a few files, and in the first instance, when he printed those files, he anonymized them. And he called the help desk of the company, saying: you have very poor security and you should do something about it. The help desk had no idea what was going on; they were completely caught by surprise.
They had no idea what responsible disclosure was, no idea about security matters, and no direct line to the security team. So they actually did something sensible: they said, please write up a report and send it to us, and then we can figure out what's going on and what we can do. The discloser was not happy with this; he thought they should have acted more decisively. So he called a local TV station and told them about his findings. He actually went on camera with a few journalists, demonstrating that he could log in and showing, on camera, medical files that he had access to.

In the verdict in this case, the judge again stressed that disclosing vulnerabilities in a sensitive medical system can be in the public interest, the general interest. It was reasonable to test the initial findings in practice — so you are allowed to log in and do some reconnaissance. It's defensible to print medical files and anonymize them, to demonstrate to the organization that you actually had access and that you're not talking nonsense. The judge used a three-part test in light of Article 10 of the European Convention on Human Rights, the freedom of expression and information: the discloser should act in the general interest, the discloser should act proportionately, and he should act in the least invasive way. On proportionality: it wasn't necessary to review the files with a journalist on camera; he crossed the line there. On invasiveness: there was also no reason to go to the media directly. There was no big vulnerability that everybody could exploit, because the credentials were shoulder-surfed, and there was no indication that other people had access to those credentials, even though they were not that complex. The judge also thought that the suspect should have made more effort to contact the responsible parties, because he was — and is again — a member of parliament.
And he's a journalist. It's reasonable to expect him to put more effort into actually contacting the organization, finding the right people, talking to them, and getting this fixed. The result was just a €1,500 fine. But it still was a statement saying: you crossed the line, and this is not acceptable. And it was very helpful to us that the judge very clearly identified where he crossed the line.

There are many, many cases where companies are handling this in a positive way. I've highlighted one because it shows that the company in question did a complete turnaround: the Dutch telco KPN. They were actually owned very badly at some point, and that caused them to completely rethink their security. As part of that complete overhaul, they published a vulnerability disclosure program. And one of the things that happened is that they got a report that was very successful: two hackers found vulnerabilities in CPEs, the modems, which were widely used. It may have been the vulnerability that the previous speaker was talking about, I don't know. The vulnerability could be used for DDoS attacks and to gain access to the CPEs — it gave remote access — and to intercept the data of all of their customers. The hackers in this case acted very cautiously: they only tested this against their own equipment, not against any third-party equipment, and they reported the vulnerability to KPN. KPN took up the report, and after acknowledging the initial report and doing some initial research, they actually invited the hackers to the office, because the vulnerability seemed to be very complex and they couldn't really understand from the report what was going on.
So they invited the hackers to give a presentation to the security team on how they did their research and what they found, so that the team could start fixing it. They managed to fix it within a very short amount of time. The hackers were rewarded with KPN goodies and were also allowed — and even sponsored, I think — to go to a conference; I think it was Hack.lu where they presented this. KPN furthermore used this as an outward statement, highlighting that they took security seriously, and they invited others to do the same: they released a press statement and made a video with the hackers to promote this idea. And KPN is also one of the partners in the manifesto that I mentioned earlier.

So that is basically what I wanted to talk to you about. One of the last things I want to highlight: I mentioned earlier the NTIA research on researchers and their motivation. About 400 people responded to the survey, and one of the questions was: what motivates you to do this research? This was a multiple-option question, so the answers add up to far more than 100%. 14% of the researchers wished to remain anonymous. Only 15% expected compensation for their work, even if the company doesn't have a statement saying that they pay for this. 53% expected at least an acknowledgement, and 57% just want to help — they want to help fix things, even if they're not paid. And the most important one: 70% of disclosers expect to be communicated with regularly, because they can't see what is going on inside an organization while it is fixing things. So organizations should really reach out to disclosers to keep them updated about the process. The key takeaways that I hope you got from this: there are governments that are taking vulnerability disclosers seriously.
More and more governments in Europe are actually doing this: Belgium has issued a statement that they want to, France has some legislation, the Italians are thinking about implementing this, there has been talk about it in Hungary, and, like I mentioned, the US now also has an official statement on how to do this. The court cases in the Netherlands have shown that if a company publishes a vulnerability disclosure program — or even if a government states that it supports the idea of vulnerability disclosure — it doesn't give you, security researchers, carte blanche to do anything and completely own the organization. There are limits to what you can do in coordinated vulnerability disclosure. And in the Netherlands we see that more and more organizations really appreciate this idea and cherish the involvement of you, the security researcher community. So with that, I thank you very much for your attention, and I'm happy to take your questions.

We have about 20 minutes for Q&A, so please line up at the mic and take it away.

One thing that I want to add, sorry: the guideline that we made in the Netherlands was published in 2013. We're actually reviewing that document, because it's a little bit dated; we're rewriting it, and we hope to publish it somewhere around the end of this year. So if you have any comments on how to improve it, I'd be happy to hear them just after this talk. The ways to contact me are down there.

Thanks for your talk. I mainly want to go into the Dutch situation. If a company doesn't have a responsible disclosure policy published, I as a security researcher am not covered, and I can still be prosecuted. Shouldn't we fix that? I mean, it hasn't happened yet, but it could happen, right?
Well, that's what I wanted to show with the two court cases I mentioned: those two organizations didn't have a vulnerability disclosure program in place, and the judge still used the ideas of the vulnerability disclosure guideline as a measuring stick for the behavior of the security researcher. So if you act within the guidelines of the vulnerability disclosure program, the Public Prosecution Service doesn't see any reason to prosecute you.

Yeah, although under the current responsible disclosure policy I cannot download any information from a server, and in these court cases they went a bit broader than that. So shouldn't that part be fixed? I recently found a semi-open FTP server and I thought, yeah, what's on it? Let's download it. But then I'm already in violation according to responsible disclosure, and could actually get into trouble with the company.

Yeah, well, the idea is that you do the minimum necessary to show that there's a vulnerability. For an FTP server: if you just do a directory listing and show what kinds of files there are — and if you can already guess from the file names what's in those files — then you can probably use that as a statement saying, here, I have access to your customer database, and that's not good. For databases, if you get MySQL access or something similar, just retrieving the column headers shows that you have access without downloading any sensitive data.

Yeah, okay, okay. But shouldn't we shield the disclosers, the security researchers? Maybe have an organization set up at the national level?

We actually tried to do this with the policy. If you still feel that you're not protected enough, or if you think that the company will take aggressive action against the security researcher, then you can contact us, the NCSC, to do the disclosure.

You only do this for vital sectors, right?
No, I mean, for vital sectors and central government we are the first point of contact, or for most of those companies we are actually the first point of contact, and if something goes wrong, we're responsible for dealing with the issue. If you have found a security vulnerability in any other kind of organization and you reported it to them, and you at least tried and it doesn't work out, then we can try to help. Okay, because yeah, my experience, and that of friends, is that we find a lot of shit, and every time we report it, it goes bad, so we just stop reporting it. Yeah, I can understand. I mean, we have limited capacity. We can't fix the security of every company in the Netherlands. But if it's very serious, if it affects a lot of people, then we will at least try. Thanks. Thanks. It's good to know that things are relatively civilized in the Netherlands. Can you give an overview of what is happening in other countries? The rest of Europe, the United States, China, Japan, the more economically developed countries? And the related question is: what is reasonably safe conduct in other countries, because the laws and practices vary a lot. So how do you stay on the safe side if you are doing this in other countries? And also a third question: what happens when the hacker, the security researcher, is in one country and the company is in another country? Yeah. So first of all, I'm not a lawyer, and I'm not an international lawyer. There has been a publication, so far only in Dutch and we're trying to get it translated, on what happens if you're a Dutch researcher and you try to report a vulnerability in Germany. And it could be that if the company in Germany is really pissed off, then you could be in danger of jail time in Germany. But those are very special circumstances. I don't think that that's actually going to happen, but in theory it could.
And something similar could happen in other countries, especially in Europe, where there are all kinds of agreements between different countries on how to deal with these international issues. But to come back to your first question, the development in other countries is that they are seeing the light that the path of the Netherlands has shown them. I was completely surprised by the US Department of Justice putting out this framework just last week, because I thought that they were really not a fan of this whole idea, but they actually did, and they outlined very good ways for companies to safeguard security researchers from prosecution, by the companies but also by the government. And I mentioned before the Belgians, the French, Italy, Hungary, Estonia; those kinds of countries are seeing the light and they're doing stuff. In relation to China, it's difficult. There used to be some kind of disclosure platform there which had a pretty progressive fixed publication date of 90 days. So you reported it there, the organization was notified of the vulnerability, and after 90 days, no questions asked, the stuff was published. The weird thing is that last year the organization that was responsible for maintaining that platform disappeared off the map, and the platform with it. It seems that they pissed off the wrong people and disappeared. I have no idea what happened. But on the other hand, the Chinese network manufacturer Huawei actually has a disclosure program in place. So even in China this idea is accepted, and they seem to be making the right movements. And there are others here who have more experience disclosing internationally. He's sitting right there and he may help you. If you want to discuss that, I think he'd be happy to tell you his experiences. Does that answer your questions?
Yes, more or less. I just want to add my understanding that if you want to stay on the safe side, you don't hack anybody else's network. If you want to show that a piece of equipment has a vulnerability, buy that piece of equipment and hack it inside your own lab. Then you are on the safe side. Yes, but even then... Because no data is compromised. But even then, that doesn't mean you're protected in every country in the world, unfortunately. Even in that case, you can successfully get sued and held responsible. That's bad. I'm afraid that's the case. Next question, please. Hi. When a company has a vulnerability disclosure program, and it explicitly states that you are not allowed to do something, for example, you are not allowed to use automated scanners, and you use them and find vulnerabilities: is it legal or isn't it? It will depend on the case. And this is a very lawyerly answer. But it will depend very much on the consequences of what happened. Usually companies put that in place because they're afraid that their equipment is going to fall over if every hacker around the world is going to scan their whole network. And it will. If everybody starts scanning everything, then things fall over. If you do that in a sensible way, in the least invasive way possible, then I think you're probably on the safe side. But you shouldn't do this everywhere. Okay, but say you are on the safe side, but imagine the company is still mad and reports it to the police. Yeah. Well, then the same thing applies, like I mentioned in the court cases. If you acted responsibly, then the Public Prosecution Office is probably not going to prosecute. Yeah, that's the best answer I can give you. Okay, thank you. Next question, please. The explosion in the Netherlands happened on a lot of different fronts at the same time: media attention and politicians. Can you put the mic up a little bit? Yeah, awesome.
There was a lot of attention on the subject from different fronts: media attention, civil servants like the NCSC, and also big companies who set a good example. Is there anything these different fronts can do to improve the situation? So, yes, we should be really thankful to you as well, because you did a lot of hard work for the situation in the Netherlands. But I think that that process also showed that it takes broad support within a country to actually make this into a policy. And I think that this actually happened in the U.S. as well, where there are a lot of companies in Silicon Valley which already had disclosure programs and even bug bounty programs; they're paying researchers to do this. And they were saying: we would really like this to be legal, because we want more of these kinds of reports. I've had discussions with the policy departments in the Netherlands and all kinds of other lawmakers about how we would approach this in Europe. And I've advised them against making this a regulation in Europe. Making vulnerability disclosure programs mandatory in all of Europe is not going to make this a better place. What really should happen is that we talk to the governments, that we talk to all of the international corporations and organizations, and we talk to the security community, to make everybody receptive to the idea that this is possible. And once a country reaches a certain level, they will automatically start looking to other countries that are doing the same thing. Hopefully they will look into the documents that we published, that the GCCS published, that the American government published. And they will see that there is a lot of help already out there on how to do this, how to implement it, and how to set up these kinds of processes so that it actually works in the right way. And we now have a couple of years showing that this is actually very successful, and that everybody is happier now that this is possible. We have time for one more question.
So if anyone's interested, I see a hand. And please eat the microphone; the people watching the stream will be very thankful. Hi there. Hi. I do quite a lot of physical vulnerability research, and it seems that the physical world is lagging well behind the digital world. There's no such thing as vulnerability disclosure programs, nothing like that, no bug bounties. I was wondering, is there any way we can push the physical security industry the same sort of way the digital security industry is going? How do you drive that, you know? Because a lot of the companies don't want to know. No. The physical companies that are doing software are usually lagging behind in all of the developments, and they are unfortunately also behind in the development of vulnerability disclosure. One exception is car manufacturers. The car manufacturers' lobby actually put out a statement saying: we take responsibility for all of the software that's in cars. And that's actually a pretty big step, because there are hundreds of vendors supplying equipment, parts, and software for cars. And the car manufacturers taking responsibility themselves for coordinating vulnerability disclosure is actually a big step, because they own the process and they're actually very receptive to it. And one of the things that happened in the US, before the idea of vulnerability disclosure: they have the Digital Millennium Copyright Act, which is loathed here. But one of the good things in there is that every once in a while it's reviewed, and one of those reviews happened last year, and they put in an exception for cars and medical devices. So as a security researcher in the US, you are already allowed to research cars and medical devices. The FDA in the US has also taken a big step, saying that if a medical company becomes aware of a security vulnerability in a medical device and they are able to fix it within 90 days, they don't have to do a recertification.
So those kinds of nudges in policy or in regulations can really help make other kinds of organizations receptive to the idea of vulnerability disclosure. And I'm seeing that more governments are starting to realize that this is actually the way to go: that we have to help organizations that deal with physical devices to become more receptive to vulnerability disclosure, because physical devices tend to be a risk to people when they become vulnerable. And that becomes very serious very quickly, and we need to get that shit fixed. Thanks. Thank you, Jeroen. Let's give Jeroen a warm hand.