So this panel is The Hackers, the Lawyers, and the Defense Fund. I am Harley Geiger and I am your moderator for this panel. There is unlikely to be a hacker at this conference who has not contemplated at some point the legal trouble they may face as a result of good-faith security testing. Many people have experienced this directly, and many more will experience it at some point in their hacking careers. One of the things that makes the security community so effective is that it is always pushing the boundaries of permissible behavior. By pushing that boundary, it helps to find that boundary; it also creates space to innovate and to simulate criminal attacks so that we can better prepare ourselves to defend against and preempt those very attacks. This is how we fix vulnerabilities before malicious actors get hold of them. In many ways the law recognizes this now and is working to accommodate this kind of security research activity. We're seeing more coordinated vulnerability disclosure reflected in laws in the United States, at the state level, and abroad. However, we are also seeing that hackers continue to face legal threats from various sources, like the governor of Missouri threatening a newspaper. Some of these threats are bogus or empty, but some are quite serious. So what is it like for a hacker who faces legal threats, and what sort of resources are available for security researchers who run into legal complications? For this panel we're going to hear first from a hacker who has lived this and faced legal threats: that is Miles McCain. Then we'll hear from Charley Snyder from Google about how the ecosystem benefits from the work of security researchers and the chilling effect that legal threats can have. Third, we'll hear from Kurt Opsahl. He is the Associate General Counsel at the Filecoin Foundation and has provided legal representation to many security researchers. 
Then we'll hear from Hannah Zhao from the Electronic Frontier Foundation about the ways that EFF provides legal services to hackers through the Coders' Rights Project and impact litigation. And lastly, we'll hear again from me about the Security Research Legal Defense Fund. This defense fund was recently launched and aims to provide funding for the defense of good-faith security research. One last thing before we kick that off: nothing you hear on this stage should be construed as legal advice. You do need a lawyer. Lawyers are really great, but they're not sitting up here at this moment. So Miles, if you wouldn't mind, please introduce yourself and tell us your story. Awesome. Thanks, everybody. Hey, everyone. I'm Miles. I am a college student, open source software developer, and enthusiastic security researcher. I'm going to briefly share my experience receiving a legal threat for good-faith security research, and then I'm going to zoom out and talk about some key takeaways I had from this experience. But before I do that, I want to give a shout-out to Adi right there and the whole Stanford Applied Cyber crew. If you want to stand up and just wave as I say this: Adi was one of my co-conspirators on the vulnerability disclosure that I'm about to talk about, and we would not have been able to stand up for ourselves in the way that we did had it not been for the amazing support of the Stanford Applied Cyber community. So thanks for being here. All right, so this is a fun story. Last October, a Stanford student startup called Fizz was getting super popular on campus. This was an anonymous social media app that claimed to be 100% secure. What could go wrong with something like that? Well, you know, me and a few security-minded friends, Adi being one of them, were drawn like moths to a flame when we heard that. And so one Friday night, we decided to evaluate whether Fizz really was 100% secure like they claimed. 
Well, dear audience, as you might be able to tell from me sitting here, they were not. Within a couple hours, we were able to gain full read-write access to their database, where they were storing all posts and users and everything you could imagine, entirely de-anonymized. 100% secure. All right, so at this point, we did what any good security researcher is supposed to do, which many of you have probably done: we responsibly disclosed what we found. We wrote a detailed vulnerability disclosure report. We suggested remediations. And we also proactively agreed not to talk about our findings until an embargo date, which was a fair amount of time in the future. We wanted to give them time to fix the issues. And we sent them the report via email. At first, they were super grateful. They said, hey, fixing these issues is our absolute top priority. And every week or so, they would send us an update. And then one day, they sent us a threat. This was a crazy threat. I remember it really vividly. I was just finishing up a run, and I looked at my phone. And my heart rate went way up. That is not what's supposed to happen after you finish a run. In the threat, they said that we had violated state and federal law, that we had conspired to commit a felony. They also brought up the Stanford disciplinary code, which I thought was pretty odd. And then at the very end of their threat, they left a demand: don't ever talk about this publicly. And if you agree to never talk about this, then we will not pursue legal action. Well, as you can probably tell by me sitting here, we did not agree to that demand, and I'll talk about why. But clearly, they wanted to scare us into silence. So what do you do when you get a letter like this? Well, I totally freaked out. I was angry and scared. I usually try to be a pretty level-headed person, but this was something different. 
But the very first thing that our friend Cooper, another of our co-conspirators, said in our disclosure group chat when we got the threat was: stay calm, don't do anything fucking stupid, we're gonna need a lawyer. And if you can't tell from this advice, this was not Cooper's first time receiving legal threats. So we started asking for help in our network. And within a few days, we were connected to Kurt, who's sitting right here, and Andrew Crocker at the Electronic Frontier Foundation. After talking with them a little bit, Kurt and Andrew generously agreed to represent us in our response to the letter pro bono. So we walked them through our disclosure and our documentation, and with their advice, we made the decision not to cave to the threats. Also, hey, I see a big crowd of AC people who just walked in. Love y'all. And so Kurt and Andrew drafted a response to Fizz, and they totally shut the threat down. As an aside, the Stanford Daily published both the threat and our response. This is public; I really recommend you read it if you haven't. Anyway, after we responded, the Fizz team asked to meet, and we were able to resolve the situation amicably. We pushed Fizz to proactively disclose the vulnerability to their users publicly, which they did eventually do. So taking a step back: getting a legal threat for our good-faith security research was incredibly stressful. And the fact that it came from our classmates just added insult to injury. I have three key takeaways from this experience that I want to share. First, keep your research above board and really well documented. Ahead of time, think about what you're trying to accomplish, write it down, and make sure that you don't cross any ethical lines. A big reason why we were able to resolve the situation amicably was that we played by the rules. We didn't save or leak the data that we had access to. We didn't mess with anyone's account. We didn't cause any damage. 
And we kept really detailed documentation of absolutely everything we did. That clean documentation helped us as we were writing our disclosure, and I imagine it also helped Andrew and Kurt as they were drafting the response. Second: stay calm. I cannot tell you how much I wanted to curse the Fizz team out over email. My God. But no, we had to keep it professional, even as they were resorting to legal scare tactics. Your goal when you get a legal threat is to get out of it, to resolve the situation amicably, to not go to jail. That is it. And the temporary satisfaction of saying "fuck you," while absolutely real, is just not worth giving up the possibility of resolving the situation amicably. And the third and final takeaway: get a lawyer. And I am not just saying that because I am on a panel with a bunch of lawyers. We could have tried to navigate this process on our own, but that would have been an absolutely profound mistake. They would have escalated. After all, they were banking on us being naive and falling for their scare tactics. But thankfully, Kurt and Andrew stepped in and really saved the day. Now, I recognize that not everyone has the ability to get the Electronic Frontier Foundation to represent them pro bono, and EFF doesn't have the ability to represent everyone who faces a legal threat. But there are an increasing number of resources available for good-faith security researchers who do receive legal threats. We're going to be talking about those on this panel, and you should absolutely try to make use of them. Do not try to fly solo. Now, I will hand it off to Charley, who is the head of security policy at Google and who has spent a long time on the other side of the vulnerability disclosure process. Thank you, Miles. So yeah, as Miles said, I am here to represent the other side of the equation: the bug collectors. 
Throughout my career, I've had the privilege of collecting bugs from skilled researchers all over the world. And my career has been spent exclusively in very, very large organizations that in theory hold a lot of power and leverage over hackers. At the Department of Defense, I helped found and manage the first bug bounty pilot, called Hack the Pentagon, and I wrote their first vulnerability disclosure policy. Later, I worked at a financial institution where I led researcher engagement and their bounty programs. And now at Google, I get to work with our bug collector teams all over the company. In all of these experiences, I've seen firsthand the power and leverage that organizations like mine can hold over researchers. Even before we could start the Hack the Pentagon bug bounty pilot, I first had to search through the massive Defense Department bureaucracy to identify who within the department had been machine-gunning out cease-and-desist letters to anyone they could find who had been scanning any part of our /8 IP ranges. Talk about sending a mixed message as you're trying to get a bug bounty program started. And during the first bug bounty, one of our very top contributors to the program, who I believe has continued to be one of the top contributors to the various DoD vulnerability disclosure programs, was an individual who had run afoul of US computer hacking laws. He served his time, he got out, and he was in the first wave of people to raise their hands to contribute to government security in that first bug bounty. I think it took a lot of courage to have gone through what that person did, and faced those risks, and still come back to help us. And at Google, we see as well the power that companies have to chill security research. 
Sometimes we receive reports from researchers for bugs in other companies' products, because those researchers are afraid of what those companies might do, given their reputation on disclosure issues. And so they come to us instead. I think that's really unfortunate, and it's ultimately those companies' loss. And I think it's all of our loss if hackers can't contribute. You know, from a bug collector's perspective, we want hackers to be able to engage on an equal footing with us without fear of legal reprisal. With more legal certainty in the equation, we can have more open and deeper collaboration and conversations. And ultimately, from a company's perspective, we're gonna get more and better bugs. Each new report gives us an opportunity to close off an avenue of attack and protect users. But when vague laws and companies behaving badly create this chilling effect on security research, I think ultimately the only ones who truly win are attackers, and it's users who lose. So that's why I'm super excited to be here and why I'm glad that the Legal Defense Fund exists now. And I think all companies and all bug collectors should be glad it exists. So I'll turn it now to Kurt. Hey, everybody. Is this on? Hey, everybody. My name is Kurt Opsahl. I'm the Associate General Counsel for Cybersecurity and Civil Liberties Policy with the Filecoin Foundation. I also volunteer my time with EFF as a special counsel, and for many, many years I was working with EFF on the Coders' Rights Project. I've been in this community, trying to help with the legal situation, since the Alexis Park days; DEF CON 13 was my first one. We had an exciting time with security researcher Mike Lynn and Cisco, and some disputes over a talk that had to be physically removed from the program book due to some threats. 
And I've really enjoyed working with countless people in the security research community, helping them avoid situations or deal with situations that occur. So I want to talk to you a little bit about some of that experience. Just to set the stage, broadly speaking, there are two categories of legal issues that come up for security researchers. The first are issues around doing the research itself, legal issues that may arise from questions like: are you hacking into someone's system? Will that be considered an intrusion? Is there an anti-circumvention issue arising out of the Digital Millennium Copyright Act? An end-user license agreement or terms of service may have a contractual provision that bears on it. If you're doing wireless research, it may involve getting contents of communications, which brings up what the law likes to call wiretapping. So there's a host of issues that can come up in the course of the research, and the question is how you navigate those in order to minimize risk. The second large category is the disclosure. This comes up where there is a vendor, or a commercial entity especially, whose vulnerability you are exposing; they may become sad and upset. Now, these days it's fairly well established for technology companies that have been around this space for a long time to receive bug reports; they may have a bug bounty program that's been running successfully for years. And it has been a really great advantage to the community to have bug bounty programs, so you can see what the rules are, work with them, have points of contact. But it's not always the case. More old school industries, sometimes I call it first contact situations: the first time that they have been reached out to by the security community to say, your security is broken, people can see all this info, sometimes they panic, or they want to make sure this gets shut down. 
They're thinking of it from a comms point of view, a communications point of view, of trying to stop that communication. So the good news is that disclosing truthful information about a vulnerability is strongly protected by the First Amendment. It is hard for someone to build a case around saying you should not say something which is truthful, even if it is something that is embarrassing or awkward. But some tension arises because these two issues can be interrelated. If something in the course of doing the research was toward the edge of legal lines, or can be construed that way, then publicly disclosing it, if done carelessly, might involve admitting to things on stage that can and will be used against you. But also, even if you did play by the rules and did it well, there's still a threat of litigation over it, and the possibility that you'll be facing a very awkward situation, getting threats that worry you not about the legality of the disclosure, but that they may go after you for the underlying research. So what should hackers do in these kinds of circumstances? Well, first of all, have a lawyer with you at all times. But that is somewhat inconvenient and hard to do. So the question is when you should reach out, and also what you can do to help mitigate some of these things as early as possible. When I talk to security researchers, some of the advice is very dependent on what stage things are at. If it's early on, you can minimize the risk in doing the research itself. For example, if it's a hardware bug you're trying to find, and you own the hardware, then that resolves whether you are intruding on somebody else's hardware. 
You'd be amazed at the number of machines that you can buy on eBay. If you wanted to test, say, voting machines in the early days, it was very hard to get cooperation from the voting vendors, but you could buy used voting machines on eBay. Someone was doing a test of some TSA scanning machines; turns out those are available on eBay as well. So you can do a lot of that stuff. Wireless communications: well, you can minimize risk around wiretapping by testing only communications where you've consented, say your own, or you and your buddy's, but not doing it more broadly to strangers. So there's a lot that can be done early on, though sometimes our first contact with a researcher is after some of that research is done, and it's about trying to navigate the path forward. But I would say you can and should reach out: if you have some questions about that, reach out to the Coders' Rights Project at EFF, and Hannah will talk a little bit more about that, to avoid some of these problems. At the disclosure stage, I would say probably the key thing, if you're about to disclose, is to make sure that you reach someone who understands security or is an engineer, trying to establish a sort of geek-to-geek communication, not going through marketing or legal, but with an engineer. Sometimes people in this community know somebody there, or you can check disclose.io, which has a list of points of contact. But if you can reach an engineer, they probably will understand it. Even if it's an engineer working at an old school firm, they might be an internal advocate: yeah, this is embarrassing, but this happens and we gotta deal with it. But sometimes the only path forward is going through a more public comms channel, or things get escalated, and that is a sign. A sign things might be going weird. That is an excellent time to reach out for a lawyer. 
If there are signs, if their lawyer is getting back to you, even if it's not a threat, if they're asking some weird questions, maybe you wanna talk to your own lawyer before you respond. As Miles was saying, document things, especially things that show that you did the right thing, that you behaved properly, that you didn't mess with customer information. And if they say nice things early on about how much they appreciated it, save a copy of those emails, because in case things get weird later, it's good to have. Most of the time, these things can be resolved, and everyone goes away ultimately happy. There are some awkward moments along the way, but more often than not, it can be resolved. And even when there is a dispute, I say Barbra Streisand is your friend. The Streisand effect, as many of you I'm sure know, is when the news about the dispute gets more coverage than the issue they're trying to cover up would have gotten anyway. And I think that helped when the Stanford Daily published the cease-and-desist and response letters. On my mind today is that many years ago, I think it was 2008, there was a presentation here at DEF CON about a subway hack, the CharlieCard and the MBTA. And the MBTA did get very upset. In fact, so upset they went to court and tried to get the talk shut down. They got a temporary restraining order. And that led to the talk being replaced by a press conference about how they had squelched speech about their security, which ultimately probably had more people in attendance than your average talk and got it into big news. So that's a strong incentive if someone on the other side is behaving rationally: going ballistic on you will actually hurt them far more than trying to resolve it amicably. But of course, not everybody is rational. And oftentimes we say that you can guess what a rational person might do, but your opponents in litigation are not always rational. 
And so that's when things step up to litigation. One of the challenges is that while you can get some great pro bono services, and I think you should definitely reach out to EFF, EFF can't represent everybody in litigation. Litigation is a much more time-consuming and legal-intensive process. It can be fun as a lawyer doing this, but it is not fun to be the person in the litigation. Even if you're going to win, it still sucks along the way, but it's good to have counsel. And so one of the things I like about the Security Research Legal Defense Fund is that in those circumstances where some money will help get an attorney, where pro bono resources have been used as best they can but are not enough, that fund is available. Hopefully it's one of those things that never gets used, but having it available can help reduce the chilling effect that comes from these threats and cease-and-desists. Often the nastygrams, like Miles was just describing, are full of bluster and threats, and some of that bluster is just trying to scare you. Some of the ridiculously short deadlines, things like that, are trying to induce panic. And part of that is the bullying of going after someone who is not represented, who sees the scary thing and just stops speaking about it even though they have every right to. So having some of that reassurance available can help out and make sure that talks continue to happen. And the good news on the MBTA is I was just seeing yesterday that some other researchers also found some problems with the CharlieCard on the MBTA, and this time the MBTA was very nice to the researchers. They listened to what the vulnerability was. They talked it through politely. So things can change and get better. It went from a bad first contact situation to the way we'd like to see vendors and researchers working together to solve this common problem we have of security. And one last thing before I pass this on. We talk a lot about, I'm a U.S. 
lawyer, and I've talked a lot about the situation in the United States, but security research is a global phenomenon and is done all over the world. One of the things that is cool about the United States is that there's a long tradition of pro bono legal representation, and a lot of lawyers, even ones who work at big law firms, see it as part of their job to give back some of their time to the community through pro bono work. But this is not a universal tradition, and in many places around the world, pro bono counsel is not readily available, or not available at all. So that is another reason it's good to have some funding available to try and help people who are in a bad spot somewhere outside of the U.S. With that, let me turn it over to Hannah. Hi everyone. My name is Hannah Zhao. I'm a staff attorney at the Electronic Frontier Foundation, and I'm part of EFF's Coders' Rights Project. So I'm gonna talk a little bit about the Coders' Rights Project, the kind of help that we offer to security researchers, and the kind of counseling that we can provide for y'all. So first, EFF, which I'm sure a lot of you are familiar with, was actually founded over 30 years ago in order to legally defend developers and programmers. The Coders' Rights Project continues that tradition of representing and counseling tinkerers and hackers and people who want to innovate, protecting their rights and promoting innovation. So if you are someone who is doing security research, what happens when you need help? How should you reach out to us? Well, there are several ways that we get our clients at the Coders' Rights Project. The first way is through intake, which is our email, info@eff.org. You can email your questions there and say that you're seeking legal counsel about your issue. The emails that we exchange thereafter will be privileged, so you can disclose a lot more information so that we can help you. 
Another way that we get folks is through informal referral networks. This is what happened with Miles: people are asking around, maybe they're in some sort of trouble where they received a cease-and-desist letter or something like that, and other people will say, well, I know someone at EFF, or, you should reach out to EFF, and those folks reach out to us. And the last way, which happens all the time, is here at conferences like DEF CON and other security conferences, where people will come up to us after our talks or come up to the booth and say, hey, we have these issues, and we can give out our contact information. So again, please do not talk about the possible felony that you committed in a room full of people; we will have a private conversation about it instead, so that we can ensure that your privilege is protected. We do get a lot of requests through intake. We are able to provide counseling for the vast majority of folks who reach out to the Coders' Rights Project. What that means is that we will sit down with you, and we'll get information from you to make sure we have no conflicts of interest. Then we will get details on what you did, how you did it, what exactly your worries are, and what your goals are. And we will tell you about the legal risks, the possible laws that could be invoked by adverse parties, the different things that could happen, and whether they're more likely or less likely to happen. It's hard to talk in absolutes in the law, but we can definitely give you an idea of what your choices down the road can bring. So why should you reach out? Well, the earlier the better. If you have even an inkling that there might be some legal problems down the road, you shouldn't wait until you get that cease-and-desist letter. 
Reaching out early is really helpful because we can lay out the path before you, and you can make informed decisions based on your risk tolerance and your priorities about what actions you want to take and what actions you don't want to take, because now you know what kind of risks those decisions can bring in the future. So earlier is always better. As was mentioned before, EFF doesn't always represent our Coders' Rights clients in litigation, and the reason for that is that EFF is an impact litigation firm. That means we are always either trying to make good law with cases or trying to prevent bad laws from happening with cases. And so when we are thinking about what case to take, we think about things like the facts of the case, whether they are good, whether they might be in a good jurisdiction, who the judges are that are likely to hear the case, and whether those judges are friendly to the arguments that we're making. Those types of considerations come into play, and the reason we can't take on everyone is that we have limited resources. There's a Chinese saying that you need to use the good steel on the blade of the sword. You don't want to use it for the hilt, you don't want to use it for the sheath; you want to use it on the blade itself. And because we have limited resources, we really want to make sure that we're creating the most impact when we spend those resources. Now, individual representation, on the other hand, means you fight the best you can for your individual client. I used to be a public defender before I joined EFF, and no matter who my client was, I had to defend them; I made the best legal arguments I could for their case. I didn't think about things like, are those arguments good for the community? Whereas with impact litigation, you think about whether the arguments we're going to make are going to be good for the security research community as a whole. 
So that's another reason why we don't take on every one of our counseling clients as litigation clients. But that's not to say it never happens. Just a couple years ago, Cara Gagliano, another member of the Coders' Rights team, and I represented another college student, actually, who was also a security researcher. He looked into the code of the proctoring software that his university used, and he realized there were references in that code to functions that Proctorio, the proctoring software company, claimed they don't have in their software and never would. So he posted snippets of that code online and criticized the company. Proctorio took a bunch of different actions to take down the criticism as well as the code, and he came to us, and we began to counsel him on how to respond to the takedown notices so that his code and his criticism got posted back up on Pastebin and GitHub. Later, as we were in the process of counseling him, we actually decided to take him on as a client in litigation, and we filed a suit against Proctorio for what it was doing. The reason we did that was that this case had a lot of specific facts that were good. First, we think that it's very important to protect security researchers when they expose hypocrisy and inconsistencies between a company's claims and what its software seemingly is actually doing. And we think that is good for the security research community as a whole. On top of that, it also concerned various other issues that EFF is involved with, including student privacy and copyright. So, like I said, it's not impossible for us to take on our counseling clients as direct litigation clients, but it doesn't happen all the time. And with that, I will hand it off to Harley. All right, so I'm gonna talk about the Security Research Legal Defense Fund and sort of round us out. 
We may or may not have time for questions because of the sound issue earlier, so we're running a little bit behind, but we'll get to it if we can. I am Harley Geiger and I am a counsel at Venable LLP. I work on cybersecurity and privacy law, and I've been doing so for about 15 years. I'm also the coordinator of the world's first Security Research Legal Defense Fund. I'm gonna talk about that as a resource that security researchers facing legal threats are able to use. So, the defense fund is an independent nonprofit organization. The defense fund is intended to help individuals who face legal threats due to good-faith security research or vulnerability disclosure. The defense fund is not going to provide direct legal representation, so it's not gonna act as your attorney; you have to find your own attorney. However, the defense fund will provide funds for legal representation if you meet certain criteria. The criteria for being eligible are mostly centered around making sure that the defense fund is independent and acts in the public interest. So, one, the defense fund has an independent board. The board approves who can qualify for funds. The board is presently Amie Stepanovich from the Future of Privacy Forum, James Dempsey from UC Berkeley, and Kurt Opsahl from the Filecoin Foundation. Second criteria: security researchers have to show some financial need. They don't necessarily need to be completely indigent, but in order for the use of community funds to be fair, there has to be some sort of financial need. And then, as I mentioned a little bit earlier, the defense fund aims to help individuals facing threats related to good-faith security research and vulnerability disclosure. 
So that definition, loosely, is testing or investigating security flaws for the purpose of enhancing the security of the devices, the users, or the systems. The defense fund has to act in the public interest, so it's not going to defend you, even if you're normally a good faith security researcher, if you commit an act of extortion, or if you have deliberately harmed a system or exfiltrated PII; things like that may disqualify you. Malicious attacks cannot qualify. So, as an example: an independent hacker finds a vulnerability in a smart product. They reach out to the product manufacturer to disclose the vulnerability. The manufacturer threatens a lawsuit against the hacker unless they sign an NDA about the vulnerability, and the hacker has an attorney but can't really afford to resist the lawsuit. The hacker can reach out to the Security Research Legal Defense Fund and ask for help. The defense fund will hear the background of the threat, and the board will vote on whether or not to help the hacker push back on the threatened lawsuit. If the board votes yes, then it will help pay attorney and court fees for the hacker. So our vision is that the Security Research Legal Defense Fund will help deter unfounded or over-aggressive legal threats against hackers who are acting in good faith. We also expect that this will help cut into the chilling effect that security researchers face, once they know this resource is out there to potentially help them if they're acting in good faith. And lastly, we view the defense fund as encouraging hackers to model good behavior, because you will not qualify for the funds unless you are acting in good faith and doing good faith security research. So, a note on the status and how the community can help, and then we'll see if we can take questions if we're allowed. The Security Research Legal Defense Fund has launched; it is operational.
You can visit the website now at securityresearchlegaldefensefund.org, or if that's too much, srldf.org. We have applied for 501(c)(3) charitable status with the IRS. And of course the Legal Defense Fund is a fund, so its funding model is driven by community donations, which means the community is in a position to help itself. Those donations are tax deductible, and I believe Google has an announcement to make on donations to the fund. Sure, yeah. So I actually have two PSAs and one announcement. The first is a public service announcement for fellow corporate overlords in the audience: companies, consider finding a way to donate to the program. The second PSA is for researchers: if you are inclined to donate bug bounty winnings for any reason, either for its own sake or because the terms of your employment contract don't allow you to accept winnings, consider donating to the fund. And I'm happy to announce that for any researcher in our Google Bug Hunters program who opts to donate to the fund, Google will quadruple that donation. That's amazing. So beyond donations, the community can also help just by letting other people know that this resource is out there. We need awareness in order for this to best serve the community. So that's it. I hope that you enjoyed this panel, and I hope that you find it inspiring that the security community has champions like Kurt, like Hannah, like Marcia Hofmann and others who have been defending security researchers for so many years, and that you have assets like the Coders' Rights Project and the Security Research Legal Defense Fund, and key supporters like Google, who are out there trying to ensure that good faith security research helps the community. I think we should take pride in knowing the progress that the community has made, that together we're strong, and that we will continue to secure the world.
Can we take one or two questions before dissipating? I'm not sure if that's allowed. Can I get a thumbs up or a thumbs down? I think I have a thumbs up. All right, two thumbs up. Let's take questions. I will say, though, please don't discuss a personal legal situation in your question; privilege does not extend to the DEF CON floor. We have a question. It appears we do not have a microphone for this fine gentleman, so you're going to have to scream. First of all, you sound like you have a really cool job. Second, as to whether we take donations in Bitcoin: I don't see why not, but that's going to have to be discussed at the board level. I don't think we have the mechanism for that right now, but cryptocurrency, I mean, EFF also takes donations in... We do take cryptocurrency donations. So yes, I think eventually. Now, whether we want to take it from a shadowy quasi-person who has been collaborating with people who are possibly under sanction, that's a different matter. But thank you. All right, I think we need to go to make room for the next panel. Thank you all so much. We really appreciate you being here. Thanks everybody. Let's take a group picture. All right.