So it's 9:00 and we're starting on time, which feels just wrong. Why are we starting on time? This is DEF CON. Oh, no. I do this to please Proctor. So we've got a number of people on the panel here. I want to just introduce you to them briefly; they'll get plenty of opportunity to talk. Then I want to give you a little background, and then we're going to take off. So on the far end of the table we have Corbin Souffront from Leviathan Security. Then we've got Jennifer Granick from the ACLU, superhero to hackers around the world. We've got Runa Sandvik from the New York Times. We've got Pablo Breuer — I'm sorry, you know, it's funny, because I've only ever read your name, I've never actually said your last name — from the Donovan Group. Marc Rogers, aka Cyberjunky. And then Chris Krebs from CISA. Okay. So why are we here? DEF CON is about building community, and I'm always looking for another opportunity to either build bridges, or build relationships, or solve problems. Just like we started DEF CON in China to build bridges with hackers in China, because they're just like us, trying to hack shit. But the relationship between researchers wanting to report vulnerabilities to, say, the government is very strange, because there's a lot of apprehension. So is there a way to create a process for people — hackers, researchers — to report vulnerabilities into a CERT, say US-CERT, to help do some good? Now, if you wanted to make money, there's a million ways to monetize vulnerabilities. If you want to get recognition, you compete in bug bounties; you get your name attached to bug reports. This is sort of an edge case, and we don't know how big the edge case is. We don't know how critical it is. We don't know how many people are withholding, not reporting, or maybe monetizing because they have no other avenue to report and do the right thing. But it's interesting enough that we want to find out if there's a way for DEF CON to act as a facilitator and intermediary between people trying to report anonymously and US-CERT. And to explain how DEF CON even got into this conversation, I want to first send it over to Pablo and Marc a little bit. So maybe just give us a quick history of how we got here. Why DEF CON? Yeah, thanks, Jeff. So we all have different lives. We've got personal lives. Moving closer to the microphone. We've all got kind of different lives: personal lives and professional lives. And I've been fortunate enough to be involved as an attendee in DEF CON since DEF CON 4, so it's been a big part of my life. Yeah. And because I've been in the community for a while, people in the community will occasionally reach out to me and say, hey, look, I know this thing, and I don't want to explain how I know this thing, or I'm afraid of the repercussions, but somebody should do something about that. And that's great and that's wonderful — thank you. And that's because they knew — you weren't concealing that you were involved with government. No, no, I'm not. See, I'm an active duty Navy officer. I was young and I needed money. But because I've been in the community for a while, I've got friends and the community's trust. And so myself and my partner at the Donovan Group, JJ Snow, got to thinking, and we said, you know, we can do something about this. We can have a long-term impact. And so we called on Marc, who kindly volunteered his time for about two years to come down and help us kind of scope how this could happen.
How could we link the hacker community to a national CERT? What kind of assurances would the hacker community need? How would those technical implementations happen? And what projects exist that we can leverage — by the way, is there an open source thing that we can contribute to as a show of good faith? And so, Marc, do you want to? So then Marc gets a call. So then Marc heads down. So hopefully most of you know me. I am a hacker. I've been a hacker all my life. I consider this community my family. And so when Pablo reached out and said, this is what we want to build, I immediately had some concerns about how we could do something like this. This is a really interesting and challenging problem, and if there's one thing that hackers like, it's challenging problems. Plus, they gave me access to a really sweet hackerspace that has lots of toys to play with. And so it was a pleasure to kind of come down and play with the toys and help them with this. But we set about trying to work out, you know, what is the safe way that we can do this? And the most critical thing is, how can we foster trust in this? Because even just thinking from my perspective: how can I trust a process to disclose something to a government entity, without there being any potential repercussions for me, if I wanted to ensure a problem is solved? And so we looked at architecting the solution. And I have to say there are still many, many open questions, and that's one of the reasons why we hope you guys can feed into this. Because I think the only way this works is if the community is behind it and the community helps shape it. So then it was supposed to be announced last year. So here we are, a year later. Timelines, you know, things got complicated. But then all of a sudden things started moving really quickly, and there was some technology that was explored and audited, and so now it's starting to become real. And so now I'm getting more and more involved. And it's like, okay, well, where would the data go? Who's going to get these reports? Because there is concern — Pablo can talk about it — there is concern that if it goes to the wrong agency, they're going to sit on the data or they're going to weaponize the data. They're going to hold it from other agencies or industry or manufacturers, whatever. So in my mind it's pretty much, well, it needs to be a civilian organization. It can't be a military organization that gets these reports. Right? And who's historically always gotten these reports? That would be US-CERT. And I have good relations; I know a bit of the bureaucratic insides of DHS. It just made sense that it would be a civilian partner. Well, the other question is — well, if you know the government, the right-hand/left-hand problem — I asked Pablo, what's my biggest risk? Like, DEF CON, what do you see as the biggest risks to DEF CON for doing this? I don't know if you want to tell that story. Yeah, so I had a lot of very interesting conversations. So as a government person calling DEF CON and going, hey, listen, I want to work with you — you kind of get the side eye a little bit. So Jeff asks, what is the concern? I said, well, Jeff, here's my biggest concern. The two biggest concerns I see: from the hacker community, it's that we, as the government, mess this up and burn the bridge again. For DEF CON, the biggest concern is you've got tips from some of the smartest hackers on the planet, and this is going to become possibly the best target ever for foreign nation states.
And so it is something that is done with significant risk, but hopefully in good spirits. And Jeff was kind enough to go, well, let me talk to my folks, let's get together, tell me the story, and I'll take it back. Yeah, so I immediately called Jennifer Granick. I'm like, okay, lawyer, what am I getting myself into? What are the risks here? And Jennifer's been on the defense side forever — I mean, mostly your whole career. Oh, what, microphone? I think you and I met at DEF CON 3. Yeah, it was a really early one. And so I called you up and I explained, this is kind of what we want to do, what are the risks? Do we have to build a whole new data center, or a new location, in case there's a search warrant and they take my web server by accident? You know, how does it work? And so Jennifer was stepping me through the legal concerns a little bit. Yeah, I mean, so first you have the legal concerns for the people who are the reporters — the vulnerability finders and reporters, who are usually the people that I've worked with in the past. But put that set of people aside for a second and think about what the legal risks potentially are for DEF CON. And I think there's, you know, sort of the substantive problem of what information is going to be on the server, and then the kind of procedural legal problem: what happens if somebody shows up with a subpoena or a search warrant and wants this information? And so to some extent, you know, you think about air gapping your machine to make sure that if something does happen to it, and it's taken or taken down, the rest of the stuff is still available and things can still proceed — you don't lose the entirety of DEF CON functionality if the box is taken. And then obviously the technological concerns about, if it is taken, if somebody does seize it, making sure that they're not going to be able to see anything without any opportunity for us to get into court and challenge it. But I think, you know, one of the problems is that search warrant authority goes from police officers in the smallest towns in America all the way up to international entities, with varying levels of sophistication and understanding of the goals of vulnerability disclosure and of how the legal process ought to work. So you've got to be ready for responses from all of those sets of people. And then I think there's the substantive problem, which is what kind of information people might put on this particular box. You know, vulnerability information is going to be very attractive as a target. There could be, you know, people giving sort of information about their coworkers — I think so-and-so is a spy, or something like that. Is that defamation? And, you know, we give that to the government — what's the issue there? And then of course, one of the things that, as a privacy lawyer, we see a lot is contraband information, usually child pornography. So you have to be careful and have a process in place for whether you're going to see the information and how you're going to deal with all of that.
And really, in some ways, the best situation is one where the intermediary — who secures the box, keeps it up and running, and is responsive to the community of people who are going to give information — doesn't actually have the ability to see any of the information on the box at all. That's the best way to protect everybody, I think. Yeah, engineer yourself out of the problem. Absolutely. Yeah. So I want to get to Chris, but I want to first go to Runa, who's — I don't know if "responsible" is the word, but — highly involved in operating this kind of system at the New York Times. Yeah, yeah, and maybe the parallels between this problem and what you face. Sure. So I'm the senior director of information security at the Times; I've been there for about three and a half years. Closer to the microphone. And before that, I worked for the Freedom of the Press Foundation, consulted with different media orgs, and helped set up and support SecureDrop, which is a system that allows people to anonymously submit tips to anyone hosting an instance — in this case, the New York Times. And so back in 2016, we found that there was no way for a source to contact the New York Times, the newsroom, full stop. It was a case where sources had to build relationships with reporters and establish that level of trust and agree on a method of comms before they could communicate, or before the source might be comfortable sharing info. And so we then set up our tips channel, which has Signal and WhatsApp and SecureDrop and a couple of other options, to allow anyone, really, to send anything to the New York Times. We then developed a process internally for who's going to check the submissions. What do they do with the valid submissions, the things that we consider legitimate tips? How do we safely share that with the rest of the newsroom? What do we do when we get info that isn't a tip? What do we do when we get information that really should be sitting with law enforcement, in some cases? And we really built that out and have been running it very successfully now for about three years. And before that, I think back in 2015, the Tow Center for Digital Journalism did a survey of, I think, a dozen media orgs with SecureDrop, to sort of answer the question of: is this system actually helpful? Are you getting legitimate tips? Are you getting valuable data from having the system? And I think back then everyone said yes. There was sort of a "yes, but I'm also getting a lot of crap." And I think it was Gawker that said they actually get far more just crap and memes and images that they don't want to see, but the value they get from having the system far outweighs getting all of that crap in the first place. So then, after talking with everyone, I was thinking, okay, there's something here. There's probably a benefit to the community. The risks can probably be engineered out or managed. Now we need to talk about who could be a possible partner, and I think that's when the conversation started with CERT and NCCIC. And so I want to introduce Chris Krebs, who's going to talk about it from the other perspective, having had this sort of brought to him. So can we be interactive here? Yes. All right. Let me ask a question. Show of hands: has anybody heard of, or does anybody know, what CISA is? Those that work for me don't count. All right, US-CERT. Show of hands. All right, better. So US-CERT is CISA. CISA is the Cybersecurity and Infrastructure Security Agency. We like security so much, it's in our name twice.
So we are new by law as of last November — created out of a portion of the Department of Homeland Security, set up as an operational agency on the level of other agencies. But we are the advocate within the government for the researcher community, the private sector, you know, kind of Team Internet. We have managed vulnerability reporting processes for years now, through the US-CERT portal. So the first principle, though, is always that our preference is for vulnerabilities to be disclosed to the vendor. Understanding that that doesn't always work — the community is not mature enough necessarily across the board, not you guys, but the vendors — there needs to be a backstop. So US-CERT has a capability for reporting vulnerabilities. On the IT side, we've contracted with the federally funded research and development center at Carnegie Mellon University, the Software Engineering Institute; they run the CERT/CC process. So you go there, you enter the information, it's anonymous. As far as I know, we've never had a breach or spillage or any sort of disclosure. I think this year, through June at least, we've already managed or triaged almost 8,000 vulnerabilities. So we think we have a process that works. Now, the challenge here is that I've got a numerator; I don't have a denominator. I don't know what the potential for vulnerability reporting is. There are still, clearly through this conversation, or at least very much potentially through this conversation, some who still have reluctance to engage with the government to have these issues addressed directly, and there needs to be some sort of arbiter. So, you know, what we're thinking — I think Jeff mentioned it up front — is: what are the edge cases out there? What are the impediments or challenges that the community sees in terms of reporting through the standard process? Again, we think this is a successful — maybe I'm throwing a random number out — 95% solution. What does it take to get those 5%? What are the concerns? How do we do this in a way — you know, we talk about risks. Yes, this could be targeted by foreign intelligence services, but other things could also be put in through the process. You know, I think about the junk, the memes, the other sort of collateral that could come through this process that we don't necessarily want to work with. So what I'm interested in is figuring out, A, what are those kinds of edge cases — or maybe the best way to put it is, you know, we can't take an approach of "I'm from the government, I'm here to help, we've got the answer for you." I need to have more of a customer service mindset. So we think we have a product. If it's not answering 99.9% of the problem set, what do we do to get over that final hump? And so I think SecureDrop is a conversation that's useful in terms of closing out that top end, those more potentially highly valuable vulnerabilities. But how do we get there? And how do we do it in a way that manages both the engineering piece and some of the administrative, infrastructure stuff on my side, the people side? So that's kind of top of mind, at least for me, right now. Okay, so I think maybe, Pablo, you want to mention that some of that sweet government money was spent on an audit? Yeah, so when we did the initial meetings, we started to take a look at existing architectures. We found the Freedom of the Press Foundation's SecureDrop. It was open source. It had been tested. It had been in use.
And so we were able to convince some government partners to fund a code review. We wanted to be very transparent about this. We reached out to Freedom of the Press, who cautiously, nervously took my call initially, until they were sure that we were being transparent. And we got linked up with their dev group. We paid for a code review that we asked Leviathan Security to do. We shared the report with Freedom of the Press and the dev group. We said, here are the things that we found, here's how much money we've got to contribute to dev fixes — what would be your priorities? And so between Leviathan and the Freedom of the Press development group, they set out the priorities. They filled us in on some improvements they had coming that they hadn't announced yet, and they said, don't fix those, we've already got fixes for those; we'll take the other fixes. And so Leviathan went through and worked right through the GitHub, and all of the fixes were submitted and have now been accepted into the main branch and are available for everybody else to use through the official SecureDrop repo. So all the way on my left, your right, is Corbin, who led that effort at Leviathan to do the security audit. And I thought it'd be interesting just to ask: what's the quality? How much confidence? What do you think of the technology? Yeah, so it was a code review of everything, starting from the infrastructure. SecureDrop has to be deployed by individual organizations; Freedom of the Press doesn't deploy it, it's a product. So the New York Times has to spin up their own servers and everything, and so does every other organization that uses it. So we went through the documentation on whether they're recommending sane deployments, because IT at, say, a news organization may not be as great as at a regular company. So we went through that. We looked through the cryptography to make sure that things being submitted are actually being submitted securely — is there a risk of someone being de-anonymized or man-in-the-middled? All of that sort of effort. Overall the code base — it's an open source project and it's used by important people — was pretty well put together. We didn't come across any remote execution or anything critical like that. And as Pablo mentioned, after we submitted the report of our findings to SecureDrop — or sorry, to Freedom of the Press — we then went and made some contributions back. The biggest contributions were around how they could securely delete files, because you don't want someone to come in with a subpoena, take your servers, and just forensically grab everything off the disk and figure out what was submitted. And then we are also working with them to set up a prototype to provide end-to-end encryption for file uploads. Right now the file uploads are submitted unencrypted via Tor and then encrypted in RAM before being written to disk. So we helped work on potentially making a browser plugin that would encrypt the files before they're uploaded, via JavaScript in the browser plugin. One of the interesting things about that is a browser plugin can execute the signed JavaScript without forcing you to enable scripts in the browser. So it's a way you can have JavaScript encryption without risking someone man-in-the-middling your connection and throwing a malicious JavaScript encryption thing up there. And Freedom of the Press wants to work in the future to potentially get a browser plugin that can do that — maybe built into the Tor Browser bundle eventually.
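To make that encrypt-before-upload idea concrete, here is a minimal sketch of client-side hybrid encryption, assuming the submitter already has the recipient's public key. It illustrates the concept only — it is not the proposed browser plugin or SecureDrop's actual code (SecureDrop uses GPG); the function name and the RSA-OAEP-plus-Fernet construction from the Python "cryptography" package are stand-ins chosen for brevity.

```python
# Sketch only: encrypt a submission to the recipient's public key before it
# is ever uploaded, so a relay in the middle only ever handles opaque bytes.
# SecureDrop itself uses GPG; RSA-OAEP + Fernet here are illustrative stand-ins.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_submission(plaintext: bytes, recipient_pem: bytes) -> dict:
    """Hybrid-encrypt: a random symmetric key protects the bulk data,
    and the recipient's public key protects that symmetric key."""
    recipient_key = serialization.load_pem_public_key(recipient_pem)
    file_key = Fernet.generate_key()
    return {
        "wrapped_key": recipient_key.encrypt(file_key, OAEP),
        "ciphertext": Fernet(file_key).encrypt(plaintext),
    }

if __name__ == "__main__":
    # Hypothetical usage: in practice the recipient key would ship pinned in
    # the plugin; a throwaway pair here just demonstrates the round trip.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    blob = encrypt_submission(b"report: heap overflow in example parser", pem)
    file_key = private_key.decrypt(blob["wrapped_key"], OAEP)
    print(Fernet(file_key).decrypt(blob["ciphertext"]))
```

The point of the design Corbin describes is that the private half of the key never lives on the relay, so seizing or subpoenaing that box yields nothing readable.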
But those are long-term plans, and they're kind of looking at what would be the best solution for end-to-end encryption, because it would be worse if it were something that made it less secure. So that's sort of what we were working on as far as that goes. Okay, so I want to spend a little bit of time getting questions from the audience, because the point of this was pretty much a community consultation, but we had to get you up to speed on what we've been thinking. So I don't know if anybody has anything else to say. Should we go to questions? Anybody have anything? Want to do questions? Let's do questions. Do we have a microphone somewhere? Team Goon? No. I don't think the microphones got put in this early. So if you do have a question, stand up and kind of shout it, and then we'll repeat it so everybody else can hear it and it gets on the recording. So raise your hand. Anybody have a question? If not — okay, this gentleman over here. So the question was: DEF CON is sort of international now, with China, and US-CERT is international, getting reports from all around the world — is there any potential future for either sharing or working with, say, European CERTs or others outside the US? You know, Pablo's nodding his head, and I was thinking: it will, absolutely. I mean, there are CERT-to-CERT relationships across the world. We don't differentiate — again, if you look at some of the norms that have been agreed to across countries, CERTs are that kind of DMZ, that space that is protecting the overall ecosystem. So, you know, at first blush, at least right now — again, still working through some of the thornier policy and legal questions — I think that's ultimately where it has to go. Because we're not just talking about US vendors; we're talking about a global community of companies that will need to coordinate these things and do handoffs and things of that nature. But, again getting into the thornier policy principles, you have to think through what the implications are of sharing with certain states or countries that may not share our same system of values, and, you know, what our triggers and thresholds are to walk through that process. But I'll pass that down the line. And I'll say, in an ideal world — the hacker community is a global community, it's not just a US community — whatever ends up being built has to support that global community. And yes, that means there are significant challenges ahead in terms of how we handle certain things, how things get disclosed, etc. But that absolutely has to be a priority. To go to one piece here: you know, my ultimate concern is, when we receive vulnerabilities, we do not turn around and share those with the intelligence community for exploitation. We have a bias, a heavy preference, almost an argumentative posture within the federal government, toward disclosure. And that will be one of the considerations when you think about some of the nations and states that we engage with: how are they going to use this? What are they going to do with this? And what, again, is the arbiter? So maybe that's one of the pieces to think through: can DEF CON play a role in facilitating, or at least overseeing, some of the engagement? Okay, let's do another question. Okay, this gentleman in the front and then the gentleman in the back. So I want to try to summarize the question. He participates in bug bounty programs as a tester.
And he says, before he would even get to the point of reporting, he generally wants to make sure he can test properly on the correct server. And so could this system, or something similar, help in the future enable testing of government systems, to sort of make sure, before you report, that it's feasible? So, I mean, that's not contemplated in what we're talking about. I think if you are a tester, you don't have a problem being identified when reporting, and so you'd probably go through all the normal Hack the Pentagon or other existing programs. And if you have a problem with a specific technology used by the government, maybe you'd be able to test that technology somewhere else, because we're not necessarily interested in the threshold of "you have to reproduce it so we give you money." It's more like we want to know if there's a problem. Is that it? What I was going to say is: bug bounty programs have a very clear set of rules of engagement, and there is an implicit invitation to come in and look at that infrastructure and find stuff. This is to cover those edge cases. For example, there are edge cases where people, just through normal operation, will find vulnerabilities or will find hints of vulnerabilities. And we're definitely not inviting people to go and hack infrastructure to find vulnerabilities that are not covered by bug bounty programs. What we're saying is: if you find something or if you see something, we want to give you a channel so that you can disclose it and so that it can be triaged. Hey, Jennifer, I can see you chomping. I'll just add that, you know, the legally riskiest part of vulnerability disclosure is research, because of the legal rules that either arguably or definitely limit what you can do with somebody else's box or somebody else's data. There are other means out there to report vulnerability information, but one of the things that anonymity can do is protect good-faith testers by allowing them to report without actually revealing their identity. And I would say that, you know, it can be a good idea to consult with a lawyer if you're planning something that is potentially legally risky with somebody else's machine or somebody else's data. But it is going to be risky, and ultimately the answer really can't always be no, right? There are going to be times where, unfortunately, people are going to have to take on that legal risk. It's good to do it with the benefit of a good consultation ahead of time. But I think, you know, one of the goals of the project is to be able to remove that legal risk, which is a disincentive for people to report things that really should be reported. Can I do a show of hands? Well, this might de-anonymize you. But anybody in the audience — do you think a system like this would be remotely useful, or can you see its use or utility? Show of hands. Well, that's actually more than I thought. So there's a quick question I want to ask as well, which is: I've been a security researcher for over 20 years, and I have run into scenarios where I have found things just through browsing the internet, interacting with an application, and I've struggled to disclose them because the company has no disclosure policy, the company doesn't have a bug bounty program — in some cases it was decades before bug bounty programs existed.
How many of you in the audience have run into issues — a potential vulnerability, etc. — but you haven't been able to disclose because there is no process, or because there is concern about how you would have disclosed it? Can I put a second-order question on top of that, which is: what would be the impediment to reporting it through the standard US-CERT process? Is it that you just don't like PGP? I mean, what I'm trying to get at is: what are the use cases where the standard US government CISA process, the US-CERT process, isn't addressing the requirement? So, going back a few years, that process wasn't in place. But the other issue is, there is always concern, when you find something, about how various entities are going to react. I have been legally pursued by companies for finding good-faith vulnerabilities, and so protection would encourage me to come forward more. This is essentially the Kaminsky problem. Dan Kaminsky, when he found his famous DNS bug, it took him nine months of his life to coordinate with all the other affected parties, and he felt really good and he made great change, and after that he said: never again. Like, I'm not doing that twice. And so sometimes people find a bug and they want to drop it and walk away and get on with their life, because they can't commit to a certain level. So if there's a problem — I think you've got a comment, and then we've got to go to the next question. Can we get the vote, though? Oh yeah, the vote. Yeah, yeah — Marc's question. Who's had trouble with vulnerability reporting, where you had something, you wanted to contact someone and reveal it, but you just had difficulty figuring out how to do it, or doing it? So who's had that? Okay, thank you. Oh, Pablo, yes. Okay, did you have a comment? Let's go to the next question. No, I just wanted to clarify the last question. If the question was about security testing the SecureDrop instance: first of all, the software is up there and it's open source. Second of all, it's not going to be hosted by the government. We're working with Jeff, and I think we're going to be able to get DEF CON to host the servers, but I would be very interested in seeing if we could sponsor maybe a black badge competition in the following years where we can ask the community — oh yeah, hack the system and help improve it. Hack the system and help improve it. We want to make sure that the system is secure not just for the US government instance, but also for the press instances. Yeah. Okay, wait, we have to go to this gentleman's question. Maybe try to summarize that — do you want to? Yeah, so the question was: this boils down to trust, and the US government has frankly done an atrocious job of working with researchers, working with the press — you know, going after people, attacking the Tor network. So how do we work on that? How do we work on the outside-in trust and the inside-out trust? Hopefully this is a step. We're trying to be as transparent as possible. It's going to be tentative at first. Not everybody's going to trust it. There are going to be issues. Somebody's going to find an issue with SecureDrop; hopefully we get it fixed. Somebody's going to submit a bug and feel that it wasn't handled correctly. There are going to be missteps, absolutely. I think the intent and the good faith need to be there. We need to be transparent. So one of the main reasons that I got involved in this is because I believe that trust is the critical element that will make this work.
And I want to see you rip this thing apart and find issues with it and point out the flaws, so that we can work through them. And it may take iterations to get to a really good, trustworthy product. But by engaging with the community and having the community do that work, I have confidence that we can get there. What I'll add here is that we're not going to engineer trust, and it's not a single-solution approach. I think there was actually a really good conversation yesterday in one of the Hewlett suites about some of the legal issues. DOJ plays a role; they need more guidance and more clarity on how the CFAA comes into play here. But in terms of where I sit: I am not the IC. I am not law enforcement. I am the private sector's advocate within the federal government. I think we have a pretty good track record, at least from where I sit and what I've been told, unless they're lying to my face. I think we can manage this. I think we have a good ability here. But trust is a two-way street. So obviously, based on this conversation, based on the feedback today, that trust isn't where it needs to be. But this is a maturing discipline, it's a maturing conversation, and there's still obviously a lot of work left to do. Okay, we have time for one more question. You've got to make some noise so I can find you. Okay, there you go. If you want to run up and ask the question so it's closer to us, then we'll repeat it for everybody. Yeah. Yeah. Yeah. Come on down. Yeah, so, interesting question. Do you want me to? Yeah, you and Runa have a go. Okay, I don't know the answer to this, but it's a great question. Basically, a potential summary of this project is witness protection for hackers. And the federal government has experience with witness protection in other contexts — for example, in the law enforcement context. For this project, would it be beneficial, and what would we want to do, to take the knowledge and lessons from witness protection — legally and otherwise, security-wise — in those other contexts, and how might we apply that knowledge to this particular project? Now you've asked the question — that's not fair. Well, so, unlike witness protection, we're not dealing with a physical person; it's information, and so it would be more about protecting, separating the person from their submission, you know, to prevent de-anonymization — the segregation. And I think that's where the idea is of DEF CON operating the servers in the middle. We get the Tor connection to us, the magic happens, then there's a Tor connection to CERT. So CERT never sees or can discern the IPs — the, you know, exit node and back and forth. There are two separate Tor instances running. And so there is some operational sophistication to it, but as Chris says, we're not going to engineer trust; there are a lot of things we can do to reduce the risk, though, like: files are deleted. So let's say an analyst at CERT gets a file — well, then they download the file and they delete it off the server. So if the server does get owned, you only get what's there since the last time an analyst downloaded, right? There are operational things we can do to make it not a juicy target, right? There's not a lot of stuff there; there's like six hours of things there. And I think that's why I like consulting with Runa, because she deals with this with real people, in sketchy countries where life safety is at risk, and they're all trying to do the right thing, but at significant risk.
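A minimal sketch of that download-then-delete retention idea follows, assuming submissions sit in a spool directory until an analyst fetches them. The function names are hypothetical, and the overwrite pass is best-effort only — on journaling filesystems and SSDs, overwriting a file does not guarantee the underlying blocks are gone, which is why secure deletion needs more care in practice than this illustration shows.

```python
# Sketch only: keep the server "not a juicy target" by handing each queued
# submission to the analyst's fetch step and then destroying the local copy.
# Hypothetical names; a couple of overwrite passes are best-effort, not a
# guarantee on journaling filesystems or SSDs.
import os
import secrets

def shred(path: str, passes: int = 2) -> None:
    """Overwrite a file with random bytes, flush to disk, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite past OS caches
    os.remove(path)

def drain_spool(spool_dir: str, fetch) -> None:
    """Hand every pending submission to `fetch` (the analyst's download step),
    then shred it, so a later seizure finds only items queued since the last run."""
    for name in sorted(os.listdir(spool_dir)):
        path = os.path.join(spool_dir, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                fetch(name, f.read())
            shred(path)
```

The operational effect described on the panel is the same: whatever is on the box at any moment is only what arrived since the last analyst pull, so a seizure recovers hours of material rather than an archive.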
Wait — I actually wanted to add to that question, because I think that's a great question, and I think a challenge that might pop up in this case, that we don't have in the media context, is this: if a source submits something to the New York Times, our role is to report on the content. Our role is to verify whether it is accurate and then report on the content. We do not work to de-anonymize the source. We do not work to figure out who they are. You don't try to put the source in context; you just try to take their information and validate it. Correct. So we take the information. We do what we can to validate it, either by communicating with the source through SecureDrop or through some other channel. If we can't, there's then a process in which the reporter would have to get the information verified through other channels. But our role is never to try to de-anonymize the source, and I think that might be a concern in this context: if you submit something through this process, would the government then make an effort to figure out who sent the information? Yeah, just to add to this: one of the goals of the assessment that we did was to look at the attack surface — depending on where an actor is sitting and trying to de-anonymize the source, how much damage could they do? Like, if someone owns the DEF CON server and has an implant sitting on there reading everything that comes in, we looked at where they would actually be able to see the information and what they would be able to de-anonymize. That was also why we looked into, after you've pulled the data off the server and the journalist has looked at it, how you actually get it securely deleted, so that even if someone comes in and takes the server physically and tries to analyze what was on there, we prevent anything from happening — like, as soon as it's read by a journalist, delete all evidence. So hopefully it doesn't get to the point of having to deal with any kind of witness protection thing; it's reported and everything's deleted, and then that's it for the source and they don't have to look at it again. And I'll tell you honestly, one of my plans, if there is a little engineering to do, is to make sure that DEF CON can honestly answer a subpoena request and say: no, we don't have the keys, we can't tell you what's on the server — but, FBI, you know, US-CERT can. So, you part of the government, go talk to that part of the government, and I'm going to be having a coffee. The idea is to get us out of the middle if there's an internal dispute — and I'm sorry, Chris, but you might be in the middle of that. So we're kind of coming to the end of the time; I want to ask one more question of the audience. So after hearing this — since this is our first ever sort of consultation — obviously there are a lot more discussions to be had, but by show of hands, who thinks DEF CON should pursue this? Okay, who thinks this is the most catastrophic disaster, a threat to DEF CON? Okay, well, a threat to DEF CON — yeah, there are many threats to DEF CON. You're all a threat to DEF CON. All right, so if anybody has any concluding remarks, I think we're done.
I just have one little comment to kind of sum up what I think I hear from people. I see a lot of support, obviously, but what I think I hear from people in terms of concerns is that there's a heavy reliance on technology to do the protection — the hard work of protecting the community of people who are going to be reporting. And what I'm hearing in the comments where people have concerns is that technology — you guys know better than anybody else that technology can fail — and there are other things in terms of trust: processes, relationships inside the government, government to government, you know, having more of an appreciation for the importance of research and less of a punitive approach by law enforcement. These other non-technical, human, legal, policy, relationship parts are the areas where people are really feeling some concern and want to see made stronger in order for a project like this to really, actually be trustworthy and beneficial to everybody who's involved. That's what I'm hearing. Yeah, and I'd agree with that. I think it's been a good conversation, helpful feedback. I mean, I think if we look at this, the US government is littered with a graveyard of good ideas, and so we need to manage expectations, do this in a way that maybe is pilot-based, but let's really focus on some of the use cases. I think you guys know this stuff really well, so just tell us: what are some of the examples or hypotheticals where this might be useful? I mean, obviously in implementation a thousand more will come up that we never anticipated, but it's always helpful to kind of scope the issue and start small and spiral it up. All right, well, thank you for participating. We're around — we'll be here all week to answer questions — and thank you very much.