Hello everybody. Welcome back. Before we start with this session, there's some information that some of you might find interesting. Some of you need certificates of attendance for claiming expenses back or for your internal audit processes. As a new feature, if you go to your IACR membership page and click around, there is a certificate of attendance already produced for you. You don't have to ask anybody for it; you've already got it on your IACR page. Thank you for that. Over to the session chair.

Welcome back to the afternoon session. I hope everybody enjoyed lunch. This afternoon we will have two consecutive sessions. The first session is going to be about policy, and I'm very happy to have Jennifer Granick talking about the crypto wars.

Thank you for having me here. It is an honor to be in this illustrious community, and I appreciate so much the work that all of you are doing to keep us safe and secure. So thank you for having me. I am the ACLU's Surveillance and Cybersecurity Counsel, I used to run the Stanford Center for Internet and Society, and I am a board member for Let's Encrypt, which is, as you know, the non-profit that is now the largest certificate authority. I see a number of my Let's Encrypt friends out there, so hello and thank you.

This area is very important to me as a policy matter, and it is basically what my work at the ACLU entails. I want to come out at the front and state what my prejudices are. Even though I'm a lawyer, my prejudice is that I don't believe the law alone can provide enough protection to ensure civil liberties, privacy, security and human rights. I think that technology is required in order to achieve that goal, and that is, for me, the political or policy reason why I care so much about the crypto wars. This is a pretty sophisticated audience and I don't have that much time, so I don't want to spend it going over things that people already know. I'm going to assume some knowledge on the part of this audience of the history of the crypto wars through the 90s and of the Apple versus FBI litigation, and I'm going to focus specifically on today, and specifically on the US policy debate, as a US lawyer, because that is what I'm most familiar with. I'm aiming to take some questions towards the end, and if people are interested, I can talk a little more about what's going on in countries other than the United States.

So the cryptography policy debate in the United States today is quite different than it was two years ago. Two years ago I felt pretty optimistic. After the Snowden revelations, we saw this very successful push on the part of companies providing communication services to individuals to encrypt their products and encrypt their networks. We saw Apple's victory in 2016 over the FBI's effort to force the company to create new software to defeat iPhone security. Facebook turned on end-to-end encryption in WhatsApp by default in 2016. And we're still seeing progress in encryption: Facebook just announced that it's going to provide end-to-end encryption on its other messaging products, Instagram and Messenger. And Apple is continuing to innovate with the iPhone, making it harder and harder to break in, even as forensic companies like Cellebrite are engaged in that kind of arms race, building tools to crack in.
But despite these optimistic signs, I think today the conversation looks a lot different. We are a long way from the time in 2017 when the Australian Prime Minister declared, to general ridicule, that the laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia. And people laughed, rightly so. But today Australia has a law that gives its intelligence and law enforcement agencies the ability to present a technical capability notice, which would require providers subject to that law to create a new interception capability if it's needed in order to conduct wiretapping. We don't know exactly what this law really means right now. The regulations are still being negotiated and squabbled over and haven't really gone into effect yet, so we don't know how the provision is going to be implemented. But Australia has this law, and the UK has a law with some similar properties. And these countries, particularly the Five Eyes countries (the UK, US, Australia, New Zealand and Canada), have gotten together and presented a concerted, uniform effort to raise the law enforcement and intelligence problems with strong encryption, basically arguing that the obstacles that strong encryption creates for law enforcement and counter-terrorism are a real danger to the state's ability to execute its public safety mission. And while the Five Eyes countries haven't actually done anything, their statement does contain a veiled threat: if you, the companies, aren't going to do anything, then "should governments continue to encounter impediments to lawful access to information," it may be necessary to "pursue technological, enforcement, legislative or other measures to achieve lawful access solutions." So this is basically a shot across the bow, a warning message to the tech companies that provide secure encryption to individuals. And it comes at a time when these companies have never been under as much public scrutiny as they are now, and have never suffered the kind of negative public relations they are suffering now. So it comes at a time of particular policy weakness for the communications media that so many people use. Maybe not so many of us in this audience, but so many people around the world: we may all be Signal or Wire users, but WhatsApp has a billion users, and what ends up happening there is very impactful.

So I think you can see in the Five Eyes statement a couple of unspoken assumptions that I want to pull out. The main assumption is a certain presumption of trust: that these western, English-speaking, common law countries have governments that can be trusted, that their access will be lawful and proportionate and for a public safety purpose. These are human-rights-respecting countries; they have actual surveillance laws that are meant to constrain the power of government. So when an agent of one of these governments comes forward with legal process, the unspoken assumption is that it's going to be for a legitimate reason. I don't think that unspoken assumption should be assumed.
We have seen from the Snowden revelations, and as an ACLU lawyer I can tell you of many, many more examples, that even the best of these governments falls short. And I personally do believe that the United States has among the strongest surveillance and privacy laws of any country in the world. But our laws are not enough: our government disregards them, the laws are too weak, there are all these workarounds, and nobody knows what the laws are. Just one small example. When law enforcement comes forward and says, we want lawful access pursuant to court order, the thing I hear as an American and as a lawyer is: not pursuant to a warrant, which is basically our highest form of legal protection for private information. And I think they say that because we are in litigation in a number of cases; we've appeared as friend of the court in cases where the Department of Justice is arguing that email is not protected by a warrant requirement, that they are able to get email without getting a warrant first. So on the one hand you have this disingenuous "warrant-proof" rhetoric, which is one of the law enforcement talking points, and on the other hand you have law enforcement saying: we don't even need a warrant. That really doesn't give you a lot of faith in the assumption that this power is going to be exercised with restraint, and only with the strongest protections to ensure that it is not abused or used unnecessarily.

Nevertheless, domestically, Attorney General William Barr has made this crypto fight one of his signature issues for his tenure at the Department of Justice. He has really pushed it forward. But I want to point out, and make clear, that Attorney General Barr does not represent the views of the United States government more generally. I think this is pretty obvious when you think about it, but it's worth saying as part of the policy debate: different agencies inside the United States government feel very differently about the value of strong encryption, depending on where you work. You can imagine: the State Department is interested in human rights; the Commerce Department is interested in innovation and economic competition; agencies like the Federal Trade Commission are concerned about consumer privacy. Even within the Department of Homeland Security, you have an agency like the Cybersecurity and Infrastructure Security Agency, which has stressed the importance of encrypting sensitive data, but you also have Immigration and Customs Enforcement and the Secret Service, which feel the need to have this information. The NSA itself has different equities than law enforcement agencies, because the NSA operates under different constraints. So these government agencies are not in lock step, and it's important to be aware of that. But in terms of public presentation, because Barr has taken this issue front and center, it certainly looks like a real push by this administration, something he is putting a lot of time and his personal credibility on the line for. And the United States Senate seems even more strident, and more united across party lines against strong end-to-end encryption, than the executive branch.
So just last month there was a hearing about encryption before the Senate Judiciary Committee, with witnesses from Apple and from Facebook. They testified, and the senators were extremely critical, on both sides of the aisle. They were not very forgiving of companies that implement strong encryption, and they were pretty strident about it. Senator Graham basically told the company witnesses: we're giving you a year, and you should either do it or we're going to do it for you; so, you know, a threat of legislation. Senator Dianne Feinstein, my senator from California, said she has never been more exercised about, or committed to, an issue than this one. And Feinstein mentioned that she might bring back a bill she had introduced in 2016 with Senator Burr, which would basically outlaw unbreakable encryption and mandate that companies providing encryption also be able to disclose plaintext to law enforcement upon receipt of a court order. That bill went nowhere then, but as I said, times have really changed, in terms of the public perception of companies and this renewed government interest in doing something about the problem of law enforcement access to plaintext.

Obviously this is not uniform. Some members of the House of Representatives have taken a different tack. Representative Zoe Lofgren, also my representative, from my district in California, together with a bipartisan set of House members, introduced a bill, which also went nowhere, that would prohibit intelligence and law enforcement agencies from forcing companies to insert encryption backdoors into their products and services. As I said, that bill went nowhere; Congress is pretty dysfunctional, I think I can fairly say. So for people who find the status quo more satisfying than what a legislative future might bring, we have a little window of time here while Congress is flailing about. But I don't think we can count on that status quo continuing, because some things have changed since two years ago. In my view there are three important changes.

One is that, in the discussion of strong encryption, there has been a shift in focus from counter-terrorism to the interdiction of child sexual abuse material, or CSAM, which is the more inclusive term today for child pornography, and to other kinds of online child abuse, such as predators grooming children for abuse and for attacks.

Two, the conversation has evolved into bite-sized pieces that are easier to chew. The frontal attack on encryption as a whole, and the idea that we need backdoors, is something that backdoor advocates have realized is not a persuasive rhetorical strategy; it got them nowhere, because people began to understand encryption as something we need to protect our privacy, security and human rights, and backdoors were uniformly seen as illegitimate or improper in some way. So the move now is away from talking about backdoors, or about encryption as a whole, and towards dividing the world up into different types of products and services. I'll talk more about this in a minute, but for example: treating on-device encryption for data at rest separately from encryption for data in motion.
And then the third thing that's different is that, to some extent, a number of important government thinkers are realizing that for the near future, and maybe forever, end-to-end encryption is here to stay. And if that's true, what are we going to do about it? There's been a more creative conversation moving towards the question: if we have end-to-end encryption and no reliable promise that we're always going to be able to access plaintext, what else can we do to (a) interdict abuse or crime online and (b) investigate it after the fact if it happens? And I think this is a good thing. But I also think we need to be very careful, because these recommendations about how to manage and punish online abuse may include other technologies that undermine communication security in ways other than breaking the encryption. A few of these, and I'll talk more about them, are law enforcement hacking, client-side scanning for unwanted or illegal material, things like defeating passcode rate limits, and a much, much heavier dependence on metadata. One of the things I want people to take away from this talk is that, contemporaneous with the end-to-end debate, which is still ongoing, and with explaining why encryption is so important to people's lives, security experts need to think more broadly about the risks and rewards of these other ways of undermining communication security, because these alternatives or fail-safes offered in the end-to-end debate may end up being very bad and very dangerous for civil liberties and human rights. Not all of these solutions are actually going to be good for society. We need to get involved in that debate and begin to think of communication security as extending beyond encryption to other things, such as metadata collection and retention.

I also want to point out another thing that's different: the United States is considering these issues against an international backdrop that's changing even more rapidly than the policy debate here. Because WhatsApp, Facebook and these other companies are global companies, we need to understand that pressures from outside are going to affect what we are able to enjoy here in the United States. And that's not just with regard to criminal matters: other countries have put pressure on these products over things like disinformation, fake news and hate speech, all of which are lawful in the United States but not lawful elsewhere. So policies or practices that platforms put into place to address those issues, in response to government regulation or the threat of it in other countries, are going to have an impact on Americans here, despite the fact that our law doesn't provide for that.

So I want to give a little background on the current state of United States law, just to give people a sense of where we are. In short, there is currently no law in the United States that imposes design mandates or that requires weak encryption for software or internet platforms.
It's a different story on the phone network. But technical assistance is nevertheless an issue that the DOJ and the FBI litigate under particular statutory provisions of U.S. law, and usually in secret. The Department of Justice brings cases in which it tries to force platforms to make changes to their products in order to be able to provide plaintext. This is based on statutory provisions in the Wiretap Act and also the All Writs Act, which, as you know, was at issue in the Apple versus FBI case. We, the American public, or any public, don't know what these cases are, because they're litigated in secret; we only find out when something leaks. Right now the ACLU, along with the Electronic Frontier Foundation, has a lawsuit in the Ninth Circuit in which we are seeking to unseal a court opinion that decided the FBI could not force Facebook to alter its voice call implementation in Facebook Messenger. But we don't know what the government was asking for, we don't know why the court ruled against it, we don't know in what other cases this is happening, and we don't know how often. These things are being battled out in secret, and it's very hard to make responsible policy when we don't even have a sense of what's going on right now.

Because forcing technical assistance is not working in every case, the Attorney General and law enforcement now have a few options. One is to change the law to make explicit that technical assistance is required, as the UK and Australia have done. Another is to bring public pressure, or moral suasion, to bear on the companies, to get them to voluntarily ensure that they can provide plaintext upon request. And the third is to accept the status quo and come up with other ways to meet law enforcement needs. The answer the government has found is sort of all of the above: we're going to try all of these.

So let's talk specifically about how they're doing this. First, as I said, there's been a change in focus from counter-terrorism to CSAM, and I think they've found that this is a much more persuasive ground with the public than terrorism was. At least here in the United States, terrorism is exceedingly rare, whereas child sexual abuse is a huge problem. There's been a whole series of New York Times articles about different kinds of child abuse online; if you're interested in this field, I really recommend you read those articles. I think it's very important to know where the risks for security lie. This has been a much more persuasive argument, and I'm not being purely cynical about it: I was a criminal defense attorney for many years, I've seen a lot of child pornography cases, and I believe the internet has exacerbated the problem. But I also think some cynicism is warranted, because we are not doing everything we could, or as much as we could, to deal with the problem of the sexual abuse of children. For example, the money that has been allocated to address the problem hasn't been fully spent, and the Department of Justice is not doing things it's obligated to do under the law in terms of tracking and reporting. So we have other tools available, but encryption is the one being singled out as requiring something in this particular area.

The second thing I mentioned was breaking things up into bite-sized pieces.
The first time I really saw this as a concrete policy suggestion, to divide device encryption and its challenges from transit encryption, was in the very influential Carnegie report, which was mentioned in the recent Senate Judiciary Committee encryption hearing last month by New York's District Attorney Cyrus Vance. He pointed out that we can do something with these devices. The authors of the report are brilliant, luminary people: Ron Rivest, Professor Susan Landau, former FBI General Counsel Jim Baker. And the idea here is in one sense the right one: different devices have different cybersecurity risk profiles, and maybe there's a way to deal with the low-hanging fruit of device security while leaving transit security alone, because that's a much harder problem and poses different cybersecurity risks. If, for example, compromising device encryption requires you to have the device in hand, that may be a natural physical-world friction or limitation that makes for a more acceptable privacy-security trade-off than some others. But to some extent, I think we've seen that this has interfered with the unity the tech companies initially had in the Apple versus FBI fight, as people get picked off: let's separate the device people from the transit people, and each wants to save itself. I think that's a big mistake. In fact, in the hearing before the Senate Judiciary Committee, the Facebook witness said: well, the device issue is separate from us; you should look at them, at Apple, because that's more promising. And I think that's a real problem.

And then the third approach is looking to other sources to meet law enforcement needs, in order to find a way forward. I want to say one thing about finding a way forward, and then I want to talk a tiny bit about other ideas for satisfying law enforcement. I think I have about 10 minutes left, so if people think they're going to have questions, I'm definitely going to leave enough time; feel free to start lining up so we can hear what you have to say.

The first thing about finding a way forward: we've seen a number of reports that seek one. The Carnegie report I mentioned; a National Academies of Sciences report; an EastWest Institute report. All of these reports have brilliant, illustrious, thoughtful, well-meaning, just great people on them, doing valuable and important intellectual work. But I think all of these efforts suffer from the same underlying problem, which is twofold. One is the trust problem: I don't think we can trust the United States government with only law as the protector of privacy and security, and the reports have tended to be very U.S.-centric, so they don't take into account the trust problems that citizens have with other governments. The second is the question of what it is we're trying to do. If the goal is to ensure reliable access to plaintext, that's an assumption that that is the goal we should be seeking. What I think is that technology has assured governments access to more information about us than has ever been available before, and in doing so has taken that privacy away from us.
I think these reports have to hold at least equally, or more so, the question of what we can do to ensure that information about us is not used indiscriminately, abused, or directed towards human rights abuses or civil liberties deprivations; to ensure that it is used properly. If you ask the question that way, you get totally different answers, and a whole different sense of what the risks and rewards are. Ultimately I think these reports are difficult because they always come up with very thoughtful questions to which nobody knows the answers, and the answers are not forthcoming. Yes, if we knew those things, that would be great, but we don't; we don't know these risks.

All right, so I want to talk a little about getting over it and accepting it. Jim Baker, the former FBI general counsel, who was basically the FBI's lawyer during the Apple versus FBI fight, wrote a piece in October of 2019. I don't want to say he exactly changed his mind, but he changed his mind. He said: okay, encryption's here to stay, now we have to deal with it. Some of that was based on a political calculus: all these years Congress hasn't done anything, and it's not likely to do anything right now. And some of it was based on a security calculus. I think one influential idea for him was that 5G networks are about to be rolled out, and there's no confidence or assurance that the equipment those networks will run on is trustworthy. How do you protect yourself on a network where you have zero trust and you can't rely on anything? So I think he came to understand that current technological developments, the current situation, require strong encryption, and that backdoors are just not going to work.

The question then is: what are we going to do? Alex Stamos, the former Facebook CSO, is now at Stanford and runs the Stanford Internet Observatory. From his experience at Facebook, Alex has been asking: okay, let's assume an end-to-end world; how do you fight abuse online? Both as a preventative question and as a question of how you find out afterwards that the law has been broken. The goal here is to say: the fact that we have end-to-end encryption doesn't mean we have to throw up our hands on safety and just accept whatever happens; there are still things we can do. Now, obviously, law enforcement issues are not the only ones: there's hate speech, there's harassment, there are trolls, false accounts, disinformation, and different suggested remedies apply more or less to each of them. The point I want to make, though, is this. I've said this is a step forward: okay, let's accept that encryption is here, we still want to live in a beautiful world, so let's keep making it more beautiful; what are we going to do? But we also have to be very careful, particularly given the fervor of the political debate. As you can tell, I don't think we have the upper hand in the end-to-end encryption policy debate right now. One thing we need to be careful about is not to trade away things, or accept compromises, that we're going to be sorry about later, because these compromises also create privacy and security harms.
Let me put it this way: cryptography and cryptanalysis is an exciting, wonderful intellectual and academic field, but it has a real-world political valence, and you all know that; that's why you're all here. And at least one of the reasons it has that valence is communication security. So I think it behooves us all to look at the problem of communication security more broadly as solutions are offered in the policy debate about end-to-end encryption interfering with law enforcement needs.

A couple of examples. One: law enforcement hacking. Backdoors are bad, but I think law enforcement hacking is really bad too. To me the issue is not so much a privacy issue, which I think we're all familiar with, but a cybersecurity issue. If the government is an incentivized attacker on the network, you're going to see vulnerability hoarding. You're going to see participation in the market for vulnerabilities, with money flowing to groups like NSO Group or FinFisher, businesses that have been associated with human rights attacks and abuses by their clients, other governments. And is this something that companies that depend on their users' trust are really going to want to stand for? I think everybody in the audience is probably aware that Facebook and WhatsApp sued NSO Group over an attack that was executed through WhatsApp servers. They sued NSO Group under the Computer Fraud and Abuse Act, which people who know me know is my least favorite statute, in order to say, basically: we don't want our platform to be a vector for this.

Dependence on metadata is another, to my mind, unwelcome trend. If you go to Congress and say, you don't have to regulate us because we're going to use this metadata in all these ways, there's a real disincentive to protecting metadata better, or to collecting less of it, because you're basically saying: here's what we're going to do in exchange. Are encryption backdoors worse than over-collection of metadata? I think yes. But is over-collection and over-retention of metadata a real privacy and security problem? I think also yes.

So the message I'm hoping to leave you with is that we can't be complacent about this issue. There is still a real lack of expertise on the part of policy makers; it is not ubiquitous in government, and the contribution of people who care about communications security, of security experts, is invaluable in this field. I hope that gave people a good overview of where we are now. Thank you, and I can take some questions.

Can you speak to the issue of corporate asset use and the corporate right to know how its systems are being used? It's been a long time since I was involved; I had a corporate hat on, having to deal with the Email Privacy Act, which allowed corporations rightful access to all the email used inside the corporation. Can you speak to the issues in balancing the corporate right to know how its assets are being used against end-to-end encryption and the rest?

Yeah. The way that's usually been resolved in the law is through notices or terms of service, which, for all their weaknesses, have been the way we've dealt with who has access to email.
So on commercial platforms that are offered to the public, for example Gmail, you click yes, and it says we can scan your email for our business purposes, or spam interdiction, or for advertising, or whatever. And those terms have been routinely upheld in the employment context; in, say, a university context, that's equally true, maybe more so. Then the only question is whether some other protection applies, for example against law enforcement access. If there is, it's either in the Electronic Communications Privacy Act, ECPA, or it may be in the Fourth Amendment, depending on the circumstances, and there are some state laws now, like CalECPA in California, that provide some additional protection. But overall, what I would say is that your expectation of privacy vis-a-vis corporations in the United States is not really anything. That may be changing as the GDPR in Europe kicks in and those more protective data privacy regulations are applied by global companies to all of their users.

On the situation where law enforcement does not have enough access to plaintext, as Dave phrased it: I am curious whether, in your circle of law professionals, the folks who do policy and are involved in creating laws, homomorphic encryption has ever come up as a possible step forward, or as a way to alleviate the problem?

No, it hasn't. I don't think the people in the policy debate have that level of sophistication. The technological sophistication has been at the level of: let's incentivize, slash, force companies to figure out how to do it, and however they do it, that's their problem. They haven't really talked about what we as policy makers would need to figure out to answer that.
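For context on what the questioner is gesturing at: the defining property of homomorphic encryption is that one can compute on ciphertexts without decrypting them, so, in principle, a provider could answer certain queries about data it cannot read. Here is a toy Python sketch of my own, using additively homomorphic "exponential ElGamal", purely to illustrate the concept; as the answer notes, nothing like this has actually figured in the policy debate.

```python
# Toy additively homomorphic encryption (exponential ElGamal); my illustration.
import secrets

p, g = 1019, 2                       # toy safe prime and generator
sk = secrets.randbelow(p - 2) + 1
pk = pow(g, sk, p)

def enc(m):
    """Encrypt g^m, so multiplying ciphertexts adds the plaintexts."""
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p

def add(c1, c2):
    """Homomorphic addition: no key, no plaintexts involved."""
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec_small(c):
    """Recover m by brute-forcing g^m; only practical for small m."""
    a, b = c
    gm = b * pow(pow(a, sk, p), p - 2, p) % p
    return next(m for m in range(p) if pow(g, m, p) == gm)

# Two encrypted values can be summed by someone who cannot read either one.
assert dec_small(add(enc(3), enc(4))) == 7
```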
Before we go to the third and fourth questions, can the next speaker be ready at the microphone, please? Thank you.

So, given that the law alone is not sufficient to protect human rights: if we lose this policy battle, to what extent do you think we might need to use math and technology to subvert the law, or to engage in civil disobedience, in order to protect a free society?

Yeah, I think that's a great question. In particular, I've been thinking about it in connection with the Hong Kong protesters and the ways they are using communications technologies to organize or to defeat face recognition. Technology has a long history of being used both for oppression and as a technology of resistance and empowerment. My colleague John Callas, whom many of you in this room may know, and who works with me at the ACLU as a technologist, is currently working on a paper about exactly this issue: what are the technologies of resistance that help people protect their own civil liberties in a context where the law isn't doing that job. So check back in with me in a couple of months, and we'll have something great for people to look at then. Thank you.

And then I think the last question. No, there is one at the top. Oh, I'm sorry. Okay. Who was first? The person at the top. Sorry, it's very hard for me to see; no prejudice meant.

Thanks so much, especially for your longer-term view of all these issues. One question that I have is: how much do you see the current debate around end-to-end encryption as symptomatic of the current political climate and the current administration in the United States? To what extent do we maybe just have to hold the line for a couple of years, maybe a couple more years, and if we can get past that, we can address all these longer-term issues and we will have survived this recent iteration of this multi-generational war?

Yeah, I think that's a great question. I don't think this is a problem of this administration. We saw this issue come up in the Obama administration, in the Bush administration, in the Clinton administration, and I think it is a battle that will be ongoing, unfortunately. That's one of the reasons why I think we want to be very careful about how we hold the line here. Because if you let the policy battle chip away at little things, we're never going to win. It's not like: well, we'll give this away and we'll preserve that. The Department of Justice will still be here caring about this stuff 35 years later, and I'll be retired. So we need to expect that this is a continuous, ongoing policy battle, be prepared for it personally, and view it as something for the long run. So I think you're going to have to hold on for more than a couple of years. Thank you.

Okay, and a brief answer, please.

Yeah, thank you for your talk. So I read the New York Times pieces; they were extremely difficult. This is something I think a lot about as a technologist who designs cryptographic systems. And I guess I have sort of the opposite question. In our community we have a lot of discussions about holding the line, making sure we don't compromise, making sure that we have secure systems. But I wanted to ask: how can we listen to people who have these problems, to people who have been trafficked online, and how can we do things on our end without compromising privacy in general?

Yeah, I think that's a great question. And I may have made this mistake, or committed this crime myself, in that we sometimes portray it as: there's strong encryption, and then there are people who are hurt. I know you weren't implying this, but I'm sort of correcting myself. The issue, as we all know, is that there are people who are hurt without strong encryption, and there are people who are hurt where, because of encryption, the perpetrators are more likely to get away with it, or we don't even know it's happening, because it's just not seen. So the question is how we manage to deal with both. This is one of the things I think Alex Stamos is trying to do: to ask, can we learn from things like social graph analysis or metadata that certain people are likely victims? Can we take the information we already have and use it to identify perpetrators, and how do we go about doing that in a way that is civil-liberties-friendly and also adequate? I think these are easier questions to answer when it comes to things like disinformation or fake news or trolls. But there is an active conversation going on now about ways to do that in an end-to-end world.
And I think that on a lot of fronts this is something that's becoming accepted by some people, including the former FBI general counsel: that that's the world we're going to live in, and so we need to take that responsibility seriously. Thank you.

Thank you, Jennifer. Let's give her a great hand. That was a great discussion. So we'll move on to the next session, on voting. We are happy to have another invited talk, on weaknesses in the Moscow internet voting system, by Pierrick Gaudry.

Thanks for the introduction, and thank you so much to the organizers for inviting me and for giving me this opportunity to talk about this work. This is joint work with Alexander Golovnev from Harvard. Actually, I started with one attack on the Moscow system, he came with another one, and then we discussed together, trying to understand what was going on. I should also mention Iuliia Krivonosova from Estonia (she's Russian, actually), because everything started thanks to her: she advertised all this stuff about the Moscow internet voting system with a blog post written in English, when everything else was written in Russian. I would have had no clue about what was going on without her.

So the plan for today: first I will give a bit of context about this Moscow internet voting system, then I will describe the encryption schemes and the vulnerabilities that were in there, and then I will try to give a hint of the more general picture of this voting system, because the encryption scheme is only one part of the system, and everything was complicated.

Okay, so what was the context? Elections in Russia in September. There were local elections across Russia, and specifically in Moscow the goal was to elect the city parliament, the Moscow Duma. This is the legislative branch of power in Moscow; there is also a city government with a mayor for the executive branch. This city parliament, this Duma, has 45 members, and each one is elected by one district: the general area of Moscow is split into 45 districts, with a bit of tweaking and gerrymandering to adapt the map to the result you want. Each district elects one representative, and all the districts have more or less the same number of voters; in total this is a bit more than 7 million voters. The rules are pretty simple: you have candidates in each district, and the candidate who gets the largest number of votes gets the seat. So you have, as you can guess, this kind of threshold effect: even with 7 million voters, a change of maybe 100 or 1000 votes can decide whether or not a given candidate gets a seat, especially because the turnout was not so high.

Anyway, in September this year they decided, for the first time for this kind of election, to run some kind of internet voting experiment. It was restricted to three districts, three out of 45, and any voter in those districts could register in advance to use the internet voting system; no need to be abroad or to justify in one way or another that you wanted to use it. Those were the rules, and they expected something like 10,000 internet voters, which is what they got in the end. Maybe you don't remember exactly the context of this summer, but it was not exactly peaceful: there were several protests in July and August due to the rejection of opposition candidacies, with up to 20,000 participants in one of the rallies.
Maybe, from my French point of view, 20,000 participants is a tiny number. Anyway, these were really not good conditions in which to run experiments; although in a sense, if things are bad, maybe you want to change the system. But enough with politics; back to the technicalities.

Some public testing was organized, with prizes of up to 2 million rubles, which is about $30,000. And the source code was made public. I mean, part of the source code: only some of the code that was run here and there; we got nothing about the infrastructure. Various attack scenarios were proposed in a document code-named "formal offer" (a PDF), but don't trust this name: everything was written in Russian, so it was actually not so easy to understand what was going on. Also, in this GitHub repository you had the source code, which is of course written in programming languages independent of any human language, but the comments (or at least a lot of them, and there are not so many) and the few words of documentation that existed were in Russian. So I think this was the main difficulty: there was no specification, no documentation, just part of the source code. And this testing was not really part of a formal certification process. It was not: if there is this kind of attack, then we stop. It was more: let's organize this and see what comes out. I think that from the beginning they had decided they would run the experiment in the election, whatever the outcome was.

Okay, so the timeline for this summer. As you can see, the code was published on the 17th of July, for an election less than two months later. This is incredibly short. Furthermore, it was announced in Russian. Iuliia Krivonosova wrote a blog post pretty shortly thereafter, and it was advertised to the academic voting community. But somehow it took some time for me, and for others, to realize that there was something there; furthermore, I was on vacation. So it took me some time, but when I came back I looked at the code, and there was an attack. So I published it. I mean, of course I told the Moscow people, but I also really wanted to be sure that they would do something, so a few days later I released the note on arXiv, so that they had to change something. And they did: they did a first update a few weeks later, two weeks later I'd say, a bit in a rush. And then a few days thereafter, Alexander Golovnev found another attack on this new version. By then it was getting pretty close to D-day. They did one last public test, last minute, and they finally updated the public code two days before the election.

OK. So what were these attacks? The encryption scheme in this system is based on ElGamal, a very classical encryption scheme. I recall it here more or less to fix notation, because they implemented a variant of it and I need some notation to explain the variant. They used Z/pZ with p a safe prime. I use g for the generator, and pk and sk for the public key and the secret key. The ElGamal encryption is then just: you consider the public key as a kind of first half of a Diffie-Hellman exchange, you do the second half yourself, and then you use the shared key to one-time-pad your message, which is supposed to be an element of the group. And of course, on your side, you need to do your part properly.
The ephemeral randomness must be fresh, used only once, blah, blah, blah. The decryption is then just undoing all these things. Very, very easy if done correctly; there are many ways to do it badly. This gives you IND-CPA security, which is of course not enough, but many e-voting systems do still use ElGamal, with some extra machinery on top to get more than IND-CPA, like IND-CCA security. Here they didn't. But this is not the main problem. Wait, wait, wait, why did that go so fast?

So what they used inside was not plain ElGamal. They used a "triple ElGamal" that I had never heard of before, and for good reason. Here is how it goes. You have three primes p1, p2, p3, all of them safe primes, and you use three independent ElGamal settings with three generators g1, g2, g3. The secret key is a triple of three secret keys, one for each group, and the public keys are the corresponding group elements. When you want to encrypt a message m, which is supposed to live in the first group Z/p1Z, you first do an ElGamal encryption of m and get (a1, b1), a classical pair of ElGamal ciphertext elements. You map the first element a1 to the second group, so that you can encrypt it with the second group's parameters, and then you do it again with the third group. So this is a kind of chaining: your message is encrypted once, the result is encrypted a second time, and then a third time. And since each ElGamal ciphertext is a pair of elements, you also need to send the b_i parts. So what you get as the encryption of a message is (b1, b2, b3, a3).

Here I didn't say how to map an element of Z/p1Z into Z/p2Z, because there is no natural map. What they do is lift to the integers and reduce mod p2. This is all implicit in the source code; of course there is no proper description of it. All the maps are implicit, because they didn't use any abstract, proper group setting in their implementation; it's all big integers, so you don't really see these maps. The decryption is just the natural thing, undoing everything in the reverse order. And of course you need the inequality p1 < p2 < p3, so that during all this mapping you don't lose information. But yes, this works. This condition p1 < p2 < p3 is indeed enforced in the source code, but without explanation.

As for security, that's the interesting part. Contrary to triple DES, where the cost of breaking the system is effectively squared, here the number of operations is not raised to the power two or three; it is just multiplied by three. Breaking this scheme is no harder than breaking the three underlying ElGamal instances completely independently. So in terms of security, there is no point in doing this. So why? We never got the final answer; of course, I asked many questions of the Russians, but I can speculate. All the p_i are chosen to be less than 256 bits, and this is enforced in the source code with a comparison: p must be less than the Solidity max int value, which is about 2^256 (maybe 2^255 if it's a signed integer, I don't remember). Solidity, as you probably know, is the smart contract language of Ethereum, and the code does contain a decryption function written in this language: there is a smart contract doing the decryption, and its maximum integer size is indeed that one.
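To make the construction concrete, here is a minimal Python sketch of this chained "triple ElGamal" as just described. To be clear, this is my own reconstruction for illustration: the real code was big-integer arithmetic in Solidity and JavaScript, and the toy 10-bit safe primes here are chosen only so the example runs instantly.

```python
# Toy sketch of the chained "triple ElGamal" (my reconstruction, toy parameters).
import secrets

# Three toy safe primes with p1 < p2 < p3, so lifting to Z and reducing
# modulo the next prime loses no information.
P = [1019, 1187, 1283]   # each p = 2q + 1 with q prime
G = [2, 2, 2]            # generators of the full group, as in the first version

def keygen():
    sks = [secrets.randbelow(p - 2) + 1 for p in P]
    pks = [pow(g, sk, p) for g, sk, p in zip(G, sks, P)]
    return sks, pks

def enc1(i, pk, m):
    """Plain ElGamal in group i: one-time-pad m with the shared DH key."""
    r = secrets.randbelow(P[i] - 2) + 1      # ephemeral randomness, used once
    return m * pow(pk, r, P[i]) % P[i], pow(G[i], r, P[i])

def encrypt(pks, m):
    """Chain the encryptions: the 'a' part is re-encrypted in the next group."""
    a1, b1 = enc1(0, pks[0], m)
    a2, b2 = enc1(1, pks[1], a1)             # implicit map: a1 < p1 < p2
    a3, b3 = enc1(2, pks[2], a2)
    return b1, b2, b3, a3                    # what is actually transmitted

def decrypt(sks, ct):
    """Undo the chaining in reverse order; m = a / b^sk in each group."""
    b1, b2, b3, a3 = ct
    a2 = a3 * pow(pow(b3, sks[2], P[2]), P[2] - 2, P[2]) % P[2]
    a1 = a2 * pow(pow(b2, sks[1], P[1]), P[1] - 2, P[1]) % P[1]
    return a1 * pow(pow(b1, sks[0], P[0]), P[0] - 2, P[0]) % P[0]

sks, pks = keygen()
ct = encrypt(pks, 42)                        # e.g. a candidate identifier
assert decrypt(sks, ct) == 42
```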
And it looks like the authors didn't have the time, or the competence, to write a multi-precision arithmetic library in Solidity, and decided to "increase" the security in another way instead. That's my guess; I never got an answer to my questions, but you'll see later that I have good reasons for thinking this. (As for why a blockchain at all, that's another question.)

OK, so we are left with the problem of solving a discrete logarithm modulo a prime of 256 bits, and the challenge rules, in Russian, say that it must be done in less than 12 hours to count as really breaking the system. Historically, breaking a discrete log of this size was done quite a while ago: there is a database, hosted in my group, where we record all the public discrete log computations, and a computation at this size was first done in 1995-96 by Weber, Denny and Zayer. It took quite some time at that date, of course. Just to remind you, the current record is a bit less than 800 bits; we announced it two months ago with my colleagues, and that record computation took ages and ages. It certainly did not take less than 12 hours.

So how much time does it take today to solve a 256-bit discrete log; is it less than 12 hours? What we can do is check with publicly available software. I checked with three tools. SageMath, which uses PARI/GP internally for discrete logs: this is free software, and actually not bad for small sizes, but for 256 bits it did not finish after four days, and I stopped the job there. Magma, which is proprietary software from the University of Sydney: this is faster than SageMath, but it uses a lot of memory, and at this size it took 24 hours and 130 gigabytes of memory; that's a bit too much. And then, of course, I tried CADO-NFS. Well, actually, I tried CADO-NFS first, because I am one of its developers. This is free software, developed mostly in our group in Nancy, and it is based on the number field sieve; the first two tools use variants of the quadratic sieve, and at this size it is not obvious a priori that the number field sieve is faster. But our implementation in CADO-NFS is much faster; it is actually the same code that was used for the record I mentioned one slide ago. For 256 bits, it takes less than 10 minutes and less than one gigabyte of memory. This is far less than what was expected. More precisely, I used my desktop PC, nothing special, and it took just hundreds of seconds to compute these secret keys. I did it completely independently for the three keys; I didn't exploit the triple structure at all, just multiplied the running time by three. Nothing to it.

Maybe I can skip this, but I just wanted to mention that for such small sizes our software had not been fully tested: we usually use it for record computations, and here it was failing from time to time due to bad parameters in the descent phase. This was quickly fixed for the occasion, and then it was pretty stable and could solve these sizes quite easily.

So they fixed it. In the first fix, they removed the triple ElGamal encryption, they increased the key size to 1024 bits, and they changed the protocol so that the decryption was no longer done in the smart contract, so that they didn't have to implement multi-precision arithmetic in Solidity. That's why I really think this Solidity limit was the reason: there was no other reason, so close to the real election, to change the protocol.
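To make concrete what "breaking the system" meant before this fix, here is the shape of the attack on the toy parameters from the sketch above: each secret key falls to one independent discrete-log computation, so the triple construction costs an attacker a factor of three, not a compounding. sympy's generic solver stands in for CADO-NFS here, and only succeeds because the toy primes are tiny; at the real 256-bit sizes this step is where CADO-NFS needed under 10 minutes per prime.

```python
# Attack sketch on the toy parameters above (P, G, pks, sks, ct as defined there).
from sympy.ntheory import discrete_log

# One independent DLP per group: sk_i = log_{g_i}(pk_i) mod p_i.
recovered = [discrete_log(p, pk, g) for p, pk, g in zip(P, pks, G)]
for p, g, pk, x in zip(P, G, pks, recovered):
    assert pow(g, x, p) == pk                # a valid secret key for this group

# The recovered keys decrypt any ballot, chaining and all.
assert decrypt(recovered, ct) == 42
```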
I mean, it's dangerous to change a protocol even months before an election; here it was only days before D-day. So, for me, this is a kind of confirmation that this integer-size limit in Solidity was the reason why they had this triple ElGamal and these short keys. And the last thing they did in this fix is that they took generators in the prime-order subgroup. That was another thing that was bad in the first version: they took generators of the full group, the full (Z/pZ)* group, and so they were potentially leaking one bit due to subgroup attacks. I mentioned that in my first note, and they fixed it. But they fixed it badly; they failed again. Golovnev noticed that, okay, the generator is now in the prime-order subgroup, but the message stayed outside, in the full group. So one bit is actually leaked by the ciphertext: you know, for sure, whether the message is a quadratic residue or not. And one bit can be very important in voting: quite often there are really only two main candidates, say one pro-Putin and one opponent. If one candidate's identifier is a quadratic residue and the other's is not, then you have no privacy; the vote is in the clear. And yes, from the source code, the encrypted message is indeed an identifier of the candidate, with no random padding, nothing special. So this attack was realistic; this typical scenario was completely realistic.

From here, we were getting pretty close to the election, and the situation became completely chaotic. I did not understand what was going on: many things were happening via the press, via interviews, everything in Russian, and I could not follow it all. My co-author understood much better than me at that point what was going on. The developers actually seemed to deny that this second attack was a real one. Still, they silently changed the code, without updating the GitHub, so that they could run a final public test with the modified code and say: oh, you see, there is no problem. Only two days before the election did they update the GitHub. How do we know that they silently changed the code? Because during the public test they of course had to send the JavaScript to the voters so that they could encrypt, and by reading this JavaScript we could see the patch they had made. It was minified, but not obfuscated, so it was not too hard to understand what was going on.

Okay, so this was the story of the encryption scheme. In the end, I guess they had something reasonable in terms of encryption, still only IND-CPA, but, well, that depends on how it is used in the overall protocol. And speaking of the protocol, it was pretty difficult to understand how it worked, because the source code was not about the protocol; it was pieces here and there, and we didn't have the whole picture. So everything that follows is based on speculation, discussions, press articles, source code, things like that. My general impression is that the whole protocol is really, really bad. As far as I understand, there is no privacy. Verifiability: a bit; they use a blockchain for that purpose, so your vote goes into some public ledger and you can check that it is really there during the tally. In terms of coercion resistance or vote-buying protection, there was nothing, and that's a big issue in a context where there is this kind of tension.
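Before moving on, here is a small self-contained sketch of that quadratic-residue leak, again with toy parameters of my own choosing. Modulo a safe prime p = 2q + 1, the prime-order subgroup is exactly the set of quadratic residues, so the mask pk^r is always a QR, and the Legendre symbol of the ciphertext component a = m * pk^r equals that of the message m, with no key needed.

```python
# Sketch of the one-bit leak in the patched version (toy parameters, my names).
import secrets

p = 1019                     # toy safe prime, p = 2q + 1
q = (p - 1) // 2
g = pow(2, 2, p)             # squaring forces g into the order-q subgroup (the QRs)
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)           # pk, and hence pk^r, is always a quadratic residue

def legendre(x):
    """pow(x, q, p): 1 if x is a QR mod p, p - 1 otherwise."""
    return pow(x, q, p)

def enc(m):
    r = secrets.randbelow(q - 1) + 1
    return m * pow(pk, r, p) % p, pow(g, r, p)

candidate_A, candidate_B = 42, 43    # hypothetical candidate identifiers
a, b = enc(candidate_A)

# An eavesdropper with no key learns the QR-ness of the plaintext; if the two
# candidates' identifiers differ in QR-ness, the vote is in the clear.
assert legendre(a) == legendre(candidate_A)
```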
So my conclusion is that fixing this encryption is like having a very strong door while the house is wide open. So what did we learn about the protocol? The registration and authentication part was based on existing e-government infrastructure in Russia, nothing specific to this election. I couldn't really get any hint about how it works, but since it's not specific to the election, let's assume it's okay.

The voting part: the voter connects to the server and gets the JavaScript from it; then the choice is encrypted locally, in the browser or on the smartphone, with the key of the election, and sent back. This is where the encryption that was completely flawed at the beginning is involved.

Then you have the ballot box. The server receives the encrypted ballot and puts it in the ballot box, which is basically some kind of Ethereum blockchain, with no reference to the voter. And to try to make this "no reference to the voter" more of a reality, the timing is somewhat randomized: the server waits until it has dozens of ballots, shuffles them a bit, and sends them to the blockchain a bit later, in a different order.

At the end of the election, the trustees who own the decryption key (they use secret sharing for that) decrypt the ballots and put the plaintexts in the blockchain as well. But they also publish the decryption key itself; this is for verifiability, actually. This is their proof of correct decryption. I discussed it with them, and they said: oh yeah, zero-knowledge proofs would be nice. But in fact, for the designers, this encryption is there only to protect against revealing a partial tally during election day; it is not there for privacy. I realized that only late, because without a specification it's not so easy to realize that the encryption is there for almost nothing. The privacy is guaranteed by the server: the server is honest and will cut any link between the ballot and the voter. The voter connects, authenticates, the server checks that he's allowed to vote, blah, blah, blah, then it receives the ballot, and it is supposed to forget everything and put the ballot in the ballot box. But if you have a bit of redundancy, if you have databases everywhere and backups, it's almost impossible to truly cut this link. Well, that's what they have in mind. And that's also why this limit of 12 hours for breaking the system was there: 12 hours was the time before they were going to decrypt and reveal the private key anyway. That was the reason; it took some time to understand that.

Okay, further remarks about the general system, the blockchain. First of all, I have no idea why they wanted to have the decryption in the smart contract in the beginning. It was probably: because we can. But in the end, it had bad consequences. A further problem: the blockchain was a private one, run by several nodes that are not publicly known, with no guarantee that they are really independent. So from my point of view it provides no guarantee whatsoever. And during the election, the voters could query the blockchain, but only via web servers. So, I don't know; my impression is that this blockchain was just there for the general theater.
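To illustrate what publishing the decryption key as the "proof of correct decryption" implies: anyone with a copy of the ledger can redo the whole tally, which is the verifiability they wanted, but can equally decrypt every individual ballot, so all remaining privacy rests on the server having truly unlinked ballots from voters. A sketch with the same kind of toy single-group ElGamal as above (parameters and values are mine):

```python
# After the trustees publish sk, anyone can recompute the tally from the ledger.
from collections import Counter
import secrets

p, g = 1019, 2                                   # toy group
sk = secrets.randbelow(p - 2) + 1
pk = pow(g, sk, p)

def enc(m):
    r = secrets.randbelow(p - 2) + 1
    return m * pow(pk, r, p) % p, pow(g, r, p)

ledger = [enc(v) for v in (42, 43, 42, 42, 43)]  # encrypted ballots, public

# Verifiability: redo the decryption m = a / b^sk for every ballot...
tally = Counter(a * pow(pow(b, sk, p), p - 2, p) % p for a, b in ledger)
assert tally == Counter({42: 3, 43: 2})
# ...but this also decrypts each ballot individually: once sk is public,
# privacy depends entirely on ballots having been unlinked from voters.
```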
And the second remark is about the interaction with the Moscow Department of Information Technology, the people who designed, developed and deployed this system. It was very frustrating. They were very nice and polite and everything, but they answered only some of my technical questions, only those that were not embarrassing. I asked several times: can I get some documentation, a specification? What do you assume in terms of security? What are your trust assumptions? Do you trust the server? Do you care about coercion resistance? For these kinds of questions: no answer. They just ignored this part of my emails. I am still waiting.

What happened on election day? As I said, they got about 10,000 votes by internet. And in one of the three districts, the difference between the first and the second candidate was less than 100 votes. So these 10,000 votes could have made a difference; we don't know. It proves somehow that this system was used for something very important: it could really have changed, maybe not the overall result, but at least one of the elected members of the parliament. And a few hours after the election, the access to the blockchain was cut. So much for the public ledger: with a blockchain you usually think, okay, it is going to be there forever, everybody will be able to check that what was there has not been erased. But if you have private nodes and at some point they disappear, you get nothing. Fortunately, journalists of the Meduza online newspaper, a newspaper that is exiled in Riga, made a copy of the blockchain before the shutdown. They got all the ballots and the decryption key, and they made that public. And one of the journalists, who was also a voter, managed to keep a trace of his network communication during the voting and could indeed check that his ballot was really in the blockchain and was taken into account. So there is some kind of verifiability: one person with a lot of skills could really follow his vote. Which is not so bad. I am joking, but in France we don't have that.

So, as a conclusion, what was the impact? I guess our impact was not really fixing the encryption; the whole system was a big mess, and in the end the encryption was not really for privacy. But what we showed is that it was not the perfect system that was advertised at the time: "oh yeah, with the blockchain this is wonderful, e-voting is going to be perfect, it will solve all our problems". No, it's not like that. We got large press coverage, especially in Russia. I didn't understand everything, but what I got is that the Meduza people are really, really good journalists, and they know what they are talking about in technical terms. Moscow had to acknowledge the problems. My hope is that some of the voters decided not to use the internet voting system but to use traditional voting. In the longer term, I am pretty sure that this story will help put some pressure on them to have a better system in the future, because they plan to do more experiments. I would like to conclude on a positive note. Really, it is amazing that they organized some public testing on the open part of the source code. Not many countries do that. Switzerland is a big exception, but apart from them there are not so many countries that do that.
This should be commended, and maybe you could take it as an example with your own governments: even Russia opened their source code when doing e-voting. And that is where I will conclude my talk. Thank you.

Thank you for a great talk. There are opinions, I have heard them even here in the US, that for better security of voting at large scale, some parts, not everything, should still be done using physical things like paper ballots, to prevent certain sorts of manipulation. Do you support this approach, and if yes, which part would you make less digital?

I have heard about this position, especially in the US, but in the US they use voting machines, and I think it really makes sense to have voting machines give you some kind of paper receipt, so that you can have risk-limiting audits and this kind of thing. So this is really good; I really support this opinion in the case of voting machines. For internet voting, how do you get pieces of paper? In Switzerland, for instance, the voters receive some voting material on paper by post. Then you have a trust assumption on the postal system, but yes, why not? And a more general answer is that I don't think internet voting is ready for deployment for a very high-stakes election; especially if you want to combine coercion resistance, verifiability and all the other properties that you would like to have, we are not yet fully ready. So this is some kind of answer.

How did they determine that the votes that were cast were cast by people who were allowed to vote?

I don't know the details exactly. This is what I call registration and authentication; it is based on infrastructure already existing for e-administration in Russia. So I don't know the details. But it is nothing like a physical device, an electronic ID or something like that. It is just, I guess, based on passwords and things like that, that you own as a citizen. But I don't know the details.

To be registered, as in point one, you still have to present some physical credentials, so you cannot do it fully online.

For the first time, I guess. But after that you can proceed as you want.

I've got two questions, but feel free to only take one of them. The first one is: is there a good open-source e-voting system anywhere? And the second is just to pick up on your comment just now that you don't think internet voting is ready; I'd be interested to hear why you think that.

Okay. For the first question, I guess it depends on your scenario, on your context. If you don't care about coercion resistance, for instance, then there are systems, and my favorite one is the one that was developed for the canton of Geneva in Switzerland. It has never been deployed, but the source code is there, it is available, you have plenty of documentation and a huge security analysis. It is a very good one. But in Switzerland they don't care about coercion resistance. And this answers the second part of your question: in many cases you also want coercion resistance, and if you want to combine coercion resistance with all the other properties, then it is really, really hard and usually not practical. For the layman, it would be completely impossible to understand what is going on.
I think that is relevant to my current understanding of the state of the art. So we will continue on how weak internet voting is today, and the next speaker is Olivier.

Good afternoon, everyone. I will keep talking about e-voting. The main topic of the previous talk was mostly privacy; this one will be mostly about verifiability. I will focus mostly on the Swiss case, with implications for Australia and other countries. Switzerland is an interesting country for voting, and for internet voting in particular. Internet voting has been used, at least trialled, for government elections since 2003, so there is a long history there. Many systems have been trialled, and by roughly the fall of 2018 only one of those systems was left, partly for financial reasons; it is expensive to keep funding the development of several systems. That system was sVote, which was commercialized by Swiss Post and developed by Scytl. During the fall of 2018, Swiss Post decided to enter a new certification process that would authorize a new version of sVote, sVote version 2, to be used by up to 100% of the voters in Switzerland. There were several conditions attached to this certification process. One of them was that the election should become completely verifiable, meaning that voters should have a way to verify that their voting intent has been properly captured by the system, and that the tally should be verifiable by any auditor willing to do so. The second condition was that the voting system should pass a public review. The public review phase started in February 2019, and at the end of March the Swiss Federal Chancellery and Swiss Post announced that internet voting would not be available for the upcoming May elections. It was not available for the federal elections in October, and it is not known when it will be available again in Switzerland. That is essentially the story I would like to tell you now.

So, as I said, this certification process included a public review phase. It was regulated, and one important part of this regulation said that anyone should be entitled to examine, modify, compile and execute the source code of the voting system, and to write and publish studies about it. That is really a very good and interesting thing. What it became in reality, when Swiss Post decided to publish the code, is that if you wanted to access the code, you needed to register and accept conditions of use that included a responsible-disclosure clause saying that no vulnerability should be published within a period of 45 days since the last communication exchange with the owners. That essentially means that you have to report vulnerabilities to the owner, which is quite natural, but then if the owner decides to send you an email every 44 days, they can basically silence you forever. So we were not very happy to sign such responsible-disclosure conditions, and apparently others felt the same way, because a few days after the publication of the code, a new git repo appeared, under a colorful name, containing all the source code. That repository was taken down a few days later. If you are curious, Sarah, my co-author, is hosting a copy of all the code, and also of the attacks that we found, if you want to take a look. So we accessed the code thanks to that repo, and we started looking at how the system worked. It was a remarkably sophisticated and complex system.
One of the core components of the system was a trusted printing service, which actually did two things. The first is essentially what the name says: it prints paper ballots. Those paper ballots contain really a lot of things; in particular, for this talk, each paper ballot contains one unique key printed on it, just an AES key, and then, for every possible choice you can make on the ballot, one return code. A return code is just a random three- or four-digit number, picked at random for each ballot and for each possible choice on that ballot. The voter receives this paper ballot, and then on election day they can vote using the internet: they connect to a web service hosted by Swiss Post and make their choice just by clicking on the candidates they would like to support, a completely natural interface. Which means that the voting client needs to be completely trusted for privacy: if the voting client is compromised, the computer sees exactly where the voter is clicking, so the voting client knows the vote. The voting client is trusted for privacy. The goal of the system is to make sure that the voting client does not need to be trusted for correctness: we want to make sure that if the voting client is corrupted, and when I click for A it actually encrypts a vote for B, I have a way of seeing that. This is the second job of the trusted printing service. The printing service produces a big pile of ciphertexts, here on the right of the slide, computed as follows. For each individual paper ballot, the trusted printing service takes the key k that is printed there, derives a symmetric key from each of the possible answers, so f_k(no), f_k(yes), and uses that derived key to encrypt the return code associated to that choice on that ballot. And the last thing the printing service does is shuffle those ciphertexts. At some point in the election, the server will decrypt some of those ciphertexts, and the goal is to make sure that when you decrypt one of them, you do not know to which choice it corresponds, because the ordering has been broken. Now, when the voter votes, let's say, yes, the voting client encrypts a yes, and it also encrypts f_k(yes): the voter enters the key, but not the return code, into the voting client, so that the voting client can compute f_k(yes). Those two ciphertexts are sent to the server. Then a distributed decryption process happens on the second ciphertext only, so the server learns f_k(yes). That is a key; the server does not know which of the printed ciphertexts this key opens, so it just tries to decrypt all of them until one decryption succeeds. Say it obtains 472; it sends 472 back to the voter, and the voter checks on the paper ballot that this is the right number. This is expected to guarantee that the voting client did not cheat. The voting server does other things too. When the election is done, it runs a verifiable mixnet with all the encrypted votes, the first components of those ciphertext pairs, and it also performs, in a distributed way, the decryption of the mixed ballots to obtain the election tally.
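To fix ideas, here is a minimal sketch of that return-code round trip. It is a simplification under stated assumptions: f_k is instantiated with HMAC-SHA256, and the printing service's encrypted, shuffled pile of ciphertexts is modeled as a shuffled lookup table from derived keys to return codes, whereas the real sVote encrypts the codes and decrypts them in a distributed way:

```python
# Simplified return-code mechanism; primitives are stand-ins, not sVote's.
import hashlib
import hmac
import random
import secrets

CHOICES = ["yes", "no"]

def f(k, choice):
    """Key derivation f_k(choice), modeled here as HMAC-SHA256."""
    return hmac.new(k, choice.encode(), hashlib.sha256).hexdigest()

# --- trusted printing service, per ballot ---
k = secrets.token_bytes(16)                       # the key printed on paper
return_codes = {c: f"{secrets.randbelow(10000):04d}" for c in CHOICES}
table = [(f(k, c), return_codes[c]) for c in CHOICES]
random.shuffle(table)                             # break the choice ordering

# --- voting client (trusted for privacy, NOT for correctness) ---
vote = "yes"
submitted = f(k, vote)     # stands in for Enc(f_k(vote)); Enc(vote) omitted

# --- server: try entries until the derived key matches ---
code = next(rc for tag, rc in table if tag == submitted)

# --- voter: compare with the code printed next to "yes" on the paper ---
assert code == return_codes["yes"]
```

The only step in this round trip that does not trust the client is the voter's comparison against the paper ballot, which is why everything hinges on that printed table being generated and shuffled correctly.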
So of course, if you want this to work and to have the verifiability properties, you need to add a lot of zero-knowledge proofs on top. There were actually three main types of zero-knowledge proofs in the system. One was to make sure that the ballots were properly built, in the sense that when you have those two ciphertexts, you need to make sure that it is the same yes or the same no that is encrypted on both sides, and not something different, so that when I decrypt the one with the yes, I am sure there is a yes in the other one too. This is proven in zero knowledge. Another place where we need zero-knowledge proofs is to make sure that the mixing is correct, so we have proofs of shuffle. And then we have proofs of correct decryption of the mixed ballots. So we took a look at those zero-knowledge proofs, and what we found is that not a single one of them was actually sound. There were three independent types of mistakes in those zero-knowledge proofs, and I would just like to give you a quick look at the kind of difficulties we found.

The first is with the Fiat-Shamir transformation. All those zero-knowledge proofs are basically sigma protocols. In the interactive version of a sigma protocol, you have a prover and a verifier; they both receive the statement about which we want to make a proof. The first step of the protocol is a commitment A, then there is a random challenge E, and a response Z. Now, that is not convenient in a voting system where you want to have verifiability at any time, so we want to make those proofs non-interactive, and the traditional solution is the Fiat-Shamir transform. The idea is to remove the interaction where you send a commitment and receive a challenge: instead, you send a query to a random oracle with this commitment, get the challenge from the oracle, and then you can send the proof to the verifier in one single pass. So that is all good, except that in this voting system you have a slightly different setting. The prover and the verifier do not receive the statement in advance; it is really the prover, who builds the ciphertexts, who chooses the statement about which to make a proof. So there is no statement given to prover and verifier beforehand: there is a prover who picks the statement and sends the statement together with the proof to the verifier. And that small change is enough to completely break the security proof and to end up with zero-knowledge proofs that are not sound anymore. It is easy to fix, because you just need to send the statement together with A to the random oracle, and then you have security again. But that is exactly what was not done in this sVote protocol. And that alone was enough to essentially break all the proofs. It breaks individual verifiability, because we can no longer trust the proof that the two ciphertexts you send really contain consistent encryptions, two yeses or two nos. It also breaks universal verifiability, because now, when you decrypt a ciphertext, you can claim that it decrypts to anything you like and produce a proof that you made a correct decryption even if you did not. So that is really a bad thing. And it is easy to fix; the problem is that it is not the only problem.
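Here is a toy illustration of that weak-versus-strong Fiat-Shamir point, using a Chaum-Pedersen equality proof ("log_g X equals log_h Y", the shape of a decryption proof) over a deliberately tiny group. The parameters and names are mine, and the real sVote proofs are more elaborate; the point is only that when the challenge does not depend on the statement, a prover who chooses the statement can "prove" a false one:

```python
# Weak Fiat-Shamir on a toy Chaum-Pedersen proof in the order-11
# subgroup of Z_23*. Illustrative parameters only.
import hashlib

p, q = 23, 11
g, h = 4, 9                   # two generators of the order-q subgroup

def H(*parts):
    data = ":".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def verify(X, Y, A1, A2, e, z):
    # Accept iff g^z == A1 * X^e and h^z == A2 * Y^e (mod p)
    return (pow(g, z, p) == (A1 * pow(X, e, p)) % p
            and pow(h, z, p) == (A2 * pow(Y, e, p)) % p)

# Forgery: commit first with *different* exponents, learn e, then choose
# the statement (X, Y) to match -- possible because e ignores (X, Y).
a1, a2, z = 3, 5, 8
while True:
    A1, A2 = pow(g, a1, p), pow(h, a2, p)
    e = H(A1, A2)             # weak FS: statement is not hashed
    if e != 0:                # need e invertible mod the prime q
        break
    a1 += 1                   # practically never needed in this toy

inv_e = pow(e, -1, q)
X = pow(g, (z - a1) * inv_e % q, p)   # log_g X = (z - a1)/e
Y = pow(h, (z - a2) * inv_e % q, p)   # log_h Y = (z - a2)/e: different!

assert verify(X, Y, A1, A2, e, z)     # an accepted "proof" of a false claim
# Strong FS fixes this: e = H(X, Y, A1, A2) fixes the statement first.
```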
Second problem. Remember that in those sigma protocols the first step is sending a commitment, and one traditional way of making those commitments is Pedersen commitments. A Pedersen commitment looks like this: you have g^s · h^m, where m is the message you want to commit to and s is a random number. This is supposed to be perfectly hiding if s is random, but it is only binding if the discrete logarithm of h in base g is unknown. And of course you need the commitment to be binding, otherwise the soundness of all your zero-knowledge proofs collapses again. So we looked in the documentation of the voting system for how g and h are picked, and we found that, in order to select h, you basically pick a random exponent r and then compute h = g^r, which is exactly the discrete logarithm that you are not supposed to know. So basically the recommendation of the system is: start by picking the trapdoor that you need if you want to cheat, then run the system, and people will have to trust you not to use the trapdoor. That again made it possible to cheat with the zero-knowledge proofs, and in particular, in the verifiable mixnet, we could change as many votes as we wanted during the shuffling process and still prove that the shuffle was honest, even though it wasn't.

Those were essentially cryptographic issues with the proofs, but there were more, and actually, even if you fix all the cryptographic problems in your proofs, you can still be proving the wrong statement and be in trouble. Remember, I said that when we prepare a vote, we need to prove that we encrypt the vote and that we encrypt the key that can be used to encrypt the return code associated to this vote. Of course, on a ballot you will typically have more than one choice to express, so you will have many such pairs of ciphertexts, and you would expect one such proof per pair. Instead, they decided to prove that the product of the votes matches the product of the keys, which is of course much weaker, because a product is commutative: you can have factors that flow between components. So we started looking at how we could use this. Basically, it means that we can again make proofs for statements that are just wrong. It was a bit unclear what would happen with the system in that case, because it is a situation that should never arise, so it is not documented what happens when a decryption yields a plaintext that should not appear at any point; the effect is a bit unclear. But there were also security proofs given for the system, and they basically assumed that you are proving the right thing, so the security proofs just collapse.
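Going back to the Pedersen trapdoor for a moment, here is a toy demonstration of why h = g^r with known r is fatal: whoever knows r can open one commitment to any message (tiny illustrative parameters again, not the real group):

```python
# Pedersen equivocation with a known trapdoor r. Toy parameters only.
p, q, g = 23, 11, 4            # g generates the order-11 subgroup mod 23

r = 6                          # the trapdoor the setup tells you to pick
h = pow(g, r, p)               # exactly the dlog nobody should know

def commit(m, s):
    return (pow(g, s, p) * pow(h, m, p)) % p

def verify_opening(C, m, s):
    return C == commit(m, s)

s, m = 3, 9
C = commit(m, s)
assert verify_opening(C, m, s)

# Equivocation: open the same C to a different message m2 by shifting s,
# since g^(s + r*(m - m2)) * h^m2 == g^s * h^m.
m2 = 2
s2 = (s + r * (m - m2)) % q
assert verify_opening(C, m2, s2)   # same commitment, different "message"
```

This equivocation is exactly the kind of thing that lets a cheating mixer move votes around while still producing an accepting shuffle proof.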
So these are three basic examples of the three different kinds of things we found; there were several others. The problem, once we had found all this, was: okay, what do we do with it, how do we report it? As I said, we did not want to register with the review process, because we did not want to sign this unlimited NDA. So what we did was to send an email to the Swiss Federal Chancellery saying: we found some code on the internet, we don't know if it is authentic, so please have a look and tell us what to do. We had some discussions, and two weeks later we made simultaneous public statements: the Swiss Federal Chancellery, Swiss Post and us. Thanks to this public release, in particular by Swiss Post, we learned that the Pedersen commitment issue had been spotted by at least two other teams: Rolf Haenni from Bern and Thomas Haines from Norway. The next day we learned something different, which came as quite a surprise to us. We saw in the press that the New South Wales Electoral Commission in Australia confirmed that iVote, their internet voting system, contained the critical Scytl crypto defect found in Switzerland, and that voting system was to be used in the next few days after this public press release. The system was not publicly available, so we had no idea what was inside it, and what puzzled us was the next sentence: it was declared to be unaffected and safe for the upcoming state elections. We could not understand how you could contain this kind of critical crypto defect and still be completely safe for the upcoming state elections. As it turns out, despite the fact that they were "unaffected", they still patched their system and then ran the election. So that was Australia. We kept reporting more attacks as we found them, and at the end of March the Swiss Federal Chancellery and Swiss Post announced that the system would not be available for the upcoming elections. They had almost two months before the elections and decided "we don't have enough time to fix this", while Australia decided "in two days we can fix it": different options were taken. A team at Bern also found more issues by inspecting the code. For instance, remember that the printing service had to randomize the order of all the ciphertexts that are sent to the server, to make sure that when you decrypt one of the return codes you do not know which candidate it is for. They spotted that this shuffling had been overlooked: there was no shuffling at all, meaning that the server would basically know to which candidate each decrypted code corresponded. So the outcome was that the election in Australia was completed, while the Swiss Federal Chancellery decided to stop everything and completely review the trial and auditing process for this system: two very different directions. To conclude, I think that the Swiss Federal Chancellery's ordinance that regulated the review of the internet voting process did a lot of good. It gave the public and researchers a view of a very complex and sophisticated voting protocol implemented for real-world government elections, which is always interesting to have. It made it possible to spot many critical issues before any actual use in Switzerland, which is good. It also benefited other countries: at least it made it possible for them to know that there were issues with their voting system, even if those countries decided not to publish anything. Basically, the conclusion is that we need more ordinances like this to
have better voting systems and better elections. Thank you.

I was curious: when the voter encrypts, say, a yes, and then the second component, is it required that the two inputs match? Suppose the voter wanted to make them differ.

The voter is supposed to provide a zero-knowledge proof that the yes here matches the yes there, or that there is a no on both sides. So the voting client is supposed to prove that this is the case.

So it is still possible for the voter, in the end, to prove to someone that they voted a certain way?

Yes.

If there are no more questions, we'll thank the speaker and break for coffee.