So it is my great pleasure to announce the third invited talk of the Asiacrypt 2015 conference, and that is the IACR Distinguished Lecture of Phil Rogaway. Phil, as everyone knows, is a professor at UC Davis. He's had an extremely distinguished career in cryptography, written some 150 papers, many of which are incredibly highly cited, and he's had a great impact on the field. He's going to give us all a lot of food for thought today, I think, with his lecture on the moral character of cryptographic work. Thank you, Phil.

Thanks to the IACR board for inviting me to give today's talk. I guess they take a bit of a risk when they invite Phil to give a talk; you have no idea what might come out. What I'd like to talk about today is something I've been obsessing over since the summer of 2013, and that is the moral character of cryptographic work, and the corresponding moral responsibilities of cryptographers and of the cryptographic community as a whole.

I'd like to set things in context by discussing the social responsibility of scientists in general. I think our current conception of the social responsibility of scientists emerged during World War II and its aftermath, and three particular events, I think, were especially formative. The first of these was the experience of the atomic scientists. After the war, many of these physicists, driven by a sense of culpability for what they had unleashed on the world, became quite politically active. Events like the unveiling of the Russell–Einstein Manifesto were a landmark in activism among scientists; that particular one gave rise to the Pugwash movement and eventually to Joseph Rotblat's Nobel Peace Prize. A second important historical event was the Nuremberg trials, immediately after the conclusion of World War II.
Remember that these trials began with the prosecution of twenty physician researchers for experiments on humans, often quite macabre and routinely fatal. The defense proffered that the physicians were only doing their jobs. This was pretty much universally rejected as both a legal and a moral argument, and many of the physicians were in fact executed.

The third historical event, occurring during this post-war, Cold War period, was the rise of the environmental movement. Rachel Carson's book Silent Spring, I think, was especially formative. She painted a picture of man's future dystopia: not going up in mushroom clouds, but a slow poisoning of our environment through the overuse of pesticides.

I think in the rise of this new thinking there appeared what one might consider a kind of democratization of responsibility. Scientists and engineers had to be responsible for what they did, because if they didn't feel responsibility we'd end up with a world of nightmare bombs and gas chambers and human experiments and a dying, poisoned world. I think there are three basic tenets of this ethic of responsibility for scientists and engineers. The first of them is not to use your work to contribute to social harm. The second precept is that you actually should use your work to contribute to the social good: it's not enough to do no harm, you're actually supposed to do good with what you know. And the third precept is that these two imperatives don't stem just from your role as a human being, but as a consequence of your specific training and professional role. In our case the obligations would stem from our being cryptographers, and computer scientists, and scientists and technologists more generally, depending on how far out you go. And I believe that by the 60s and 70s and 80s this notion of an ethic of responsibility for scientists and engineers had really become the doctrinal norm.
For example, professional codes of conduct like the ACM code of ethics and the IEEE code of ethics would embody this ethic of responsibility; the first two imperatives I listed are the first two imperatives of the ACM code of ethics. There was a rise of non-governmental organizations, things like Computer Professionals for Social Responsibility and the EFF, and these organizations were interested in science being moral. And I think you could include among these organizations the IACR itself: our founding purpose speaks not only of advancing research in cryptography and promoting the interests of IACR members, but also of serving the public welfare. This too is an important part of our mission.

In this light, I think the figure of the good scientist became a kind of cultural icon. Albert Einstein and Richard Feynman and Carl Sagan and Jonas Salk: these people were revered not only for their scientific work, but also because they were seen as somehow deeply humanistic in their beliefs. And yet, for all that I've said, I don't think that very many scientists and engineers really took this ethic of responsibility to heart. Let me give some examples. Throughout the entire Cold War period in the United States, it was never difficult to find the hundreds of thousands of scientists and engineers who were directly engaged in building the munitions systems that were killing people by the droves. Universities like my own have actually run nuclear weapons design labs for years; the University of California ran all of the United States' nuclear weapons design work. When I speak to students, and I've been advising undergraduate students for more than twenty years, I've noticed that a concern for the social obligations they may possess, or the impact their potential employer exerts, is never a consideration in how they decide on employment.
Almost invariably it's a question of what I will get out of this job, and not what I will somehow contribute to the world as a result of my work. In a kind of odd turn, I think in academia it's now become a common notion that having a normative vision is actually something inappropriate. Stanley Fish, a well-known literary critic, professor, and dean, advises professors to keep their ethics far away from their academic work. Turning a phrase of Marx on its head, he says our job is not to save the world but to interpret it. In the last few years, as we've been recruiting faculty members into my department, I always ask candidates to speak on the social responsibility of scientists and engineers. Many of them look at me like a deer caught in the headlights, not quite sure what such a question could even mean. One data-mining candidate who came in a couple of years ago, and whose work seemed to me to be an amalgamation of DOD-funded work for socially reprehensible purposes, quickly answered that she felt no social responsibility whatsoever. "I'm a body without a soul," she said, as though this were somehow an okay thing to say and to be.

But I guess none of this would actually matter if what we did didn't implicate politics, and perhaps it doesn't. Certainly there are lots of objects that directly implicate politics. I don't think anyone would claim that objects like the ones I'm showing are apolitical. This is a StingRay device, which pretends to be a cell phone tower and vacuums up all of the cell calls in some region; they're used routinely by law enforcement and by others. This is a drone control station, so that the United States can kill Arabs from a safe distance. And this here is a brochure for a product by Hacking Team that touts that they can monitor hundreds of thousands of individuals simultaneously.
No one would claim that products like these are apolitical, because they're quite ostensibly intended to increase the power of authority. And yet you might think that cryptography is different. After all, cryptographic work can be quite mathematical, and if you open up the proceedings of a conference like today's, it looks about as political as category theory or something, right? It's not ostensibly political. So part of my mission in today's talk is to show you that stuff like this actually is political, that it does embody political sentiment. Maybe I should say that when I speak of politics today, I'm really speaking of who has what power in society.

The claim that cryptography is political: well, I think in some ways this is a claim so obvious that only a cryptographer might fail to see it. But that's what we are, so I should justify it. I think part of the difficulty is that there are multiple views of what the cryptographer is. To an outsider, cryptography is probably that which is depicted by the popular press, in movies like A Beautiful Mind and Sneakers and such, and in these fictional depictions of cryptography, cryptographers are the brilliant and handsome mathematicians that power wants to have on its side. I'm happy to report that we're these heroic figures who almost always figure well in the films. A little crazy, perhaps, but I think that just adds to the luster. Similarly, the crypto hobbyist has probably read books like those of David Kahn or James Bamford, which show quite clearly that, historically, cryptography is about power. It's an area in which governments spend enormous sums of money, and not unwisely, because cryptography determines the outcomes of wars, and cryptography also undergirds economic and political maneuverings. But I don't think that any of us would confuse these fictional or historical depictions of cryptography with what we actually do.
We hack math, and that doesn't seem very political at all. So one explanation for why the outsider might see cryptography as political and the insider might see it as not at all so is because of these two archetypes of what the cryptographer is: is he some kind of mathematician, or some kind of spy? But I don't think this ultimately explains very much. For one thing, cryptography actually used to be more political. Here's Whit Diffie speaking at the Newegg trial. He's talking of his wife, and he says: I told her that we were headed into a world where people would have important, intimate, long-term relationships with people they had never met face-to-face. I was worried about privacy in that world, and that's why I was working on cryptography. And I believe Whit means this. In his follow-on work and in his work with Martin Hellman, you see this again and again: in their concern over the key length of DES, or in Whit's book with Susan Landau on the politics of wiretapping, or in his introduction of forward secrecy.

Even more ostensibly political is basically the entire body of work by David Chaum. Remember that in 1981 he introduced the notion of mix nets, in his paper on untraceable electronic mail, and throughout his career Chaum has spoken of the sociopolitical aspects of cryptography. In a CACM article of 1985, Chaum writes: the foundation is being laid for a dossier society, in which computers could be used to infer individuals' lifestyles, habits, whereabouts, and associations from data collected in ordinary consumer transactions. Uncertainty about whether data will remain secure against abuse by those maintaining or tapping it can have a chilling effect, causing people to alter their observable activities. This seems quite prescient for something written in 1985.
One illustration of how our community has avoided the political is seen by looking at what happened to Chaum's body of work, particularly that induced by his 1981 paper on untraceable electronic mail, and comparing it with a contemporaneous piece of work, Goldwasser and Micali's classic paper on probabilistic encryption. Both papers appeared at roughly the same time, and they actually have very similar citation counts. Yet where they went differed enormously. You'll recognize the venues on the right here, where the work following Goldwasser and Micali went, as ours: this is Crypto and Eurocrypt and such. The papers on the left, the most important papers citing Chaum's work, don't go to any coherent community at all; they've been kind of scattered to the wind.

You could say there's a simple explanation for this phenomenon: that Goldwasser and Micali's work was rigorous, so it was easy to build a science on top of it, while Chaum's work was not particularly rigorous, so there would be obstacles to creating a scientific theory based on it. I don't think this explanation is valid at all. For one thing, Chaum's work would in fact support rigor, and the fact that it wasn't provided from the beginning doesn't really imply much of anything. Diffie and Hellman's paper didn't rigorously define trapdoor permutations, secure encryption, or digital signatures, but nonetheless these notions would be very quickly folded into the cryptographic field. Coming at it from another direction, multi-party computation for years and years lacked any sort of rigorous definitions, but it too was embraced as a truly cryptographic problem.
I think the real answer as to why Chaum's legacy and Goldwasser and Micali's legacy split is much more sociopolitical. In particular, the framing of problems within our world, within the flagship IACR conferences, is invariably scientific and technical, while the framing of the problems being addressed in Chaum's world is routinely social and political, and I believe our community is much more comfortable with the one framing than the other.

Now there is a community that has long lived at the nexus of cryptography and politics, and that of course is the cypherpunks. The cypherpunks emerged in the 1980s, and they believed that a key question of our age was whether state and corporate interests would eviscerate liberty through electronic surveillance, or whether instead the people would rise up and protect themselves through the use of cryptography. It's actually cypherpunks, and not cryptographers, who have been the strongest advocates for the use of cryptography. For example, here is Eric Hughes writing back in 1993: we must defend our privacy if we expect to have any. We must come together and create systems which allow anonymous transactions to take place. We are defending our privacy with cryptography. And here's a passage from Julian Assange: but we discovered something, a strange property of the physical universe in which we live. The universe believes in encryption. It is easier to encrypt information than it is to decrypt it. We saw that we could use this strange property to create the laws of a new world. And then finally, here's Edward Snowden. I'm not sure if he's a cypherpunk, but here at least he's perfectly reflecting cypherpunk discourse when he writes, echoing Jefferson: in words from history, let us speak no more of faith in man, but bind him down from mischief by the chains of cryptography. You know, when I first started to encounter discourse like this, I felt kind of uncomfortable.
You know, for one thing, it's not the way cryptographers talk, and for another, it seemed like they were kind of over-promising: there were all these obstacles to the real use of cryptography, like malware and side channels and subversion and flaky hardware and software and so on. But what I've come to appreciate is that top cypherpunks understand these limitations very well. They actually know a lot more about building systems than I do, and yet they believe that, despite these limitations, cryptography can still be formative in re-architecting the politics of our world.

And yet some of the cypherpunk discourse seems to implicitly assume that cryptography is going to favor the weak, and I want to make clear that that's not necessarily true. I'd like to look at a few examples. Let's start off with conventional encryption. You might think that conventional encryption necessarily empowers the weak, but it very much depends on how the encryption is used. The encryption has to be architected into some system, and if this system is a content provider, for example, streaming a film in such a manner that it can only be decrypted within a software or hardware boundary that the user has no realistic access to, then you have not empowered the user or customer here; you've empowered the content provider. And similarly, if you architect a cryptographic system in which the NSA has access to escrowed keys, then again you haven't empowered individuals, you've empowered authority. So you could just throw up your hands and say that even for something like encryption, it all depends on the subsequent architecture. And yet I don't think this is the right view. I think particular cryptographic problems do have tendencies, and the tendency of encryption, I do believe, is to support the empowering of ordinary people. After all, encryption directly supports freedom of speech. It doesn't require expensive or difficult-to-obtain resources.
It's enabled by a thing that's easily shared. An individual can, at least in principle, refuse to use backdoored technology. And even the customary language of encryption imagines this world in which ordinary people are afforded the privilege of secure communication: the Alices and Bobs of the world are to be so privileged. Coming at it from the other direction, when we do try to use encryption as a way to prop up authority, we encounter lots of architectural problems. You know, the Clipper chip failed quite miserably, and trusted computing didn't fare much better. In the end, I think conventional encryption does have this tendency to support the weak.

Let's look at identity-based encryption, IBE. The aim here is to allow a party to use an email address, for example, as a public key. It seems a wonderful convenience. But what is often under-emphasized is that this convenience is achieved through a radical change in the trust model: a user's secret key is no longer self-issued; it's issued by a trusted authority. IBE embeds key escrow, right? Indeed, it embeds a particularly strong form of key escrow, where the escrowing authority is able to obtain not only all keys presently issued, but all keys to be issued in the future. And even if you do trust that trusted authority, a state-level adversary now has an extremely attractive target to subvert, right? Descriptions of IBE rarely emphasize this change in trust model, and the trusted authority never seems to be called something like that; it's usually the PKG, or private key generator, which sounds more like an algorithm than an entity. And in papers, even that entity vanishes further into the background, because in the formalization an IBE scheme is a tuple of algorithms, and the entities that conceptually live behind those algorithms play no role in any formal theory.

Finally, let's think about fully homomorphic encryption.
In brief, fully homomorphic encryption allows you to outsource your data, encrypted under a public key that you own, to a service provider, who can then compute whatever you ask of it, returning the encrypted answer. The provider has no idea what it has computed, but you decrypt and learn the result. From a political perspective, this sounds utopian, right? This sounds great: you're disempowering the powerful entity and avoiding the kind of Faustian bargain that underlies cloud computing. But I would say this analysis is completely specious, because it's quite speculative whether FHE will ever evolve into something practically useful. And if you want to assess the political leanings of something that is really so speculative, you shouldn't just assume that it'll give rise to the touted applications. You should focus, I think, on what it does to us in the here and now. And then the story looks quite different.

I would say that FHE has produced plenty of excitement, but nothing of positive value to privacy. In media interviews and talks, leading theorists and program managers talk about the game-changing nature of this mathematics, but nobody seems to emphasize just how speculative it is, or to emphasize our vanishing privacy or our lousy computer security. And I think this has consequences. It misleads the public about where exactly we stand right now, it shifts financial resources away from areas more likely to have social utility, and it encourages bright young researchers to work in fundamentally impractical directions. And perhaps worst of all, it provides cover to the strongest opponents of privacy, namely intelligence agencies, who can claim to be working hard to create a more secure world while nothing that actually threatens their interests is being done. In the end, I think it helps keep harmless academics harmless.
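The outsourcing workflow just described can be sketched with a toy additively homomorphic scheme: a one-time pad mod N, where adding ciphertexts adds plaintexts. To be clear, this is emphatically not FHE and not secure if a key is ever reused; the scheme and all names here are illustrative only, chosen to show the shape of the client/server interaction in which the server computes on ciphertexts it cannot read.

```python
import secrets

N = 2**64  # toy plaintext/ciphertext space

def keygen():
    return secrets.randbelow(N)

def enc(key, m):
    # One-time pad mod N: additively homomorphic, single use per key.
    return (m + key) % N

def dec(key, c):
    return (c - key) % N

# Client encrypts values under fresh per-message keys.
keys = [keygen() for _ in range(3)]
msgs = [17, 25, 100]
cts  = [enc(k, m) for k, m in zip(keys, msgs)]

# Untrusted server adds the ciphertexts without learning the inputs.
ct_sum = sum(cts) % N

# Client decrypts with the sum of its keys and recovers the true sum.
total = dec(sum(keys) % N, ct_sum)
assert total == sum(msgs)  # 142
```

A real FHE scheme would additionally allow multiplication of ciphertexts, and would be public-key, so that the server's clients need not pre-share pads; this sketch only conveys why the server learns nothing about the data it processes.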
Of course, none of this matters if mass surveillance isn't really a threat to us, or if there's nothing we can really do about it. I guess I lived my life kind of implicitly assuming this, until that summer of 2013, when somehow all of this stuff really started to capture my attention. I would read the news stories and the underlying primary documents quite religiously, producing, I think, much more stress than insight. And it was very complicated. A year into the revelations, ProPublica and the ACLU produced this lovely chart of the 54 programs and revelations that they thought were most important to have come out in the prior year. I hope that helps clarify everything that's actually going on. It doesn't, of course, right? There's too much here, and when you scratch the surface you actually still don't know much, because the details of these programs remain quite obscure. What I finally realized is that this complexity is itself an application of tradecraft. The combination of extreme complexity and extreme secrecy is a really toxic mix that keeps us from making good progress, or even from intelligently criticizing what's going on. And I really think we don't understand what's going on, and that this is perhaps the primary insight gained from this mountain of Snowden revelations.

When I say we don't understand, I mean it at a quite basic level. You know, I pick up the phone and make a call. I have no idea how many copies of this communication are kept, or where they're kept. I have no idea what sort of data analytics are being performed on it, now or in the future, right? Because this may be maintained and mined years hence. I have no idea what other pieces of information our call will be combined with. And I don't know if something I say is going to somehow trigger a human analyst at some point, or a tax audit, or some Hoover-style dirty tricks. Really, it's a mystery. It's made the telephone a frightful object. I don't have a cell phone.
I think there's another problem, and that's that when we think about where we stand, we often fall victim to an extraordinarily effective framing by law enforcement of what the underlying problem is. The law enforcement framing says that privacy is a personal good, that it's about your desire to control the personal information about you, and that security, on the other hand, is a collective good: it's about living in a safe and secure world. Unfortunately, the framing goes, these two things live in conflict, and we have to find the right balance. Modern technology has been a boon to one side, the privacy side, at the expense of the security side, and because of this the bad guys are going to win. And the bad guys, they're very bad: they're terrorists and murderers and money launderers and child pornographers, and we now run the risk of going dark, where our world will essentially be like locked closets everywhere. It's a beautifully crafted public relations campaign that works very well with underlying human fears, right? Implicit in this is fear of crime, fear of losing our parents' protection, and even fear of the dark.

And there's a completely different framing, of course. I'll call this the surveillance studies framing, not doing justice, of course, to the enormous variety of views within surveillance studies. Among the commonly heard tenets are that surveillance is an instrument of power, an apparatus of control, and that power in particular doesn't have to be in your face to be effective: often the most useful forms of power are exercised quite subtly. Also, that while surveillance is nothing new, technological changes have given governments and corporations an unprecedented capacity to monitor everyone, and furthermore the marginal cost of monitoring just one more person has gone to nearly zero.
That governmental surveillance is strongly tied to cyberwar and to conventional war, and that the agencies and individuals in charge of one, at least in the United States, are usually in charge of the other. That the law enforcement framing goes wrong when it pits privacy as a personal good and not a social good, because privacy routinely is also a social good, and that it goes wrong again in viewing these two things as in conflict, when at least as often privacy and security support one another quite well. That mass surveillance in particular tends to produce people who are conformist, fearful, and ultimately boring, and at a sociological level it stifles dissent. And finally, that surveillance is something that's going to be very hard to stop, because of the confluence of interests here, and that our field, cryptography, offers at least a little bit of help.

I'd like to comment that, from my reading of the literature, excessive surveillance routinely, perhaps inevitably, becomes political surveillance. I'm including here a copy of the suicide letter famously produced by the FBI, trying to encourage Martin Luther King Jr. to commit suicide, accompanied by audio tapes of extramarital affairs of his. And during the 1960s, U.S. universities were thoroughly infiltrated with informants who reported on student activists to their FBI handlers. More recently, surveillance has become a tool for assassinations and for imprisoning dissidents, and surveillance, as part of what's sometimes called Miami-model policing, has become important in ensuring that protests are a very intimidating thing to go to nowadays. You will be photographed, your cell phone monitored, your car's license plate recorded, and then you'll be put in a cage, with tear gas and rubber bullets fired at you. And yet I don't want to suggest that all of these intellectual reasons are, in some sense, what really undergirds my disdain of mass surveillance.
Bruce Schneier, in his typically pithy way, says that animals don't like to be surveilled because it makes them feel like prey, while it makes the one doing the surveilling feel like, and act like, a predator. I think at an instinctual level we know that constant monitoring is not consistent with a desirable human condition.

So what can we actually do about it? Well, I think one useful way of conceptualizing it was provided by Arvind Narayanan's taxonomy of, I guess it's really, applied cryptography, where he viewed cryptography as partitioned into crypto-for-security and crypto-for-privacy. Crypto-for-security is cryptography that benefits commercial interests; it benefits commerce. It's the kind of cryptography that's in GSM phones or SSL. Crypto-for-privacy is the kind of crypto that intends to have social or political ends; it's the sort of crypto that's in Tor or Signal. Narayanan suggests that crypto-for-security has done great, that it has actually worked well enough to secure our world to an adequate extent that commerce is thriving, whereas crypto-for-privacy has been really a failure, and I think he's right about that. I'd like to enhance his taxonomy just a bit, though, by saying that most of what we do isn't really well characterized as crypto-for-security or crypto-for-privacy; it's kind of crypto-for-crypto. By this I mean that it continues the program worked out by cryptographers, but it's not clear that it will eventually help either crypto-for-security or crypto-for-privacy. At some level, I believe that every field eventually becomes a little bit inward-looking, and that some degree of this is actually necessary. But in cryptography, if you become excessively inward-looking, this actually starves out an important social need, which in this case is crypto-for-privacy.
I think crypto-for-crypto has blossomed to such an extent, and we're a very small community, that there's not a whole lot left for privacy. So I wanted to give you a couple of examples of problems. I'm running later than my notes say I should be running, so I'll be very brief. This is a problem, still very formative, that I've been thinking about, and here I'm just trying to give you a few examples of crypto-for-privacy problems, ones that you won't have known of because they're new. Here, Alice wants to send a message to Bob, an email let's say, but Big Brother is watching all of the communications. How can you effectively do this? It's very difficult to do nowadays; in fact, even the Mixmaster kind of cypherpunk creations are no longer operational. So here's a suggestion for a high-level architecture. Alice drops her encrypted mail at this untrusted server X; that's how she sends mail to Bob. The server X just retains what's given to it. And when Bob is ready to receive his mail, he makes a request to the server based on his secret key.
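As an illustration of the drop-and-fetch interface in this architecture, here is a strawman sketch in Python; all names are hypothetical and the "encryption" is a toy. Note that this strawman captures only the storage interface: unlike the scheme being described, Bob's fetch request here plainly links him to his mailbox, and making the request reveal nothing about Bob's identity is precisely the cryptographic problem.

```python
import hashlib, hmac, secrets

def prf(key, data):
    # Pseudorandom function: HMAC-SHA256.
    return hmac.new(key, data, hashlib.sha256).digest()

class Server:
    """Untrusted drop box: stores opaque ciphertexts under recipient tags."""
    def __init__(self):
        self.box = {}
    def drop(self, tag, ct):
        self.box.setdefault(tag, []).append(ct)
    def fetch(self, tag, i):
        msgs = self.box.get(tag, [])
        return msgs[i] if i < len(msgs) else None  # None = "no i-th message"

def enc(key, m):
    # Toy stream cipher: XOR with PRF output (messages up to 32 bytes).
    pad = prf(key, b"pad")[:len(m)]
    return bytes(a ^ b for a, b in zip(m, pad))

dec = enc  # XOR is its own inverse

# Alice and Bob share a key k; Bob's mailbox tag is derived from it.
k = secrets.token_bytes(32)
tag = prf(k, b"tag")

server = Server()
server.drop(tag, enc(k, b"hello bob"))   # Alice drops mail for Bob

ct = server.fetch(tag, 0)                # Bob requests his 0th message
assert dec(k, ct) == b"hello bob"
assert server.fetch(tag, 1) is None      # no 1st message yet
```

Hiding which mailbox a fetch touches would need something like private information retrieval over the server's whole store, which is where the efficiency question discussed next comes in.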
But this doesn't reveal his identity in any way. The server responds with some value s from which Bob, whose request depended on a value i, should be able to recover the message m, if m happened to be the i-th message addressed to him, and if not, to recover an indication that there was no such i-th message. That's the goal. If you had no concerns for efficiency, it would be possible to solve this by providing Bob the entire database; one would like to do better than that, and I've been working on a practice-oriented provable-security treatment for this problem, and protocols that hopefully will do better.

Here is another problem that I like, also kind of crypto-for-privacy, and it attempts to deal with APTs, advanced persistent threats, that might be sitting on your system. There is a nice explanation of the problem, from a panel discussion at RSA a couple of years ago, that I think I'll skip reading. We want to create keys that are enormously long, so that if an adversary is trying to exfiltrate them, it'll have its work cut out for it. Nonetheless, we want to make sure that these long keys don't make impractical the cryptography that you're basing on them; we need fast operations that deal with long keys. What Bellare, Kane, and myself have found is that there is a really fascinating information-theoretic problem that underlies this. You have this enormously long key K, and you let the adversary learn some fraction of the bits about it: say, for a terabyte of key, I let the adversary learn half a terabyte of information about it, computed in any way it likes. At that point, I point to p randomly chosen positions in the key and ask the adversary to predict all of those values. What's the best the adversary can do? We've been able to analyze this quite precisely, and the approximate answer, if the adversary can learn half the key, is that its advantage scales inverse-exponentially with a strange constant that depends on the fraction of leakage and the binary entropy function. And there's lots
more interesting crypto-for-privacy problems, and I've listed a few of them here that I really like, going beyond the most obvious examples of mixnets and Tor and Bitcoin. My first imperative, then, that I'd like to suggest, is that you should really attend to problems of social value at least some of the time, and do this kind of anti-surveillance research. This is a good activity for cryptography, and it won't lead to stuff that's necessarily boring; there are really fascinating problems that have social utility as one of their consequences.

I'd also like to say that, in speaking with cryptographers about why they're working on what they're working on, I often hear answers that are extraordinarily unconvincing: people are working on the problem they're working on because, ultimately, they know how to do this kind of work, and it gets published, and people have done this sort of work before. These are bad reasons to be spending your life on something. So I'd encourage you to really be introspective about your problem selection. Introspection takes time, and we seem to have a culture in which there's never enough time. I think it's okay for people to be writing fewer papers; I think as a community we should be producing a lot fewer papers, trying to produce papers of more relevance, and particularly some papers of significant social relevance.

Another suggestion I'd like to make is that it's not only what problems you consider, but also how you approach them. I think the approach that Mihir and I have been practicing and advocating for a very long time, what we call practice-oriented provable security, is a promising approach for dealing with anti-surveillance technologies, though I won't try to justify that; the two problems I just described as recent work are both done in this framework. And I'd like to suggest applying practice-oriented provable security not just to crypto-for-security, in Arvind's
language, but to crypto-for-privacy. There are a lot of aspects to practice-oriented provable security, and it's a completely different talk to try to explain them, but I'd like to mention a little bit on this last one: this kind of condemnatory attitude I routinely feel towards non-standard models. There has really been a quite extraordinary disciplinary narrowing of our field. Whole areas of inquiry (things like traffic analysis and information hiding and symbolic models and logical models and connections to programming languages, and many more) have been pushed to the margin, not really considered a core part of cryptography. This is entirely a social construct; these things are cryptographic when one thinks about what cryptography is supposed to entail. And I would like to encourage people to be open about other models and other ways of looking at problems. Sometimes it verges even on silliness, like when people won't use the word "proof" for a proof in the random-oracle model. All proofs in cryptography are proofs with respect to some particular model and definition, and those models and definitions should always be viewed as suspect, not just in that case but in all cases where we give definitions in cryptography. We should understand that models and definitions are very much a dialectical inquiry. I like this quote of George Box, the famous statistician: all models are wrong, but some are useful. And in cryptography, I'm afraid, it's very hard to ascertain utility, maybe even harder than in statistics, because the definitional enterprise in cryptography sits at the juncture of math and aesthetics and philosophy and culture and the artifacts that are eventually produced. And, in my view, so situated, dogma is a disease.

I want to talk a little bit about military funding of cryptography, and I apologize that this and the next couple of slides are very much from a US perspective. It's hard to ascertain the
percentage of cryptographic work nowadays which is funded by the military; I can't find any sort of composite numbers, but it certainly seems to be escalating. This chart shows the percentage of papers at Crypto that acknowledge US funding and that include, among that US funding, DOD funding. In the period from about 2000 to 2010 it's under 15% on average, and in the period from 2012 to 2015 it's over 65%. So there has been this huge increase, it seems, in the fraction of papers that are acknowledging DOD funding, and DOD grants tend to be much larger than NSF grants, so my expectation is that most funding in the United States nowadays actually is coming from the military. And I think this is inherently corrupting, in ways that people don't want to acknowledge or talk about. A lot of people think that they can take their money from anybody and it won't affect them, because they're better than that, and I think this is a very naive view of things. Our sponsors change our values in ways that we don't necessarily see, and they also reflect our values. And the values of the military funding agencies are definitely not my own values. DARPA, I think, is almost certainly the largest DOD source of funding in the United States, and here's DARPA's mission; these agencies don't hide their purpose. DARPA's mission is to invest in the breakthrough technologies that can create the next generation of U.S.
national security capabilities. They also speak often, having been born following Sputnik, of avoiding technological surprise, and of creating surprise for America's enemies. I would like to suggest that if the institutional values of those that are funding you are fundamentally at odds with your own values, then you probably shouldn't be taking their money. Funding in cryptography, as throughout the sciences, is used to redirect it in the directions that power wants, and it would seem that the direction that the NSA prefers (and the NSA advises all military agencies in the United States, at least, about cryptographic funding) is to have not very useful work; again, to keep cryptographers harmless. I'm sure many of you have seen these quotes from the lovely Eurocrypt '92 trip report that was released under a Freedom of Information Act inquiry. I'll read one or two: "Three of the four last sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us. This is good news. There were no proposals of cryptosystems, no novel cryptanalysis of old designs, even very little on hardware design. I really don't see how things could have been better for our purposes." My own experience with the NSA was that when I was to receive my CAREER award back in 1994, the NSA apparently tried to kill it. The NSF program director felt offended by what she viewed as this inappropriate request, and not only said no but picked up the phone to tell me. I think we should all strive to be doing work that the NSA would like to kill.

My conclusions are to think twice, and then maybe one more time, about accepting military funding; make sure it's actually consistent with your values. And, more than that, regard ordinary people as those whose needs you ultimately aim to satisfy with your work. Most of us, I think, are trying to satisfy one another; this is the paradigm in crypto-for-crypto. And beyond that, I think we've often
internalized that we'd like to make the world somehow a safer place for electronic commerce, or for other commercial interests. These aren't the only values.

There is this long tradition in cryptography of cuteness. We often spin fanciful tales to explain the cryptographic problems we've imagined (my personal tales usually involve space aliens), and in slide presentations it's routine to depict our adversaries like this little fellow: a cute devil with horns and maybe a pitchfork and a tail. I've never liked this, but after the Snowden revelations it really started to vex me in a new way. Crypto is actually quite hard, hard to understand at least for me, and I think that when we try to use these cutesy approaches to explaining our results, we don't make them easier to understand; we actually just add an extra layer of obfuscation, and we send this implicit message that I'm so smart that I don't even have to think about actual human concerns, that my problems are entirely made up. And it's worse than that, because I think this cartoon-heavy cryptography reshapes our internal vision of who the adversary is. If we're thinking about our adversary as a cute fellow with a pitchfork, versus thinking of our adversary as a $53 billion military-intelligence complex, we will see the world in very different ways, and the kinds of problems we come up with will be very different problems. So I'd like to suggest that we should stop with the cutesy pictures, and we should take our adversaries quite seriously. I think we should in particular try to figure out what research is going to frustrate adversaries like the NSA and GCHQ, and we should do some of it.

I have several more suggestions. Many of us are academics, and at least post-tenure we're supposed to be able to do whatever we damn please. This is not frequently exercised, and in fact throughout most of the sciences it seems that academic freedom is maybe not even very necessary or important. But here we are in an area, anti-surveillance research if you want to think of it that way, for which academic freedom I think is actually useful. And academic freedom, if not exercised, is going to wither and die; it's already very much in decline. I think we do a social good when we exercise our academic freedom here.

I remember reading this paper by Dan Boneh and his students, "The Most Dangerous Code in the World" (how many of you have seen it?), in which they describe this universe of middleware that can subvert the verification of cryptographic certificates that most cryptographers and security people assumed to be present. And I remember feeling this sense that there existed this big piece of stuff out there, whose existence I didn't even know of, that was highly relevant to our communications ecosystem as a whole. When the Snowden revelations began to come out, and people started to speculate about how they might be subverting this or that, it occurred to me that, to a large extent, power probably doesn't need clever cryptanalysis or even social engineering; they just need to get a good systems-level view of what's actually going on in the real communication architecture, and then break the things that are obviously wrong. I'd like to encourage people, especially young people who are still highly plastic, to try to get this systems-level view, which I understand is not easy to get, but which I'm sure can lead to much more relevant cryptographic work.

I think we should be using privacy tools; very few of us do. This is an instance of the pot calling the kettle black: I never use PGP, I just recently started using Tor, and so on. But I think we pay a significant price for not ourselves being users of this kind of privacy-enhancing technology. If we were forced to endure the inconvenience of these difficult-to-use tools, we would be more strongly motivated to try to weave them
into the communication infrastructure in a way that dumb folks like us would find painless. And I'll say that the first problem I described, that server-mediated electronic-mail problem, I thought about in the days after starting to use a system called Pond that Adam Langley invented. I think a lot of ideas will spring from simply becoming users of this technology. One colleague told me that cryptographers' failure to use privacy-enhancing tools was, to him, like discovering that your doctor smoked two packs of cigarettes a day and was an intravenous drug user on top of that, and so on. I think it's not a bad analogy.

A lot of us like to think of the internet as some kind of wonderful commons. It's not really a commons at all. There are specific things on the internet, like Wikipedia and Creative Commons and the free-software movement, but most of us, most of the time, are using on the internet services provided by a very small number of very powerful companies. I think we should be doing our best to replicate these services in a more secure way, in a way that's out of reach of power, and there will be important cryptographic problems in trying to achieve this end. In short, I think we should be trying to create amongst ourselves some kind of useful cryptographic commons. The cypherpunks were already advocating this decades ago, in their creation of anonymous remailers, for example; that was very much their program. I think we could start small. Wikipedia is a commons that we all employ; maybe it could become a routine part of IACR conferences and workshops and doctoral meetings to spend an afternoon or an evening gathered around, hacking on Wikipedia pages, so that cryptography as reflected in Wikipedia is really state-of-the-art and beautifully captured.

I have a bunch of conclusions to give you, and no time; I have one minute, which I'll overuse. Time is a flexible thing. My view is that
cryptographers are twice culpable for the surveillance morass in which we find ourselves. First, it's computer scientists that really created this enormous communications infrastructure and then turned it into this amazing tool for surveilling us all; and on the other hand, cryptography offers at least some set of tools, and some hope, for turning this around. I don't think I'm an alarmist with respect to the bizarre dystopias that filmmakers and novelists like to put forward. I don't worry about nanotechnology turning our biosphere into grey goo; I never worry about sentient robots coming and deciding that man is a pet at best. But this, I think, is a very realistic dystopia: the kind of creeping surveillance that grows organically in the public and private sectors, that becomes increasingly comprehensive and entwined and predictive, that becomes an instrument for assassinations and political control and the maintenance of power. This kind of vision doesn't just seem possible; it seems to actually be happening before our eyes, if it's not already here.

I'm not terribly optimistic. I think there are tremendous forces aligned to make sure that we don't make much progress in these directions. And yet I think there are some reasons for hope. Cryptographic papers inspired by the Snowden revelations are starting to appear apace. There are several people in this room who have been going around giving talks about post-Snowden cryptography, and panel discussions on this matter, and I think that we are starting to feel connected to this problem in ways that we didn't prior to 2013. And the cypherpunks are still very much on the stage, creating anti-surveillance technologies. WhatsApp, for example, now has a billion users; that's a lot of apparently good cryptography in a lot of people's hands. The cypherpunks are sometimes described as doing crypto with an attitude, and I think that makes some of us uncomfortable, because it might not quite be our
day-to-day attitude. But more than anything else, what the cypherpunks have wanted is crypto with values, and I think crypto with values is what we as the cryptographic community are most sorely in need of now. Finally, I'll end with this. There is this quote, often misattributed to Pericles, that says that just because you don't take an interest in politics doesn't mean that politics won't take an interest in you. As cryptographers, we can ignore the political and moral dimensions of our field, but it won't make them go away; all it'll actually do is make your own work less relevant and less sociologically connected. My hope is that a bunch of you, especially young people, will recognize this as a starting point in your work, and work to develop an ethically driven vision of what you actually want to accomplish with your cryptographic work.

Right, thank you very much for that inspiring call to arms, and I think there'll be some questions. I'll pass it on to Adi.

Thanks so much for the talk. Unfortunately, I disagree with much of what you are saying. I'll take one example only, and this is your assumption that we can predict what will happen to our new inventions in the future; I'll give you two examples. You mentioned that my invention in the mid-80s of identity-based cryptography had an element of key escrow, which is bad. I absolutely agree that it has an element of key escrow; the question is, should I have refrained from publishing this idea after it occurred to me? And you have to remember that in 2001 it led Boneh and Franklin to create pairing-based cryptography as a solution to the problem, and this was used later to do lots of wonderful privacy-enhancing stuff. So while you may criticize the original idea as not going in the political direction you are interested in, it was totally impossible to predict what would happen to it in the future. So self-censorship, or your last suggestion that every paper we publish should be tagged upon invention as being good or bad in principle, and
based on that you should publish or not publish it, is very problematic. My second example, which goes in the other direction, is Bitcoin. Bitcoin was invented as a wonderful decentralized system, going exactly the way you wanted. On the other hand, if you look at how Bitcoin is practiced today, it's totally centralized; there is a tiny number of large miners, and it's used to do all the ransomware. I can go on and on about all the bad things that happened. I personally believe it would be totally wrong for each one of us to tag his research ideas as being born good or born bad, and based on that to decide whether to publish or not to publish.

So, I appreciate that comment and that concern; I've struggled with it much myself. It is very difficult to ascertain the direction of science and technology, and I think that many of us use this as a reason not to pay close attention to where we think things will head. I am not suggesting that people self-censor their ideas because they think the political leanings are undesirable. But I think the simple act of keeping strongly in mind where you want things to go will tend, as a whole, to move us in that direction; not each individual piece of work, but somehow as a whole. If our community cares about the social contribution of what we are doing, I think as a whole we will move in that direction, and that's the most I can say about it. I would not minimize the difficulty of trying to be predictive about where our most well-meaning ideas might end up, or where our most concerning ideas might evolve.

Thanks, Phil, for a very thought-provoking talk; I enjoyed it very much, and I think a lot of us here did. Could I ask your opinion of the Internet of Things, and the opportunities for personal privacy in terms of the future of the Internet of Things, in 20 years' time, for instance? Certainly.
You know, I like this phrase, the Internet of Creepy Things. I think we are continually promised technological advances which are supposed to make life better, and the Internet of Things is among them: what a wonderful world we will live in when our toaster knows us and our refrigerator can make informed decisions about our eating habits, and so on. I myself have never understood how this vision of a technological future improves man's lot, and yet we seem to head in this direction anyway. I would like to see people be quite skeptical about advances, not only in the direction of the Internet of Things but in data mining and a host of technologies that we are closely connected to as computer scientists, which I don't really believe are likely to contribute to a more positive future for mankind. So the Internet of Things certainly carries with it many security concerns, but it also carries the potential of just making life an unpleasant space in which to be living.

So, I think sometimes we do work on stuff that the NSA is not interested in, but they are not interested in it because it's not in their mission. If I go back to the '92 Cryptolog, it suggests reading material to avoid listening to some talks, and this was very good, because there were three talks it describes as "three more snoozers", and those three snoozers were in a session on digital signatures and electronic cash. So I think the point is that they didn't actually realise that the talks they were ignoring, the ones that had nothing to do with encryption, were exactly the kind of things you want them to be worried about. We do work in the areas that the NSA doesn't want, and they ignore us.

So, Sahai told me yesterday that he thought the Cryptolog article evidenced large blind spots, things that the NSA effectively missed. I don't know. At some level I thought that that Cryptolog article pegged our community to a T, and that that's what really made us
uncomfortable. And yet I think it is also the case that intelligence agencies, too, have their own very particular way of looking at the universe, which no doubt gives them substantial blind spots. My suspicion, for example, is that they really understand nothing of provable security, and that does cause them to make errors that they wouldn't otherwise make. I remember, following the appearance of GCM and OCB and IAPM, all of these methods for authenticated encryption, the NSA threw in their own contribution for an integrated authenticated-encryption mode, Dual-something mode, I forget. It took me about an hour, I think, to break it, and I'm a lousy cryptanalyst. NSA representatives produced this proposal, which they proudly said they had been working on for a couple of years, to show that they hadn't really been undercut by us academics. It was so flagrantly wrong; but it was only flagrantly wrong if you had internalized a definition of what authenticated encryption was supposed to do. And I think the contribution evidenced that these authors, maybe the NSA, really had no definitional understanding of the problem they were attempting to solve, and that they probably pay a price for that. It's impossible to predict the future evolution of various types of research.

So, we're way over time for coffee, so I think we'll have one more very quick question and a quick answer, please.

Quick question: I'm under the impression that a very large part of crypto research today is funded by the government, maybe not the military, since even university funding often comes from the government, or it's funded by large companies. So do you think that if the research moves in the right direction you talk about, this funding may dry up?

It has been a problem across academia that we have relied more and more on government funding, and in the United States we were honestly warned that, as academic institutions eat more and more from this trough of government funding,
they would be corrupted; and you can say the same of corporate funding. I think in the ideal world academic institutions would have ample money that was really quite unencumbered, and in the real world we live in, this doesn't seem to be the model at all, and even less so as time goes on. I think it's a big concern that all of us academics should be unhappy about.

Okay, so I think we're going to go for a 20-minute coffee break, so we'll come back here at 5:35. But I'd like to thank Phil. Thank you.