Thank you all so very, very much for making the time to come today, and for the extremely kind remarks. Mike, Steve, it's such a thrill. And let me also say thank you to my wife, Roya; I just couldn't do any of it without you, my darling love. Let me also say how touching it was to hear from Zakir and from Andrew. Andrew was such a pivotal advisor for me at the very beginning of my research career, and Zakir has gone on to be the advisor to so many wonderful people himself. It really makes me think about the continuity we have within this academic enterprise, and how the greatest good we can do is the good we do through other people. I also want to say that this is a little bit intimidating, because these are big shoes to fill: this professorship was previously held by a fabulous colleague and a person I've long looked up to, and I'll do my best not to mess it up. Let me also thank the Bredt family for what they've done for this university. It's an honor to have your family's name associated with mine, and again, I'll try not to mess it up. So let me share my screen. They've asked me to say something remotely profound, and I'll do my best. Let me try to put a little of what I've been attempting to do through my science in context, and try to encourage others, especially the students here, to think about the world in a certain way, and to think about certain problems that badly need our help. First, I want to direct you all to think a little about what the scientific enterprise involves, if we think really broadly about what we do as scientists. There are really four things we spend our time doing. Knowledge generation, education, and tech transfer are three things we do a lot of around the university.
But one more thing that science does is infrastructure creation, and by infrastructure I mean the physical and organizational structures that are foundational to the operation of a society. Now, in computer science, we don't really spend that much time on infrastructure creation. There is one big piece of infrastructure we can claim credit for, the internet, one of the greatest pieces of technological infrastructure humanity has ever devised. But I would argue that we should be spending more time on this last branch of the scientific enterprise, thinking more about the ways we build and shore up the infrastructure of our society. In this talk I want to focus on one kind of that infrastructure, which I'm going to call the infrastructure of democracy: the technologies foundational to the operation of a liberal democracy like ours. There are two key ways I want to highlight in which technology is pivotal to the operation of a modern liberal democracy. There are the technologies that power and secure private communication, the modern-day post office, and there is the technology that powers and secures our elections themselves, the way we allocate power and decide who will lead. Computerization has created potentially existential threats to both kinds of infrastructure, through things like cyber warfare, mass surveillance, and disinformation. But we as computer scientists, as researchers, have the opportunity to help address these problems, and I'm going to point to two ways we can do that: by building new infrastructure for security, and by working to bridge science and public policy. All right, so first let me talk about work I did to build infrastructure to encrypt the web, and I mean encrypt the entire web.
In past work, I and many, many other researchers have highlighted lots of different problems with the cryptography we use to secure online communications, things like the TLS protocol that underlies every HTTPS website you visit. In that work we found all sorts of problems: problems with random number generators, problems with legacy cryptographic protocols, problems that were in some cases the result of earlier attempts, decades before, to build our cryptography so that the government could break it for surveillance purposes. But for all of this work, what do you want to guess was the most common vulnerability in the HTTPS protocol and its deployment? Plaintext. Just seven years ago, about half of all connections from web browsers to servers were sent completely unencrypted over internet backbones. That exposed these connections to mass surveillance, to censorship, to consumer profiling by advertising companies, even to attackers modifying that data to attack and harm third parties. It's hard to believe, sitting here in 2023, that as recently as that, half of the web was not encrypted. Why was that? Well, deploying encryption on your website was hard. You had to realize you needed it, and you had to go through a long and complicated technical process to get it. That process involved going to a third-party company called a certificate authority to have it vouch for your identity to the web browsers that visited your site, and it was a huge, complicated, manual process. And as soon as you finally figured it out and turned on that little lock icon on your site, well, a year later your site would start looking like this, because your certificate from that certificate authority would expire, just in time for you to have forgotten how you got it in the first place. Today, things are quite a bit different from that.
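To make the expiration problem concrete, here is a minimal sketch of the kind of expiry check a site operator would have had to remember to run by hand. It uses Python's standard `ssl` module to parse the `notAfter` date a certificate carries; the dates are made up for illustration, and this is not any particular tool's code.

```python
# Sketch: how soon does a TLS certificate expire? Assumes we already have
# the certificate's notAfter string in the format the Python ssl module uses.
import ssl

def days_until_expiry(not_after: str, now_seconds: float) -> float:
    """Days remaining before a certificate's notAfter timestamp."""
    expires = ssl.cert_time_to_seconds(not_after)  # parse cert date to epoch seconds
    return (expires - now_seconds) / 86400.0

# Illustrative example: a certificate expiring 90 days after a fixed "now".
now = ssl.cert_time_to_seconds("Jan 01 00:00:00 2023 GMT")
remaining = days_until_expiry("Apr 01 00:00:00 2023 GMT", now)
```

The point of the automation described next is that nobody should have to babysit a number like `remaining` at all.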
And that's because, well, my collaborators and I realized, and we knew this from day one, that there was a solution to this problem, and that solution was a form of extreme automation. We wanted to make it so that certificates could be obtained automatically, in just a few seconds, with no user interaction and at no cost to sites. My collaborators at the Electronic Frontier Foundation and Mozilla, and my students here at Michigan, got together and started a process that took many more years than we thought it would: building a new kind of certificate authority that could encrypt the entire web. Doing that was not easy. When we eventually wrote the paper, and we only got one paper out of this, it was one of the longest gestations of any research I've ever done. We had to explain to the reviewers why this was worthy of publication when we had already succeeded. I said: look at it as an experiment. If, when we started, we had just proposed this system in a paper, you would have laughed it out of the conference, because there was no evidence it could ever work. So we had to build it, prove that it could work, and use it as an experiment. Building it required creating a lot of different technical artifacts, pieces of infrastructure. We had to found and fund a nonprofit that could have people there all the time to run this thing. We had to design a protocol and take it through the IETF process as an RFC, so that we would have a standard, open way to acquire certificates. We had to build new infrastructure-grade software for the certificate authority itself, and we had to build software for clients to help them obtain certificates through this protocol. All of these different pieces we had to build. But the impact of that has been intensely gratifying. A few days ago, Let's Encrypt issued its four billionth certificate.
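The core idea of that protocol (standardized as ACME in RFC 8555) can be sketched in a few lines: the certificate authority challenges the applicant to publish a token-derived value on the site, then checks for it, proving the applicant controls the domain. The sketch below is a toy model of the HTTP-01 challenge; the dict standing in for a web server, and all the names and values, are illustrative rather than Let's Encrypt's actual code.

```python
# Toy model of ACME's HTTP-01 domain-validation challenge (RFC 8555).
# In real ACME the thumbprint is a base64url SHA-256 hash of the account
# key; here it's just an opaque string for illustration.

def key_authorization(token: str, account_key_thumbprint: str) -> str:
    return f"{token}.{account_key_thumbprint}"

# The "website": a dict standing in for files served under
# /.well-known/acme-challenge/ on the applicant's server.
site_files = {}

def provision_challenge(token: str, thumbprint: str) -> None:
    """Client side: publish the key authorization at the challenge URL."""
    site_files[f"/.well-known/acme-challenge/{token}"] = key_authorization(token, thumbprint)

def ca_validates(token: str, thumbprint: str) -> bool:
    """CA side: fetch the challenge URL and compare with the expected value."""
    served = site_files.get(f"/.well-known/acme-challenge/{token}")
    return served == key_authorization(token, thumbprint)

provision_challenge("abc123", "thumb")
ok = ca_validates("abc123", "thumb")          # proves control of the domain
fail = ca_validates("abc123", "wrong-thumb")  # a different account can't claim it
```

Because both sides of this exchange are software, the entire issuance loop can run in seconds with no human in it, which is the "extreme automation" the project was built around.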
And today we serve more than 363 million domains. As Mike mentioned, those include Wikipedia and the White House, but they also include NSA.gov. Sometimes I wish I had a dollar for every one of these certificates. I'm content to think that perhaps every one of those certificates has generated a dollar in value for society, even if we haven't been able to capture it, as a rough, approximate measure of the good we may have been able to do. If we look at how the use of HTTPS has changed on the web since we started, it's gone from about 40% on the day we launched to about 98% today. There have been many other contributing factors besides Let's Encrypt, but we hope we've been responsible for a part of this. Today we are the largest certificate authority in the world, by far, to the point that as a director I have to worry about antitrust. But I'm pleased that the, I believe, fourth largest certificate authority in the world, and James can correct me if I'm wrong, is Google Trust Services, which was built and operated by James Kasten, my former PhD student on this project. So again, the greatest good we can do ultimately comes through other people. Now, one thing a lot of people don't realize about Let's Encrypt is that encrypting the whole web was important for the direct good it did, but it was also meant to make another one of my infrastructure-building projects more likely to succeed. That's the Refraction Networking project, which aims to combat global censorship by essentially rewiring the internet so that sites are more difficult to censor. The way this works is that we put boxes at ISPs in friendly countries, and we make it so that any HTTPS connection from someone in a censored country that passes through one of these networks can be picked up by a box in the middle of the network and converted into a connection to a proxy server.
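The signaling trick behind those "boxes in the middle" can be sketched as follows: the client hides a tag in an otherwise random-looking protocol field, and the station checks every passing connection for it. Real refraction systems such as Telex and TapDance use public-key steganographic tags so no shared secret is needed; the simplified version below uses an HMAC under a shared key, purely to illustrate the idea, and none of it is the project's actual code.

```python
# Toy sketch of refraction networking's covert signal: a tag hidden in a
# "random" field (think: the TLS ClientHello random) that only a network
# station can recognize. Shared-key HMAC is a simplification of the real
# public-key tagging schemes.
import hmac, hashlib, os

STATION_KEY = b"shared-secret-for-illustration"  # hypothetical key

def make_client_nonce(session_id: bytes) -> bytes:
    """32-byte field: 16 random bytes plus a 16-byte station-recognizable tag."""
    prefix = os.urandom(16)
    tag = hmac.new(STATION_KEY, prefix + session_id, hashlib.sha256).digest()[:16]
    return prefix + tag

def station_sees_tag(nonce: bytes, session_id: bytes) -> bool:
    """Station side: recompute the tag and compare in constant time."""
    prefix, tag = nonce[:16], nonce[16:]
    expected = hmac.new(STATION_KEY, prefix + session_id, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(tag, expected)

nonce = make_client_nonce(b"sess-1")
tagged = station_sees_tag(nonce, b"sess-1")            # station redirects this flow
untagged = station_sees_tag(os.urandom(32), b"sess-1")  # ordinary traffic passes through
```

To a censor, a tagged connection is indistinguishable from any other HTTPS connection, which is exactly why encrypting the whole web first mattered so much.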
This technology, thanks to the now-ubiquitous encryption on the web, is able to pass more easily under the radar of censors. Today we're operating it at scale, and the client software is in the hands of millions of people in highly censored regions. ISRG, the nonprofit behind Let's Encrypt, has been working on a number of new applications too, and I think these give you an idea of other kinds of security infrastructure we can build. One of these projects, Divvi Up, aims to provide privacy-preserving analytics as a service. Another, Prossimo, aims to rewrite the core network-facing pieces of the software stack in memory-safe languages. We need to build more security infrastructure, and I encourage the students here to think about new problems in this vein. Now, I'd be remiss if I didn't also talk about another kind of infrastructure that I think our computer science community could be doing a lot more to secure: the highly computerized infrastructure that underlies modern elections. There are two broad classes of security risks. One of them we saw in 2016, when Russia targeted election infrastructure across all 50 states in cyber attacks, which we know happened because of the investigations of the bipartisan Senate Intelligence Committee and the special counsel. In 2020 we saw another class of attack: false claims of fraud, made by the president himself, who claimed that the election result had been hacked against him, in an attempt to retain power. Both real attacks and false claims that attacks happened are things that elections must be engineered to resist. That's part of what makes this such a difficult problem. I'll mention briefly my own role in resisting some of those false claims. Right here in Michigan, and I don't know how many people know this, on election night 2020,
we were, I think, the only large jurisdiction in the country where an entire county actually reported the wrong winner on election night. Antrim County, Michigan, a very red county in upstate Michigan, reported in its election night results that Joe Biden had won by thousands of votes, a result that was absolutely impossible, and everyone knew it from the start. The state very quickly corrected this and reported that it was the result of human error. But that was too late to stop a lawsuit by people connected to the Trump reelection campaign, which obtained from a judge the right to do a forensic investigation of the computer systems involved in the election, and within just a couple of days came out with an incredible expert report declaring that there had been massive fraud as a result of rigged Dominion voting machines. This was in December of 2020, even before January 6, and there were calls for President Trump to do something. Only in retrospect do we know more specifically what "do something" would have meant: the congressional commission that investigated the events of January 6 turned up a draft executive order, never issued, that would have authorized the Department of Defense to seize voting machines across the country, citing as its reason this bogus expert report from Antrim County, Michigan. So that's what's at stake. In the immediate aftermath of those events, the way Michigan responded was that our Secretary of State and Attorney General commissioned me to do an independent forensic investigation and respond to these claims. Unfortunately, it's much harder to do real science than to just make things up. It took a couple of months to do a thorough investigation of all the digital artifacts and really determine, to a high standard, what had happened. But I did.
And what I was able to show was that the reason for the errors in Antrim County was that election officials had made a last-minute change to the ballot design and had updated their central aggregation computers, but had neglected, through human error, to update some of their voting machines with the new design. As a result, in much of the county, President Biden's votes were thrown away: Biden got Trump's votes, Trump got the Libertarian candidate's votes, and so on down the ballot. You can reconstruct these errors exactly; there is no doubt whatsoever that this is what happened. The investigation also showed that there were still some down-ballot races in which other errors the election officials had made caused the results to remain wrong, including one contest, the Central Lake Village marijuana retailer initiative, which had failed as a tie on election night. When officials went back and re-scanned the ballots to try to correct the other issue, one ballot had gone missing, and so the initiative passed. But I was able to reconstruct the missing ballot and show that this contest was probably decided incorrectly. So I've been to Antrim County and visited that marijuana retailer; here it is. If anyone ever tells you that election glitches are never going to have consequences, well, here's living proof. Now, there is no evidence whatsoever that the 2020 election was hacked. But what we do know is that election infrastructure has very serious vulnerabilities. In fact, it's the consensus of the National Academies that there is no realistic mechanism to fully secure vote casting and tabulation computer systems from cyber threats. So our grand challenge is to figure out how to achieve security and public trust using inherently fallible technology.
It's fallible in the sense that every single time qualified independent experts have done a security review of a US voting machine, we've found vulnerabilities that would enable vote-altering attacks. For a variety of deep regulatory reasons, we just don't have a very high bar for this technology; it's maybe 20 years behind the state of the art. If you want recent evidence of this, just last year CISA, the Cybersecurity and Infrastructure Security Agency, the federal agency that oversees election security, issued its first-ever security advisory about vulnerabilities in election equipment, based on work I did with my former PhD student Drew Springall, who is now a professor at Auburn. We examined the Georgia voting machines shown here, a Dominion voting machine, because of course it's Dominion, and found, well, just awful, very serious vulnerabilities, including that we could completely bypass the cryptographic authentication mechanisms and remotely execute arbitrary code from the memory card as root. CISA said that these vulnerabilities present risks that should be mitigated as soon as possible, but Georgia's Secretary of State recently announced that the state will not patch them until after the 2024 presidential election is over. Because of course it won't. It will ultimately have taken longer than the US involvement in World War II for the state of Georgia to correct these nine CVEs. Now, what's the implication of that? Georgia's machines, as Professor Appel from Princeton mentioned, have a paper trail, but we say it's probably the worst kind of paper trail, because there's a vulnerable computer sitting between the voter and the computer printout that says how you voted. How likely are voters to notice if something is wrong with that printout because the machine has been hacked? People have very different intuitions about that.
And so my students and I wanted to actually try to measure this. We did a study where we set up a mock polling place here in Ann Arbor, at the public library, and asked more than 200 people to come and vote on real machines we had hacked so that they changed one vote on each printout. 23% of people failed to notice the cheating, and that means that in a real election, if it's a close election, enough people might fail to notice the tampering for the result to be wrong. So this is a serious problem. One more problem I want to point out, also in Dominion voting machines, is that this technology can also potentially threaten voter privacy. Last year, my students and I, along with Professor Springall, discovered a vulnerability in Dominion ballot scanners, because of course it was Dominion. To protect privacy, these scanners assign each ballot a random-looking identifier, which stays linked to the ballot even when jurisdictions publish things like scans of the ballots. And because of Stop the Steal and related pressure, there is now a lot of push for jurisdictions to provide that level of transparency and publish vote-by-vote data. My students and I discovered that these random IDs are in fact fully predictable, because they're chosen by a linear congruential generator, a method known since the 1970s to be unsuitable for use in security. With public information, anyone can reverse-engineer the generator and unshuffle all the ballots. So you can go from results that look like this, with a scan of every ballot, and figure out the order in which they were cast. And because of the current public pressure about election security, some jurisdictions publish all-day video of the polling place showing exactly who used the scanner, so you can align the photograph of the person using the scanner with exactly what was on their ballot. This also won't be patched in Georgia until after the 2024 presidential election.
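Here is a toy demonstration of why a linear congruential generator makes a poor "random" ballot identifier: anyone who can regenerate the recurrence can reproduce the whole ID sequence and undo the shuffle. The constants, seed, and ballots below are made up, not the vendor's actual generator, and for simplicity the attacker is assumed to have already recovered the parameters and seed (the real attack showed this is possible from public information alone).

```python
# An LCG produces each "random" value from the previous one:
#   x_{n+1} = (A * x_n + C) mod M
M, A, C = 2**31, 1103515245, 12345  # classic textbook LCG parameters

def lcg(seed: int, n: int) -> list:
    """First n outputs of the LCG starting from seed."""
    out, x = [], seed
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

# Ballots receive "random" IDs in the order they are cast...
cast_order = ["ballot-A", "ballot-B", "ballot-C", "ballot-D"]
ids = lcg(seed=42, n=4)
published = sorted(zip(ids, cast_order))  # records published sorted by ID

# ...but an attacker who regenerates the ID stream recovers the cast order.
id_to_position = {v: i for i, v in enumerate(lcg(seed=42, n=4))}
recovered = [name for _, name in sorted(published, key=lambda p: id_to_position[p[0]])]
```

With the cast order recovered, linking ballots to the all-day polling-place video becomes a simple matter of lining up timestamps, which is the privacy failure described above.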
All right, so what do we need to do to defend these systems? To figure out how to achieve security and public trust using fallible technology, we could take the old-fashioned approach and try to keep hackers out of the systems, and of course we should do that. But that's never going to be enough to prove to people that the election result was right; at best, we'll have no evidence of problems. A better approach, and what my research community advocates, is to provide affirmative public evidence that election results are accurate. We can do this using hand-marked paper ballots and risk-limiting audits. Risk-limiting audits, a technology introduced by Philip Stark of Berkeley, who I hope is here on the call, are a way to hand-count enough paper to ensure that if the outcome reported by the computers is wrong, the audit has a high probability of detecting the discrepancy. The National Academies has called on all states to adopt RLAs by 2028 for all federal and statewide contests. So how are we doing on getting paper and audits? Well, the map has changed considerably between 2016 and today: you see a lot less red, but you actually don't see all that much more green. The paperless voting machines have finally gone away in all but one state, Louisiana, but we have a lot more work to do, because most of the jurisdictions that got rid of them have adopted ballot-marking devices, technology similar to what Georgia uses, which puts a vulnerable computer between the voter and their ballot. On audits we're even farther behind, unfortunately. Even though it would cost only a few tens of millions of dollars a year to audit every federal contest, only a small number of states routinely require audits to a high level of statistical confidence after elections, and at this rate we're not going to get anywhere close to the National Academies' recommendation.
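The statistical idea behind a risk-limiting audit can be sketched in a few lines. The code below is a simplified ballot-polling audit in the style of Lindeman and Stark's BRAVO method, under stated assumptions: two candidates, a reported winner's share above 50%, and sampling with replacement. It is an illustrative sketch, not an official audit implementation.

```python
# Simplified ballot-polling RLA (BRAVO-style). The test statistic T is a
# likelihood ratio; once T >= 1/risk_limit, the reported outcome is
# confirmed with risk at most risk_limit.
import random

def bravo_audit(ballots, reported_share, risk_limit=0.05, rng=None):
    """Sample ballots until the risk limit is met or every ballot is drawn.

    Returns (confirmed, ballots_examined).
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    T = 1.0
    for examined in range(1, len(ballots) + 1):
        b = rng.choice(ballots)  # draw a ballot at random, with replacement
        if b == "winner":
            T *= reported_share / 0.5   # evidence for the reported winner
        else:
            T *= (1 - reported_share) / 0.5  # evidence against
        if T >= 1 / risk_limit:
            return True, examined
    return False, len(ballots)  # inconclusive: escalate to a full hand count

# A comfortable margin (70% reported for the winner, and the paper agrees)
# is confirmed after examining only a small sample of the 1,000 ballots.
ballots = ["winner"] * 700 + ["loser"] * 300
confirmed, n_examined = bravo_audit(ballots, reported_share=0.7)
```

The key property is the one named in the talk: a wide margin needs only a small hand count, while a wrong reported outcome drives the statistic down and forces escalation toward a full hand count.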
So if we don't have auditing, or if we do have auditing but want to make sure we detect problems before voters go to the polls, that raises one more interesting research question: how can pre-election testing do a better job of detecting attacks? This is something I'm working on right now with my PhD student Braden Crimmins and my collaborator Brad Sturt at UIC, who just had a baby. Hi, Brad. We're looking at ways to make pre-election testing more rigorous. Before every election, officials run ballots with known marks through every machine and make sure the machine gives the right totals. Sounds like a good idea, but what kinds of problems you can spot depends on how that test deck is marked. We discovered that there are some pretty easy-to-carry-out categories of attacks that the test decks states use today just aren't going to catch. Moreover, if we're very clever about how we generate the test deck, in fact, if we generate it by applying an optimization algorithm and a technique from operations research called integer programming, then we can come up with a test deck that's both short enough to be used efficiently and able to detect a much broader category of potential attacks and misconfigurations. So how many ballots would you need in one of these tests? Looking at Michigan's 2020 and 2022 elections, Michigan's existing procedure, which detects only a limited set of attacks, required 22.5 ballots on average. Our technique requires just 22.8 on average, yet detects these different problems both within and across contests. We're working with the state of Michigan to deploy this technology, and just this month it was used in real elections for the first time in a number of Michigan jurisdictions.
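The optimization at the heart of test-deck design can be shown on a toy instance: choose the fewest test ballots whose totals would change under every misconfiguration we want to detect. The real work formulates this as an integer program; on the tiny made-up instance below the same optimization can be brute-forced, and every ballot name and fault label is hypothetical.

```python
# Toy test-deck design: a minimum set cover. Each candidate test ballot
# exposes some set of hypothetical misconfigurations; we want the smallest
# deck that exposes them all. (The production approach uses integer
# programming; brute force suffices for this illustration.)
from itertools import combinations

ballots = {
    "B1": {"swap-1-2"},
    "B2": {"swap-1-2", "shift-left"},
    "B3": {"shift-left", "swap-2-3"},
    "B4": {"swap-2-3"},
}
faults = {"swap-1-2", "shift-left", "swap-2-3"}

def smallest_deck(ballots, faults):
    """Smallest combination of ballots whose exposed faults cover all faults."""
    names = sorted(ballots)
    for size in range(1, len(names) + 1):
        for deck in combinations(names, size):
            if set().union(*(ballots[b] for b in deck)) >= faults:
                return deck
    return None  # no deck covers every fault

deck = smallest_deck(ballots, faults)
```

Here two well-chosen ballots cover all three faults, while a naive deck might use one ballot per fault; the same economy is what lets the real decks detect far more attack categories for only a fraction of a ballot more on average.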
So to get farther we need more science, and we also need more help from policymakers. Congress has an opportunity to enact key reforms, like requiring paper ballots and risk-limiting audits, but so far it has yet to act. Working with our policymakers, we can make sure these key reforms get implemented, so that evidence-based methods are deployed that can show election outcomes are correct, not merely that there's no evidence they're fraudulent. We have to make attacks more difficult by applying security testing and best practices; ensure attacks are detectable by recording every vote on a voter-verified, and preferably hand-marked, paper ballot; actually conduct risk-limiting audits of all major contests; and normalize nonpartisan investigation of problems, of the kind I did in Antrim County. There are abundant research opportunities in elections for any students who are looking to help, and elections truly need all the help they can get, not just from the political pundits and activists you see on TV, but also from scientists. I want to thank everyone who came out today. This work would not have been possible without my colleagues, my collaborators, and especially my students. So to past and current students: thank you so very, very much. And thank you, everyone.