Ah, very good. Well, I just want to say a quick thanks. My students ask me, what does it mean when you get a chair? And I said, well, yesterday, as a tenured faculty member, I was unfireable. Today, I'm unfireable and unpromotable. So thank you so much. But actually, like Jack said, being recognized by your peers and by the university is really important. I love this place. I love what I do here. I tell everybody I'm institutionalized; I can't ever leave the university ever again. Well, I could perhaps leave this one, in case you're thinking of raises and such. But I don't think I could go back to industry full time, because I really like academia. And I want to say a special thanks to Jack. The reason I wanted to bear the name S. Jack Hu is that, even though Jack and I, our research never really intersected, he was such an important mentor to me. I think I first met you at, like, a how-to-get-to-full-professor session many, many years ago. And you said, if you ever need any advice, come talk to me. And I went and talked to him, and he gave me great advice. And I kept talking to him. I kept talking to him. And Mike mentioned that center I ran for five years. When that center shut down, I thought, OK, now they're going to put me out to pasture; my career is over. And I went and talked to Jack. And it's not every day you can get a one-hour counseling session from the senior vice president for research at the University of Michigan, but I got one. And he gave me a lot of great suggestions on where to go. And one of those was to do a startup, and that's what I'm doing now. So thank you, everyone. Thank you very much. Today I want to talk about my latest passion, which is privacy and how it affects us. When this talk first got announced, the first thing that happened is people walked up to me and said, why privacy? Why privacy? I'm known for low power. I'm known for reliability. I've started working in security. But why privacy?
Well, I could tell you that it's because in 2017, there was a single data breach at a company called Equifax that revealed, for half of everybody in this room, your financial credentials, your address, your date of birth, your social security number. Half of all Americans were exposed that day. It was a very sophisticated attack. They hacked their way into the system, and the attackers sat for 76 days, slowly exfiltrating information as it passed by, until they were eventually caught. Until very recently, we didn't even know who did this. And in terms of restitution: if you are upset that your financial credentials were stolen (50% probability, for those of you out there), you can get $6. So go ahead and do that. I could also say that another privacy risk we suffer from is that companies we perhaps trust too much are using the information we share with them against us. In 2016, Facebook sold your public profile, page likes, birthday information, and current city to a company called Cambridge Analytica. In addition, if you installed their app, you also gave them your whole newsfeed, your whole timeline, and all the messages you sent, including the information that was in them. And they used that to develop a model of what you like and what you dislike, and then targeted political ads toward you designed to get you upset based on the actions you had taken previously. And that was generally considered bad behavior. There are still really no protections against this kind of activity. But since we're all friends, I want to tell you the real reason why I got involved in privacy. And the answer is really serendipity. I want to tell you a story about one of my very earliest research activities, and how it led me to this idea that often, when you're trying to discover one thing, you discover something else that's even more interesting and valuable. And it all started back in 1978. I was 13 years old at the time, and I worked in a restaurant. And I worked third shift.
So every night, I had to clean the restaurant. And it was just us, a couple of 13-year-olds, cleaning the restaurant. And when we closed the place, there was a lot of mischief that happened afterwards. This is an example of one piece of mischief. That day, the ice maker on the refrigerator had broken, and somebody had come in to fix it, and in the process had left a propane welder sitting on top of the ice maker. Now, perhaps I didn't mention, but as a child, I loved fire. Fire was the coolest thing, right up there with computers, right up there with computers. And so I got this brilliant idea. Let's put a teddy bear from the gift shop on top of a bag full of propane. And then I'm going to take a broom, light a match on the end of that broom, and hold it up to the bag. And the bag is going to explode, and the bear is going to fly up into the air. It's going to be the coolest thing ever. And so I moved up to this bag very, very slowly, and I lit the corner of the bag. And you know what happened? I had built a pilot light. The pressure of the bear just made a little bit of gas release. And we sat for two minutes waiting for this thing to explode, and nothing happened. So then the wheels started turning. And I thought, oh yeah, if I run and jump onto this bag, what's going to happen is I'm going to get this amazing flame coming out of the side of the bag. It's going to be the coolest thing ever. So of course, bad judgment: I run, I jump on the bag, and I am completely consumed in fire. The bag splits out at the bottom, on both sides. And all I can remember was seeing orange. And the people I was there with said, you were in this column of flame for like a second and a half. Whoa. So that was a really unexpected result. What did I learn that day? Well, I learned you can burn hair in a second and a half. I was combing knots out of my hair for quite some time.
I learned that under specific circumstances, with enough pressure and enough exposure to air, you can get explosions out of propane. But probably the most important thing I learned was: don't play with fire. And I didn't play with much fire after that. So the reason why I am working on privacy is really just serendipity. I was looking for one thing and I found something else. So now I want to talk to you about my journey to this new research that I'm really excited about. And you'll see, all along the way, a bunch of happy accidents. When you embrace those happy accidents, you can really do some exciting, fun research. But first, let's take a brief moment and ask: why is it that nobody can protect my data? And then we'll see the technology that we've started to develop and how it's getting us to the place where we are today. The reason why no one can protect our data is that we protect data by building software that protects our data. We use program security to protect data security. The problem is that it is impossible to build a program that cannot be hacked. I always tell students there are only two kinds of software: software that has been hacked, and software that could be hacked. There's no software that cannot be hacked. It simply doesn't exist. And let me tell you why that is. It's because making a program that cannot be hacked requires what's called an impossibility proof, an unsatisfiability proof, where the terms of the proof are expressed in both real-world, complicated programs and human ingenuity. These are two things that we don't know how to encode into logic. And even if we could, you'd never be able to solve that satisfiability problem. So you end up with programs that protect data, and all those programs can be hacked. Just as an example, let's say we shut both of those doors and then we wanted to make a proof that you could not get out of this room.
We would have to ask: what if Alex back there figured out that if you mix the punch with the oil in the cylinder over here on the door clasp, it creates an explosive concoction that could blow through that wall? Or what if one of those giant diamond necklaces back there could be used to cut a hole in the window so you could jump out? Human ingenuity is such that we don't know how to encode it. And I want to give you an example of just how difficult this is. This is a case study I'll call Intel versus speculation attacks, which were invented in part by one of our colleagues here at Michigan. It's interesting and fun to watch, and really almost kind of depressing when you think about protecting software. The story starts in 1991. There's a team in Portland, Oregon, designing the Intel Pentium Pro. And what the Pentium Pro, released in 1995, does is this thing called speculation, where it tries to figure out what part of the program you are going to run before you actually run it. OK, a year after that chip came out, a guy named Paul Kocher invented a security attack called a timing attack. What he showed is that if you carefully look at the timing of software, you can sometimes pull secrets out of it. In 2006, a researcher named Percival showed how to do timing attacks on caches. Caches are a very important memory structure inside most processors, including the Pentium Pro. In 2015, researchers invented what's called a priming attack, where you set the state of the cache, pose questions about the data that was inside of it, and then reveal the answers using timing attacks. And then in 2017, our colleague Daniel Genkin co-invented what's called Spectre, which combined these three techniques to attack that 1991 processor design. Now, it's ridiculous to think that those designers could have anticipated all of that. And that is the reality of program security.
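To make the timing-attack idea concrete, here is a toy sketch of my own (not any of the actual published attacks): an early-exit comparison leaks how many leading bytes of a guess are correct, and counting byte comparisons stands in for measuring wall-clock time.

```python
# Toy timing side channel: an early-exit comparison leaks progress.
# We count byte comparisons as a deterministic stand-in for elapsed time.

def insecure_compare(secret: bytes, guess: bytes):
    """Compare byte by byte, bailing at the first mismatch.
    Returns (match, comparisons); the count is the 'timing'."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

def recover_secret(secret: bytes) -> bytes:
    """Recover the secret one byte at a time by choosing, at each position,
    the candidate byte that makes the comparison run longest."""
    recovered = b""
    for pos in range(len(secret)):
        best_byte, best_steps = 0, -1
        for b in range(256):
            pad = b"\x00" * (len(secret) - pos - 1)
            ok, steps = insecure_compare(secret, recovered + bytes([b]) + pad)
            if ok:
                return recovered + bytes([b])
            if steps > best_steps:
                best_byte, best_steps = b, steps
        recovered += bytes([best_byte])
    return recovered

print(recover_secret(b"hunter2"))  # the timing alone reveals the secret
```

The attacker never reads the secret directly; the running time of the comparison is the only signal used, which is exactly the kind of leak Kocher identified.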
We don't know how to build programs, or anything that runs a program, that cannot be attacked. I think another interesting question to consider is: does this affect me? And there's a great website to check out. It's called Have I Been Pwned; "pwned" meaning "has my data been stolen." You can go to this website and type in any password you have. And I assure you, the password you type doesn't go to the website. It computes what's called a hash, sends part of that hash to the server, and the server sends back everything that matches that partial hash. I've actually read how this works, and I can assure you that if you do type one of your passwords into this, your password doesn't leave your machine. It just gives the server enough information to look up whether the password was stolen. Now, I'm a good password creator, and one thing I have is what I call my trash password. That's the password I give to all the websites I don't trust. I would never give it to Michigan or my bank; I just give that one out. So I typed that in. And look at that: that password has been seen 236 times in data breaches. And that's not a guessable password. That is 236 data breaches. What this website does is, whenever a data breach happens and the information gets released, which is very common, they just amass it. And now Google and other companies are starting to use this. So in the future, you'll see messages saying, this password was seen in a breach before; they basically use this interface to find that out. And when you try this, you'll probably be very surprised that some of your passwords have been in data breaches. In fact, every year, every company on the web has a one in four chance of being breached. It's really quite ridiculous. So let's go back to that problem: how do we create systems that cannot be hacked and cannot be penetrated? Years ago, back in 2012, I was working with Bill Arthur.
He was my PhD student. He's now part of the instructional faculty here in Computer Science and Engineering and teaches one of the biggest classes on campus, EECS 183. And Bill came up with this incredibly clever idea. Now, I said that showing a program can't be hacked requires an impossibility proof. Bill turned that into what's called a constructive proof, which we can actually do. Basically, what you do is build a system that can be hacked. You then rip out all the pieces of the system that allow the hack to happen, until you can't implement the hack anymore. And then you rebuild the system without reintroducing those pieces. That's called subtractive design. If you can rebuild the system and it works, that's a constructive proof: the existence of that system is the proof. And that's a proof I can get behind, no math. So this was a really interesting idea, and we used it to build really large systems that weren't subject to attacks. In fact, we were able to build a system that stopped control flow attacks, which is a very important class of attacks. We also built another system that stopped timing attacks. So over the five years of Bill's PhD, we were able to stop two classes of attacks. And we thought, this is great progress. Then enter DARPA. DARPA defined a program called SSITH, and this is the SSITH mascot, Darth Maul, here. And what SSITH said was: stop seven attack classes in 3.5 years. So let's do the math. It took me five years to do two; that's 2.5 years per class. I've got two down, they want seven, so I need five more. Five times 2.5: I'm going to need 12.5 years to finish this problem. But I've only got 3.5. So that's a problem. That's a problem. We knew we had to take a different approach to stop a broader array of security attacks. And then a very happy accident occurred.
I was at one of the review meetings for the Center for Future Architecture Research. We were thinking, what we really need to do is randomize the system, randomize the system. And one of my sponsors, one of the colleagues I was working with there, as I was explaining this idea to him, said one sentence that changed my next two years. He said: that sounds like a human T cell. You should read up on human immunology. And human immunology is, in fact, a tremendous security system. Now, it's a little different from computer security systems. A computer security system says: don't let anybody into this system. The human immune system is a security system that says: don't let the species die. So unfortunately, you as individuals? Not so important. All of us as a collective? Very important. But it's still a very powerful immune system. After he said that, I went and talked to Jennifer Linderman back there, who works in immunology. I said, what are the books? What are the books you read? She said, Janeway's Immunobiology. Read that. So I got Janeway's Immunobiology, started reading it, and consumed all kinds of information. The human immune system is very capable, and it does one thing that computers simply cannot do: it adapts to new attacks. COVID-19 is a new attack, and we are adapting to that new attack right now. And when you read about it, you learn that it's really built out of what are called moving target defenses. They're very complicated things, but I'm now going to give you a graphical representation of the human immune system. That is the human immune system. It basically randomizes itself continuously, for two purposes. One, it wants to evade the attackers: the T cells inside your blood come in literally thousands of different variants, because they don't want to be attacked individually by viruses.
At the same time, it's doing this reorganization to try to bind to the underlying attacker in your system. And when it does, it destroys that attacker. It even has a memory, a learning mechanism, to remember that it did that. And so this became the pattern for how we design new security systems. Now I want to play you a video, made by my PhD students, that describes the system we built with this new technology. It's called Morpheus. I think they do a better job explaining it than I ever could. Hopefully you can hear this. Let me start this, and hopefully you'll hear this. This is work by Mark, Lauren, and Alex. In a world where software vulnerabilities are rampant and unsafe languages like C are exceedingly common, attackers are getting more powerful by the second. To synthesize an exploit, attackers need access to an asset, such as pointer values. Defenses randomize these values to make it harder for attackers. However, hackers have found their way around these measures. But there's hope, because there's Morpheus. To stop attackers from de-randomizing assets, Morpheus changes them out from under them, in a process called churn. Morpheus is a secure architecture targeted at stopping advanced control flow attacks, such as ROP. We leverage hardware in RISC-V to provide two layers of defense: displacing memory into a superimposed address space, and encrypting code and pointers. These defenses, combined with churn, significantly hinder exploits. With hardware support, Morpheus can churn sensitive assets every 50 milliseconds with less than 1% slowdown. With information changing so quickly, attackers are left with a system that is extremely difficult to penetrate and must resort to advanced probing. Because of churn, this challenge becomes a race against a short clock, a race that is nearly impossible to win. And that's the story of Morpheus.
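The churn idea from the video can be sketched in a few lines. This is a toy of my own, not the real design: the class name and the XOR "cipher" are stand-ins, and the real Morpheus uses proper cryptography in hardware, rotating roughly every 50 milliseconds. The point is that sensitive values live only in encrypted form, and churn rotates the key, so a value an attacker exfiltrated earlier goes stale before it can be used.

```python
# Toy sketch of churn: values are stored encrypted, and the key is
# periodically rotated so old leaked ciphertexts stop being useful.
import secrets

class ChurningStore:
    def __init__(self):
        self._key = secrets.randbits(64)
        self._cells = {}                       # name -> encrypted value

    def store(self, name, value):
        self._cells[name] = value ^ self._key  # XOR mask stands in for a cipher

    def load(self, name):
        return self._cells[name] ^ self._key

    def churn(self):
        """Re-encrypt every cell under a fresh key; old ciphertexts die."""
        new_key = secrets.randbits(64)
        for name, ct in self._cells.items():
            self._cells[name] = (ct ^ self._key) ^ new_key
        self._key = new_key

store = ChurningStore()
store.store("return_ptr", 0x7FFF1234)
stolen = store._cells["return_ptr"]  # attacker leaks the ciphertext...
store.churn()                        # ...but churn re-keys everything
assert store.load("return_ptr") == 0x7FFF1234  # legitimate access still works
```

After the churn, the stolen ciphertext no longer decrypts under the live key, which is exactly the "race against a short clock" the video describes.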
Morpheus is about creating massive uncertainty: creating a puzzle that can't be solved efficiently by an attacker, even an automated attacker. It's just a really powerful technique. Mark and Lauren built simulation environments and really showed the power of this particular technique. Then Alex got involved, in the part where we were actually building a physical implementation of it. Here's one of my favorite sayings: they don't call it hardware because it's easy. So we were going to build a physical implementation of Morpheus, and we used something that came out of Berkeley called the RISC-V Rocket core. RISC-V is an open source instruction set; it's really the latest hotness in the world of computer engineering. And Rocket is an open source, free microprocessor that's available and built on by many companies. So just like open source software, we take this open source hardware, grab it, and modify it to be part of Morpheus. Now, one of the challenges with Morpheus is that we use encryption to do this randomization, and encryption is very expensive. So we really needed to decide how far up, how close to the program, we were actually going to keep things encrypted. Because if you go all the way up to the top, it's going to be slow, slow, slow, and no one's ever going to want to use it. The well-known technique is to go as far up what's called the memory system as you can, then decrypt and send the data the rest of the way up. So we decided at what point we were going to do the decryption, and then the team went off to implement this, and it took a long time. A month, a month and a half went by: this is not happening, this is not happening. You see this little connection here? That's what it conceptually looks like. But the students called me in and said, we cannot figure out how to build this, because it really looked like this.
That's what the connection actually looked like. Just because it's open source doesn't mean it's good. It was a mess. And the mess was hitting our timeline to deliver on a contract to DARPA, and we were freaking out. So here comes the executive decision: don't decrypt. We'll deal with the bad performance later. And that turned out to be the best thing we ever did. That was the happy accident. Because it was dog slow when we got it working, but all of a sudden we had, for the first time ever, created a CPU that operates on always-encrypted data. It never decrypted it, ever. Imagine if that was your social security number, or your date of birth, being operated on: if someone hacked into the system, it was always encrypted. It was a really interesting discovery. And that's where we get today's research. I want to just tease it for you. Lauren, who's back there, is working on what we call a privacy-enhanced CPU. This is a CPU where somebody in another country can encrypt a piece of data and never share anything with you but encrypted data. But you can take that encrypted data, put it on the privacy-enhanced CPU, with all those randomization and encryption capabilities, and it can actually compute on that data without knowing what the underlying data is. And you'd be surprised: we're learning very quickly that most of the algorithms we build can be built in a way where they don't understand anything about the underlying data. It's harder to build heuristics, it's harder to build efficient algorithms, but it's certainly possible to build a system that doesn't know anything about the data inside of it. It simply cannot see it. So now Lauren's off on her PhD, and I want to share some of the really interesting research questions that we have as we pursue this particular kind of problem. Like, for example: how do we import information remotely without sharing a key?
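Before going on, here is a minimal sketch of the principle of computing on data you cannot see. This is not the actual privacy-enhanced CPU design; it is a toy additive masking scheme, but it shows how a machine can produce a correct result over values it never sees in the clear.

```python
# Toy "blind" computation: with additive one-time-pad masking, a server
# can sum values it cannot read, and only the data owner (who knows the
# pads) can unmask the result.
import secrets

M = 2**32                      # arithmetic modulo one machine word

def encrypt(value, pad):
    return (value + pad) % M   # additive one-time pad

def blind_sum(ciphertexts):
    """The 'CPU': adds numbers without knowing what any of them are."""
    total = 0
    for ct in ciphertexts:
        total = (total + ct) % M
    return total

# Data owner encrypts locally; only ciphertexts leave the machine.
values = [61000, 74500, 58200]
pads = [secrets.randbelow(M) for _ in values]
cts = [encrypt(v, p) for v, p in zip(values, pads)]

enc_total = blind_sum(cts)           # computed entirely blind
total = (enc_total - sum(pads)) % M  # owner strips the pads
assert total == sum(values)
```

The server only ever holds uniformly random-looking numbers, yet the arithmetic still comes out right once the owner removes the masks; the real system replaces this toy masking with serious cryptography.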
Fortunately, we have great friends in cryptography here at the University of Michigan, and most of those problems have been solved; we just have to adopt the best known methods in that area. How do we make this thing fast? Normally it takes one cycle to do an add, but an AES decrypt takes about 70 cycles, and an AES encrypt takes about 70 cycles. So you took an operation that was one cycle and put a 70-cycle thing in front of it and a 70-cycle thing after it. That can be very, very expensive. But we're finding ways to get rid of most of that latency. And how do you write programs when you can't see the data? Virtually every program we write works like: search for a string, look for the string, oh, I found it, done. Well, there is no "done" when you can't see the string you're searching for, or the thing you're searching in. What we're finding is that you don't write heuristics, because heuristics don't exist in this world, and the asymptotic complexity is the actual running time, because you're always doing the worst case; you never know when you're done. So it's very interesting from an algorithmic perspective: we're still at the same order of complexity as traditional algorithms, but about twice as inefficient. That's a small price to pay if we can really stop data breaches. And then I think the more interesting question is: how do we analyze data that we cannot see? Think about hospitals. They amass significant amounts of medical data, and they tend not to share it with other hospitals, right? They each have their own pile. They don't share it with other hospitals because they don't want to lose their data, they don't want to expose their data; that's their intellectual property. I mean, they literally call it intellectual property.
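The "no done" discipline described above can be sketched as a data-oblivious search. This is my own illustration, shown on plaintext bytes for clarity; in the real system the bytes would be ciphertext. The search examines every position with no early exit and uses arithmetic selection instead of branching on the data, so the amount of work never depends on whether or where the match occurs.

```python
# Data-oblivious substring search: always scan everything, never exit early,
# select the answer arithmetically rather than with data-dependent branches.

def oblivious_find(haystack: bytes, needle: bytes) -> int:
    """Return the first match offset, or -1, doing the same work either way."""
    found = -1
    for i in range(len(haystack) - len(needle) + 1):
        diff = 0
        for a, b in zip(haystack[i:i + len(needle)], needle):
            diff |= a ^ b                     # accumulate, never break early
        hit = 1 if (diff == 0 and found == -1) else 0
        found = hit * i + (1 - hit) * found   # arithmetic select
    return found

assert oblivious_find(b"the quick brown fox", b"quick") == 4
assert oblivious_find(b"the quick brown fox", b"zebra") == -1
```

A conventional search would return as soon as it matched; this one always pays the worst case, which is exactly why the order of complexity becomes the actual running time.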
What if they could share that data with another hospital, or with other researchers, in a way that would not expose it, and in a way that still lets them do research? They could do machine learning training over it, statistical analysis over it, all kinds of things. Fortunately, there's another area of research called differential privacy that we can dip heavily into. Differential privacy studies how you generate a statistic that doesn't reveal anything of note about the underlying data. And we can use those same technologies in this environment to allow the sharing of data that is encrypted, and allow discovery over that data. That's super exciting. And then finally, how do we make sure that we don't get hacked like everybody else got hacked? Because if we are going to build a system that can't be hacked, you've got to really put it to the test. Fortunately, Michigan's security group is fantastic. We have some of the best hackers in the world working right in our midst, and we're going to work with them to really try to penetrate these systems. And hopefully that will be as successful as the rest of this project. So that's my story. We're having a great time, and I'd love to talk to you more about it. Before I leave, I just want to make one last thank you to all the people who have helped me get to this point. In particular, I want to thank my mentors, in particular Jack, and in particular Sharad Malik from Princeton, another one of my great mentors that I've always leaned on heavily. It's so useful to have good mentors, and that's why I try to be a good mentor to the other faculty I work with as well. And my research collaborators, who are many. After I got tenure, I never did anything alone ever again. Never. Why? I want to take my toys into your sandbox and play with you. That's literally what I want to do.
I want to go to other places with my special set of skills and do fun stuff. And that's really been a great thing. And I love my collaborators. I've found over my career that the collaborators I love working with are the collaborators who are my friends, because it takes a lot of trust to be in a really strong and healthy collaboration, and that trust is built on friendship. So I thank all of you. The staff here is amazing and incredible. I spent a big chunk of the morning here with Erica trying to solve some numerical problems on a project. I really appreciate the staff and their support, and their friendship as well. My co-founders: I've had a great opportunity at Michigan to start a number of companies, and there's something about the co-founder relationship that is really unique. It's deep. It's, again, a high level of trust. And you kind of feel like Thelma and Louise going over the cliff, because you don't know what's going to be at the bottom. So that's always fun, and I thank you folks as well. And of course, my graduate students, current and former. I feel very much like they are a part of my family. I want to stay in touch with them all the time, and I'm delighted that I have such a great relationship with my former graduate students even to this day. So thank you all. I also appreciate that my graduate students do so much work for me. I always tell people, I don't do research; I watch research. It's really the truth. And undergraduate students: Michigan has the greatest undergraduate students in the world. I feel like I always have a farm team in the lab, and I really value all of their efforts. And now my startup has a bunch of those former undergraduates; they've graduated and come to work with me. It's been a fantastic experience to go from the lab, to graduation, to out into the startup. That's very exciting. And then my external colleagues and collaborators as well.
People I worked with in my centers. In particular, I want to give a shout-out to my collaborators in Ethiopia, who have been very much a part of my life. I love going to Ethiopia. And if anybody wants to go to Ethiopia, teach for six weeks, and have a life adventure, see me. Any area of engineering, we can do that. Thank you very much. And then my sponsors: thank you so much. They pay for everything. I think one of the reasons I had a very successful center was because I truly appreciate my sponsors, and I let them know that I truly appreciate their support, both with thanks and with the attention that I gave them. It really is important, in the things we do here, to have good relationships with our sponsors. And then finally, yes, thank you to my family over there, for putting up with all the long hours and the great amount of work, which usually happens while they're asleep. I very much appreciate it. Thank you so much for your time. Thank you.