Cool. Well, thank you, everyone, for coming back. Great to see we're gaining people, not losing them. So I'm going to be introducing today, DARPA. Is this S-S-I-T-H or Sith? Is it called Sith? Okay, really? Okay, so I'm gonna be introducing Dr. Linton Salmon of the DARPA SSITH program. He joined the Defense Advanced Research Projects Agency as a program manager in September 2014. Prior to joining DARPA, Dr. Salmon spent 15 years in executive roles directing development of CMOS technology at GlobalFoundries, Texas Instruments, and Advanced Micro Devices. Before joining Advanced Micro Devices, Dr. Salmon was vice president for research and technology transfer at Case Western Reserve University and an associate professor of electrical engineering and physics at Brigham Young University, BYU, where his research included CMOS processes, micro-battery research, packaging, and MEMS. Never sure which one to pronounce, anyway. Anyway, thank you for joining us, and I'll hand you the mic. Thank you, I appreciate that. Can everyone hear me okay? One of the requirements for a DARPA program manager is you have to have a booming voice so that they can hear you anywhere. So I'd be surprised if you couldn't, but if you don't, somebody just wave at me. I can get too engaged sometimes. It's a real pleasure to come and talk to you a little bit, not just about the SSITH program — and yes, we do know what that name means, and I'll tell you what the acronym means in just a second — but actually about what it's doing here at DEF CON and why we're here, because I think that's the important thing for all of you to hear. Although I'd love to talk about the program, another thing you should know about DARPA program managers is our programs are our children, and we all love to talk about our children. So any of you that have questions, I'd love to answer them. So the outline of the talk: I'll talk a little bit about what SSITH is, and probably something about what it isn't.
I'll then talk a little bit about the demonstration we're doing here at DEF CON and then why we're here. And then I'll just pull it together in a summary. It'll be a fairly short presentation, and then I'll open it up for a little bit of questions if there's time. So SSITH stands for System Security Integration Through Hardware and firmware. And yes, I did kind of make that fit inside the acronym. Yeah. In case any of you were wondering whether we thought of the name and then the acronym — no, no, we think of the acronym and then the name. But at any rate, I like the name for obvious reasons, for those of you that have a Star Wars background in the room, and those of you who don't, I don't know what you're doing here. But at any rate, the purpose, as it says up there, the goal, is to develop hardware design tools. For those of you that have worked at all — most of you probably haven't — on building integrated circuits, there's a set of design software that's used to build these extremely complex systems. I just spent the last couple of weeks at workshops where people are talking about six to 60 billion transistors in a given piece of silicon. So the concept is that those are extremely complex. If you come up to somebody and say, hey, I want you to build in security, they go, give me a tool, give me some way I can do this, or I won't do it. So the goal of the program is to develop those tools to provide security against hardware vulnerabilities that are exploited through software. As I was explaining to someone just this past week, another way to think of it is I'm trying to protect against remote hacks. There are hacks that happen because you have a physical connection with the hardware. You can probe it, you can glitch it with the power supply. You're all familiar with those kinds of attacks. Those are not the specific goal of SSITH. It's to protect against remote attacks, which I happen to think are extremely important.
All of them are, but those are the particularly important ones, because the person doing the hack does not have to be anywhere close to the location of the hardware. And they're exploited in electronic systems of any type. We're not focused just on Department of Defense hardware; that's one of the important reasons why we're here. The goal of the SSITH program is to increase security throughout the microelectronics enterprise — whether you're talking about a small IoT device (my favorite one is the toilet seat detector that detects whether the toilet seat's up or down; those of you that have children understand why that's important) or whether you're talking about a high-performance computing system that costs hundreds of millions of dollars. So our goal is to use open source as much as possible, specifically to encourage that kind of use across the electronics enterprise. We're building a demonstrator because, as you all know, you can have all kinds of theoretical estimations of why something's protected, but unless you put it out for people to actually try and attack it, you can't really know whether it's protected, because quite simply no group of people can have the corner on creative ideas of how to bypass security mechanisms. You need a broad range of extremely creative people to do that. To make that happen, we have to have a hardware demonstrator, and that's what is being built in SSITH. We're using an open source microprocessor core called RISC-V. For those of you that don't know what that is, all you need to know is that the core itself, unlike an Intel or an ARM processor, is completely open source and anyone can know what the RTL is. We have six teams developing 15 different secure processors that span all the way from an embedded 32-bit processor all the way up to a very high-performance speculative-execution processor.
Once again, those of you that know, know that speculation was the source of the Spectre and Meltdown attacks that cropped up recently and only occurs in high-performance systems, but by golly, you're just as susceptible if you have this little IoT device as well. Each team augments three baseline RISC-V processors to make them secure. I've already talked about the different kinds. The teams employ a wide range of security approaches, including things like metadata tagging — tagging the data and the pointers — secure enclaves, novel cryptography, as well as machine learning to detect attacks. So what are we doing differently? Pardon me, you're gonna get some animation. Hopefully I keep up with myself. So what we do today is patch and pray, is what I call it. I think that somebody got together and said, well, hardware, you can't go fast enough at low enough power, so we'll take care of security in the software. And my colleagues in this mythical meeting that were from the hardware side knew a good thing when they heard it, and they ran out the door, because they didn't want to have to handle security, because it's hard. I think all of us recognize that this is a problem which is huge across the enterprise. And by the enterprise, I mean everything electronic — which in my biased opinion means almost everything — and we need all the help we can get, and just asking software to solve the total problem isn't appropriate. And one of the reasons is demonstrated here. So if there's a software weakness, it comes in and exploits a hardware vulnerability; buffer overflow is the most common one. So you have an attack, you have a patch, you go on to the Common Vulnerabilities and Exposures database at NIST: this is the attack on that software, and here's the patch to fix it.
But the problem is somebody finds another way through that same software to exploit that same hardware weakness, and they do it again. Now you get another software patch, and each time you get a software patch, of course, there's this window between the time the vulnerability is known and the time the patch is actually deployed, for people to break in. And a lot of this community learns very quickly how to do that. I mean, I even saw somewhere that someone ran a contest called buffer overflow. Buffer overflow has been known as a weakness for at least 20 years, and yet there are still buffer overflow attacks all the time. The goal of the SSITH program is to block that and other hardware-related vulnerabilities at the hardware. So the result of this is, in 2015 — this is old data, I'm sorry, I haven't updated it — 2,800 vulnerability instances with 2,800 software patches to fix them. What we wanna do in SSITH is, when you have an attack, instead of protecting against it at the software level — now remember, this is only attacks that exploit hardware vulnerabilities; it doesn't include leakage between virtual machines, which is all up at the software level — for things like buffer overflow and other similar kinds of hardware vulnerabilities, we block it at the hardware, and then when the person comes up with the next attack, it's still blocked by the same mechanism. So theoretically, instead of trying to patch all those 2,800 vulnerability instances, you instead go to seven vulnerability classes with seven basic approaches to how to do the protection. This means you have a much more tractable problem. Now, I repeat, this is not all attacks. Depending on how you calculate it, if you took the number of CVEs in 2015, it was 43% — that doesn't say how serious they were, just the numbers. So at best, we're talking about fixing 43% of the problem. But I would argue that fixing 43% of the cybersecurity problem, if we could do that — boy, I gotta get something for that.
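To make the class-versus-instance idea concrete, here is a small illustrative sketch — my own toy model, not SSITH hardware or any performer's actual design — of what a hardware-style bounds check buys you: every write is checked against the allocation, so every individual exploit in the buffer overflow class fails the same way, with no per-exploit patch needed.

```python
# Toy sketch of a hardware-style bounds check: a buffer where the check is
# enforced at the "memory" itself, not by the software path that reached it.
# The class name and interface are illustrative, not from the SSITH program.

class BoundsCheckedBuffer:
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)

    def write(self, offset, payload):
        # The hardware-style rule: the entire write must fit the allocation.
        if offset < 0 or offset + len(payload) > self.size:
            raise MemoryError("bounds violation: write blocked")
        self.data[offset:offset + len(payload)] = payload

buf = BoundsCheckedBuffer(16)
buf.write(0, b"hello")           # an ordinary in-bounds write succeeds

try:
    buf.write(8, b"A" * 32)      # a classic overflow attempt, any source
except MemoryError as err:
    print(err)                   # blocked by the mechanism, not by a patch
```

The point of the sketch is that the check does not care which software bug produced the oversized write; one mechanism covers the whole vulnerability class.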
And all my performers ought to get something. That's awesome. But I repeat, please don't misunderstand me. If anybody says SSITH is going to make an unhackable computer, you know right then that they're not a performer, because that can't be right. And they know it's not right. We are trying to make an important step forward in how to make electronic systems secure. So once again, we address the hardware vulnerabilities at their source and reduce the protection domain — the attack surface, if you will — from thousands of independent software patches to a few basic hardware approaches. At its core, that's what SSITH is about. So, why is the SSITH program using election security as a demonstrator? The first thing I'd like to say is it is a demonstrator. The purpose of SSITH is not to make a secure voting system. A, that isn't what DARPA does. B, that's not the purpose of the program. And C, it really isn't in the purview of a technical program like SSITH to do it. Can everybody still hear me okay? Okay, good. What we are trying to do is to demonstrate SSITH capabilities in a given application. And the question would be, well, why did you pick voting security? Well, there are three major reasons. First of all, it can be discussed in an open forum. Remember, the first word in DARPA is defense: Defense Advanced Research Projects Agency. I have a bunch of applications that are very important that, if I went and addressed them, I could never discuss in this crowd, let alone any other crowd that's open. So one thing is I can discuss election security openly. Second, it attracts all of you. This room is full. If I was talking about protecting a, you know, I don't know, a small IoT device, maybe one or two of you would be interested, but it wouldn't fill this room. And the third reason, actually, is it is a critical national infrastructure.
There are lots of other ones — the power grid, transportation, the financial industry — but election security is a critical national need. And we'd love to be a part of helping to solve that critical need. We are not the solution, but we hopefully will be a part of it. So what are some other advantages? Well, if you look at this particular architecture — and by the way, we did not generate that architecture. It's an open-domain architecture called STAR-Vote. If I remember correctly, it was actually motivated by a particular municipality or county in Texas. They went and put together that application. We're just piggybacking on that completely. We're not trying to replicate it or expand on it or improve it. We're trying to implement it. And the reason we want to is, if you look up there, you'll see a bunch of P1s, P2s, and P3s. Those are those three different kinds of processors: an embedded 32-bit microcontroller (small), a 64-bit in-order execution (medium), and an out-of-order, speculative, superscalar execution (think high performance). And if you look at the parts of the STAR-Vote system, it includes all three of those processors. And that's a critical thing for us to be able to do. We want to exercise all of it. And A, the system architecture already exists, so we don't have to develop it; and B, it's already open source, so it's available to all of you. Now, what we'll do in 2019 is not the whole enchilada. I wish it could be. I would have loved to have been here where we could have shown the entire system. What happens is, this was not the idea at the beginning of SSITH. SSITH started about a year and a half ago, so it's in the early stages of the program. And therefore, we only have a few P1 processors that are complete — not any P2s or P3s. So what we'll be talking about is only the smart ballot box. When you go into the voting booth and go to the portion of it that's using the SSITH processors, you'll find it's only the smart ballot box.
But it is the smart ballot box. It's open for attack. In 2020, however, next year, we plan on doing this again. You can call this phase A and that's phase B. Then we hope to have all three kinds of secure processors. We hope to implement the entire system, and an entire software stack that exists on top of that as well. But that will also not be provided by the SSITH program; it will come from a separate activity funded by a separate organization. So this is the other thing I wanna make sure of: when people walk over, they go, wait a minute, I saw this whole STAR-Vote system. How come it's not implementing the whole STAR-Vote system? It's not because we don't think it's a good thing. It's because we don't have the processors to do it yet. We're trying to walk first, and then we'll try and run. This is such an important forum for us to be able to present this so people can make the attacks. Let me hasten to add: there will be that P1 processor, which includes the SSITH protections, that people will be able to attack and show us any and all problems with it. So this is the actual demonstration. You'll see that this obviously wasn't from DEF CON, because I submitted these slides to get approved a long time ago. Well, not long enough, but anyway. It's early in the SSITH program. The methodologies are on P1. So there'll be a smart ballot box. The important thing to recognize is that smart ballot box — it's the box you can see at the bottom — takes your vote in. It has a button that says I accept that ballot or I want to reject that ballot. It reads the ballot through a barcode. That's the only thing that's secure. The actual ballot-marking device, which is the one up above, is just done with COTS hardware. So it's no more secure than anything else would be.
And we did this all out of cheap COTS parts because after DEF CON 2019 and before 2020, we're going to do what's called a university road show, where this will be taken by Galois around to a variety of universities and public forums for people to attack as well. Let me see. Okay, so that was that one. So why are we here? What is the program doing at DEF CON? We're here to develop tools and methods to secure hardware against malicious attacks. The program is building prototype electronic hardware — in this particular case, RISC-V microprocessors. The program needs the best hackers in the world to provide a real-world evaluation of the hardware. And I can't think of a better place than here at DEF CON to do it, and I can't think of a better group to set up that evaluation than Galois, of Portland, for SSITH. The program is using a secure voting system as a demonstration vehicle. Once again, don't confuse the demonstration vehicle with the program. It is only a demonstrator for the program, and the demonstration is limited in 2019 because it's early in the program, but it will be more complete in 2020. And after DEF CON 2019, there'll be other open forums. Perhaps even more important is what we're not doing at DEF CON. We're not building a deployable election voting system, and we won't in 2020 either. That isn't what DARPA does. We're building a prototype open-source system. We do projects; when the project is over, we're done. We leave a toolbox for other people to use. We're not solving all election security issues. I know this group wouldn't think so — at least I hope this group wouldn't think so — but believe it or not, there are some people inside the beltway that think that this is an easy problem. I don't know where they got that idea. We're limited to the security of the microprocessors used in the election system. That's what we demonstrate. We are not proposing an election system architecture.
We're not getting into the metaphysical argument about the use of paper ballots — and you'll hear lots of those wonderful, appropriate, personally very interesting arguments. That's not what we're about. We're implementing the STAR-Vote system. We're not proponents for it, we're not trying to change it. We're just trying to use one that's open and available. And we're not recommending any election policy. None, no, none — nein, no, nix, whatever; everybody knows what no means. We are focused on hardware security only. That's really important, because DARPA is not a policy-setting body, and if I ever tried to do anything like that, well, suffice it to say I do not look good in orange, and we'd be able to see that. So, we're not recommending any election policy. There are other organizations here that may have that responsibility. We absolutely and totally do not. So, what I'd like to say is we are very pleased to be here. I'm very appreciative to the Voting Village for giving us this opportunity. I'm also very appreciative for all the work all of you do to help make elections more secure. Personally, I'm very thankful for that. As a DARPA program manager, thank you for letting us come and do this here. We welcome the opportunity to expose our secure hardware and let everybody go at it. By the way, a good result is nobody finds any weaknesses, but also a good result is they find weaknesses and then we can go back and fix them. It's a lot better to fix it now. The way I like to say it is, you wanna hear your bad points from your friends, not your enemies, right? So, in this case, if you guys find a weakness, that's a lot better than if it's in some critical infrastructure when we find it. We expect secure SSITH hardware to be a crucial part of national infrastructure security, including elections. And we welcome, we look forward to, feedback on our technology.
And we really appreciate the effort you and many other people around here will spend looking at and breaking our hardware. Thank you very much. Is there time for any questions? Cool. Yes. So, what I was talking about was virtual machines, which is a software construct where you build different containers in software. And one of the problems with that is the number of connections between those containers is — I don't know, I'm not a software guy, but it feels like it's infinite. It isn't, obviously. It's just I prefer wires to software code. That's my bias. So it doesn't help with that problem, because we don't fix software in general. We fix hardware. And those virtual containers in software are isolated by software constructs, which we can't address. However, I will absolutely say that the hardware assists with compartmentalization. A lot of what happens with those performers is they separate containers in hardware. Now that is something we can do: build hardware-enforced containers. You can get that in current processors as well with secure enclaves like SGX or TrustZone. SGX is Intel's. TrustZone is in ARM cores. But when I made my statement, it was about software constructs. Does that help? There was another hand. Yes. So, we have classes of vulnerability — for example, buffer overflow. It happens because — the way I describe it is, in hardware — and I designed and built hardware for 15 years in commercial industry — we just tried to make sure that any software command got executed as quickly and at the lowest power possible. We assumed that if the software asked us to do it, it was okay, because the software was responsible for the security. What we do inside of SSITH is we actually go check certain activities. Buffer overflow should never be done, period. It's done in regular, non-malicious software. In fact, one of the things we found out is FreeRTOS — which is a real-time operating system — can't load unless you do a buffer overflow. But it shouldn't do that.
And so what happens in the hardware is we can check — do bounds checks — to make sure that it never exceeds the buffer. So that if you have a buffer overflow, you get an error. It just doesn't overflow the buffer, for example. Or if there's a permission check that's supposed to be done and a command comes in off of an IO port that wants root access, we can say, no, that's not permitted. Anything that comes off of an IO port is labeled — and I'm gonna use colors — blue, and blue can't get to red regions of memory, which are root. Yes, not necessarily. If I look at memory security, it isn't as simple as we protect against overflows. We also protect against return-oriented programming — things where you're trying to do stuff with — let me put it this way. There are pointers and there's memory, and we have rules on how they can interact. That applies to buffer overflow. It applies to a lot of other memory problems. One example might be something that's called Rowhammer. It's physical — for those of you who don't know, that's fine, but it's where you, potentially even over the network, hammer a particular portion of the memory until a bit flips, and then that gives you visibility. The same technical method that's used to stop buffer overflow would also stop Rowhammer. So it covers a variety of different memory security problems. At the worst, you'd have to write a new set of rules on how they interact, and that's why it's got firmware in the title: low-level software that would change how the hardware works. Yes, in the back. So any updates — I mean, this is a critical problem — anytime you build in hardware, if you wanna update the hardware per se, in other words reroute the wires, it's another year. You gotta go back and refabricate it and everything else. So we use firmware, low-level software, to actually do those updates — to address, for example, things like Spectre and Meltdown when we find them.
If SSITH had been in place, we would have had to change the policies to make many of these systems work. That would have to have been done through firmware, and of course that opens up another attack vector, which we have to protect through other methods. Yes, maybe it would be easier, I don't know, but it wasn't obvious to us six months ago when we started this. So we just picked it. I must admit, I am not an election security expert by any stretch of the imagination. If you try and talk to me about verification of six billion transistors, or how to build them, or how to get transistor uniformity, I'm your man. But so we picked it, and I would argue it's got some big advantages that I can see: the number of processors that it uses, and that it's described in papers, so you don't have to worry about that. If there was another system that would be more effective — sorry, that's the best I can say. Yes. So, those of you that know about it, we talk about power, performance, and area — that's a circuit term, it's called PPA, and the whole industry knows about it. We want to call it P-PASS: power, performance, area, security, and software compatibility. So the requirement for all of these performers is that they can run software — they can run legacy software on their cores. The worst they'd have to do is recompile it, no manual intervention, but it won't necessarily take advantage of all of the security features. Sometimes the programmer needs to be aware of those to utilize them. So the software will run; for maximum security, you may have to rewrite portions of it, or recompile it. Yes, the gentleman standing there. So we're using off-the-shelf processors only this year, simply because we don't have the more complex processors ready in SSITH. In 2020, the intention is that all of the processors in the system will be those secure processors. We have time for maybe one, maybe two more questions. Yes, here. So I think the quick answer to your question is yes.
I think you caught the essence. However, I would say in 2020, we plan on using secure software made by a different program — not at DARPA — that I don't feel comfortable talking about because it's another program. Electronic systems are not secure with secure hardware alone. It takes secure hardware, it takes secure software, it takes a secure combination of the two — whether that's election security or high-performance computing or that mythical toilet seat detector. I mean, it's the whole system, it's the whole enchilada, and we recognize that. We just can't fix it all. So SSITH is focused on the hardware, but our intention in 2020 is to have software that will merge with that and make an entire system that's secure. Well, one last question. So what I would say is, I hope I never implied that we would do away with all software patches. If I did, I hope you would run me out on a rail. No, what we're trying to do is reduce them dramatically. So we're trying to take those 2,800 — by the way, there were 2,800 that were listed with hardware vulnerabilities; there were a total of 6,800 patches for software, and my argument would be 4,000 of them remain. What we're trying to do is take those 2,800 and pull them down. I believe — I don't know what the number is, but — a large portion of the successful attacks in the past few years are still buffer overflow. If we could take care of buffer overflow so it could not happen, you'd still have software patches, but you wouldn't have nearly as many. Okay, so what you're saying is there's malicious buffer overflow and there are programmer-bug overflows. Some of our processors actually will detect — in some ways, I say they may even be better as debuggers than they are as security — because they'll find this unspecified semantics, is what we call it: things that shouldn't be done but are done. So in some cases, that may happen.
We may have software that can't run because it is inherently insecure, and then we'll have to decide what to do when that occurs. The point is, we have a lot of legacy software in DoD, and it costs billions of dollars to rewrite. So what we first of all say is, first do no harm: make it so they can run that buggy, insecure software — no more secure, or very little more secure, than it always was — but then give programmers an opportunity to make it very secure. That's all we have time for. I think I got a sign, I got the hook, but I'd like to just say I'll be happy to talk with people afterwards. And once again, thank you very much.