Hey, good morning everybody. Who's ready to talk about some FPGAs? Woo, yeah, let's get started. Alright, hey, I'm John Denlap. I work for a company called GDS Security, Gotham Digital Science. I do security research and reverse engineering. I like to collect bad, broken software, hardware, cheap things, break them, make them release that magical blue smoke. And I'm here to talk to you guys about FPGAs. Most people in the consumer sphere haven't really encountered FPGAs, but it's happening more and more. If you're really deeply ingrained in, say, military or highly secretive technologies, you might be more familiar with them, but FPGAs are creeping into the consumer space bit by bit. And if you're, say, an embedded pen tester, you might not know what kind of bad things to look for in an FPGA design, what to look for when you're buying and picking FPGAs, what kind of security protections are in them, what kind of anti-tamper protections are in them, and just what the heck an FPGA is. Keep in mind that probably every slide in this talk could take up its own 90-minute talk. So keep that in mind if I move a little fast; if you're an FPGA subject matter expert and you're thinking, hey, there's some complexity you missed there, that's why we have to move fast. So yeah, this is for people who are a little new to FPGAs. So what are FPGAs? They are field programmable, as in you can reprogram basically the hardware itself, make your own CPU or state machine or whatever, and gate array, as in an array of gates, but not really. In most cases what we're talking about is a set of lookup tables that are transferred from configuration memory. So you have something like SRAM or flash memory that holds the bit stream for a bunch of lookup tables, and these approximate the behavior of gates. So imagine you have a finite set of inputs. I'm getting feedback here a little bit. 
You have a finite set of inputs and outputs for the gate design, and this can approximate hardware, basically. There are lots of hybrid variations on this. So you can get FPGAs in a bunch of different flavors, and that's relevant to this discussion because they behave differently and have different security ramifications. And unfortunately, I don't want to make the talk seem like I'm advocating for a particular brand or talking about what to buy too much, but what security you get often depends on what you buy, how much you pay, what kind of device you're asking for. There are different power consumption levels, different sizes, different storage methods, and even CPLD hybrids with FPGAs. You've got SRAM, antifuse, PROM, EPROM, flash. I think most of you know what SRAM would be. You would know what an EPROM is. If you're familiar with embedded devices, flash is probably familiar to all of you. You might not have run into antifuse, and antifuse is one of the first topics you come across when you're looking at tamper-proofing in FPGAs. We'll talk a little later about how one of the main threat models for FPGAs is making sure people don't dump the design off the FPGA. So if the FPGA has its design in some kind of storage, like SRAM or flash, it's possible for people to take your entire hardware design, pull it off, and flash it onto another FPGA and use it for themselves. It's something you don't want. Antifuse FPGAs basically use fuses that are broken to encode the design, which in most cases means there's absolutely no way to just pull it off, short of looking at it with something like a scanning electron microscope. Although some antifuse FPGAs actually do have a readback function that reads back the design, which is funny; they naturally wouldn't have one, so people went out of their way to implement it, which is kind of goofy. 
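To make the lookup-table idea concrete, here's a minimal Python sketch (not any vendor's tooling; the function names are mine) of how a 4-input LUT stored in configuration memory can stand in for a logic gate:

```python
# A toy model of how an FPGA lookup table (LUT) approximates a logic gate.
# A real 4-input LUT is just 16 bits of configuration memory indexed by the
# inputs; here we "configure" one to behave like (a AND b) XOR (c OR d).
# Names are illustrative, not any vendor's API.

def make_lut(truth_fn, n_inputs=4):
    """Precompute the truth table, as the bit stream would."""
    table = []
    for i in range(2 ** n_inputs):
        bits = [(i >> k) & 1 for k in range(n_inputs)]
        table.append(truth_fn(*bits))
    return table

def lut_eval(table, *inputs):
    """Index the stored table with the input bits; no gates involved."""
    index = sum(bit << k for k, bit in enumerate(inputs))
    return table[index]

lut = make_lut(lambda a, b, c, d: (a & b) ^ (c | d))
print(lut_eval(lut, 1, 1, 0, 0))  # (1 AND 1) XOR (0 OR 0) = 1
```

The point is that nothing here is a gate: the "logic" is just sixteen precomputed bits and an index, which is exactly what the bit stream configures.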
If you're wondering what FPGAs are used for: practically everything. You see them a lot in self-driving cars, military technology, routers, big use, especially implementing stuff like ternary logic. Servers now, too: Intel has a whole product line of servers coming out with FPGAs. If you aren't familiar, Altera, one of the bigger FPGA vendors, just got bought by Intel. So what you used to see as Altera is Intel now, and it's mostly for that server integration. Of course, the crypto mining people love them. In case you didn't know, the big value add for an FPGA is that you can basically make your own ASIC. If you want to implement crypto in hardware and have it be more performant than software crypto, you can do that with an FPGA. Or anything, really, that is more performant in hardware than in software. So what kind of threats are we going to look for in FPGAs? Attacks against the hardware itself, attacks against the HDL implementation, sort of environmental problems, and attacks against the synthesis pipeline. We'll talk about that in just a second. And something to keep in mind when we're talking about FPGA threats and FPGA problems is that we're working at this kind of intermediate level where we are actually talking about logic, but it's in the context of hardware, and that middle level isn't very easy to attack unless you're a human being thinking about the HDL implementation. You'll see that in a second. So what is HDL? It's hardware description language. We're talking about stuff like Verilog and VHDL: a way to program up hardware. So we can take stuff like wires, buses, clocks, and define them dynamically. Here's an example of part of a Verilog CPU. This is actually an ALU, arithmetic logic unit, here. And you can see you have all the adding, XOR, and whatnot defined by the software, but it gets put out into hardware. It gets put into actual gates and registers and whatnot. It's a different way of thinking about things. 
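The case-statement style of logic an HDL ALU describes can be approximated behaviorally in Python. This is a hedged sketch with a made-up opcode encoding, not the actual design on the slide:

```python
# Behavioral Python sketch of the kind of ALU an HDL design describes.
# In Verilog this would be a case statement synthesized into gates; the
# opcode encoding here is invented for illustration.

ADD, SUB, XOR, AND = range(4)

def alu(op, a, b, width=8):
    mask = (1 << width) - 1          # model a fixed-width register
    if op == ADD:
        result = a + b
    elif op == SUB:
        result = a - b
    elif op == XOR:
        result = a ^ b
    elif op == AND:
        result = a & b
    else:
        raise ValueError("undefined opcode -- in hardware this would be "
                         "an incompletely specified case, a classic bug")
    return result & mask

print(alu(ADD, 200, 100))  # wraps to 44 in 8 bits
print(alu(XOR, 0b1010, 0b0110))  # 12
```

Note the explicit handling of undefined opcodes: in HDL, a case statement that doesn't cover every input is exactly the kind of incompletely defined logic that causes trouble later in the talk.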
One thing I always thought was cool about HDL is that you can do things like actually reason about literal clock cycles. So you can say every rising edge we do this, or every two rising edges we do this, which is pretty freaking cool. But it also inserts a whole set of problems of when do we do that, when do we not do that? Are we keeping our timing copacetic? If you're not familiar with synthesis, it's basically the compilation, if you want to think about it that way, where we take our HDL design and turn it into something we can put onto the FPGA itself. Here are the basic steps of it. I won't go too deep into it; I could talk about synthesis for a whole one-hour talk. But we have a register transfer level, some optimizations, and then mapping onto the actual chip that we're going to use. And a couple of steps where we plan out where on the die stuff is going to go. And that has security ramifications: where we put our crypto logic on the chip relative to other stuff might affect the viability of things like differential power analysis attacks. And that is one liberating, yet dangerous, thing about FPGA design: we can, say, spread storage across the chip to make it hard to read, or we could spread RAM across the chip, spread crypto stuff in different places. But if you don't do that, or just let the placer do it for you, that might cause some issues. And the final step here, which we're going to talk about a little more, is bit stream generation. If you think of firmware, the image of the stuff that gets put into the FPGA, that's our bit stream. And so if you want to start talking about reverse engineering FPGAs, you want to start talking about bit stream reverse engineering. But before we get there, it's important that this synthesis pipeline be secured, and there's a lot of thought going into how you can make sure that all this stuff happens without an insider threat between each step. 
Let's see, another topic that will come up when you start reading about FPGA security is physically unclonable functions, which is an interesting crypto idea if you haven't run into it. The basic idea is that we're going to derive maybe our crypto keys or some kind of authentication logic from factors specific to this exact bit of silicon. So things will be different on each instance of the device. We might do that by measuring propagation delays. We might do that by measuring voltages, capacitance, parasitics, noise. There are attacks against these, but the basic idea is that this stuff will vary chip to chip, and we can do something like derive a crypto key based on these differences. One of the advantages being that we don't have to store it on the device somehow. There are machine learning attacks against these, you should look into them, and there are also bypasses against those machine learning attacks. So how do we prevent people from pulling our IP off the FPGA chip? One of the problems, as I said before, is that when we use FPGAs we kind of have to keep our IP in a format that's usable on the device, be it SRAM or flash or whatever. So how do we protect it? Or how do we get it off the chip? We might look to grab it during configuration of the device, when the device is being flashed, as it were. A lot of FPGA devices are set up in such a way that the IP is transferred from storage at boot time. Every time the FPGA is booted up, it gets transferred across, and we might try to attack that if it's possible. Or we might try to rip it directly off the storage medium itself. A lot of people call this class of attacks bit stream cloning. The easiest thing for an attacker to do, the most script-kiddie thing, is to abuse the readback functionality of the FPGA. 
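Here's a toy sketch of the PUF idea. It's purely illustrative: the "delays" are seeded random numbers standing in for per-die silicon variation, the chip IDs are made up, and a real design needs error correction (fuzzy extractors) because physical measurements are noisy, which is omitted here.

```python
# Toy PUF sketch: per-chip manufacturing variation (modeled as random
# delays seeded per "chip") yields challenge/response bits we can hash
# into a device-unique key that never has to be stored on the device.

import hashlib
import random

def chip_delays(chip_id, n=64):
    # stand-in for physical propagation delays unique to one die
    rng = random.Random(chip_id)
    return [rng.uniform(0.9, 1.1) for _ in range(n)]

def puf_response(delays):
    # arbiter-PUF-style: race pairs of delay paths, emit one bit per pair
    return [1 if delays[i] > delays[i + 1] else 0
            for i in range(0, len(delays) - 1, 2)]

def derive_key(response_bits):
    return hashlib.sha256(bytes(response_bits)).hexdigest()

key_a = derive_key(puf_response(chip_delays("chip-A")))
key_b = derive_key(puf_response(chip_delays("chip-B")))
print(key_a != key_b)   # different silicon, different key
print(key_a == derive_key(puf_response(chip_delays("chip-A"))))  # stable per chip
```

The two properties the sketch shows are the ones that matter: the key is stable when re-derived on the same chip, and different across chips, without the key ever living in storage.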
And a lot of times, if it's not a highly secured FPGA, this might just be wide open, and you might be able to connect it to Quartus Prime or whatever IDE you're using to build your FPGA design and say, read me back the bit stream, and it will do it for you without any kind of authentication. Something to look into if you're building an FPGA-laden device. Usually there's a bit you can set to prevent that, but on some of the lower-security devices, that's not a thing. And obviously people are going to tamper with that. The next level would be trying to man-in-the-middle the bit stream somehow, right? But encrypted bit streams are a thing. And then the other thing to watch out for is integrated devices. One way that hardware designers try to prevent bit stream cloning attacks is to put the storage and the logic on the same die. So you have to decap the chip and do something much harder, physically, in terms of reverse engineering to get the bit stream. Here, this is from, I think it's from Altera: the instructions for performing a readback on one of their chips. So on some of the, oh no, that's Xilinx. Yeah, there we go. So it's super easy if it's not disabled. There are also ways to activate the readback by tampering with the chip itself. But once you have the bit stream, that's not all roses; you might have to reverse engineer the bit stream. And this is a big problem, because every single chip from every single manufacturer has its own bit stream format that is not documented, that is proprietary. And there are whole groups of people who specialize in trying to reverse engineer these bit stream formats. It's not easy. We're talking about like a month of reverse engineering. It is security by obscurity, but it tends to work kind of okay. Another thing people try to do is bit stream encryption. But then you have to deal with how you get the keys onto the device, and the pain of that, and protecting the device from differential cryptanalysis attacks. And bad crypto, too. 
There have been many cases of FPGAs that inherently use AES in CBC mode, which, if you're not familiar, is malleable: there's a block-swapping type of attack you could use to manipulate information sort of selectively. It's definitely not a perfect way to encrypt things. And like I said before, you have to worry about how to get the key onto the device. Most FPGA working environments have something to help you with this, but it can be kind of a pain, manufacturing-process-wise, to find a way to flash the keys on and keep everything copacetic. For instance, keys are usually put on there via JTAG. If you leave the JTAG activated, that can cause its own set of problems. Another thing people try to do security-wise with FPGAs is implement true random numbers. You might get these from propagation delays, oscillator jitter, oscillator frequency, phase-locked loops, or a dedicated hardware peripheral that does this. A big thing with FPGAs, one of the higher-end things you pay for, is little peripherals that do certain kinds of computations for you. You might get a special DSP-based multiplication unit, for instance, something you would pay extra for. So you can get things like that for crypto. Also, metastability, which, if you're not familiar: if you have a one and a zero and a threshold between them, metastability is when we have values that live in that middle area, which in terms of a viable FPGA design is considered to be unacceptable. But if you do it on purpose, you can use it as a source of randomness, because we're not sure if it will resolve to a one or a zero. It's basically truly random. So if we're trying to get the bit stream off the FPGA, we're going to run into problems with anti-tamper devices pretty quickly. Stuff like fuses, anti-tamper fuses, tamper-resistant flash memory cells. And logic placement designs are meant to frustrate the reverse engineer. So here's a Microsemi SmartFusion device. 
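To show the kind of CBC malleability being described, here's a self-contained demo. The block cipher is a toy four-round Feistel network built on SHA-256, not AES (and not secure); that's fine, because the malleability shown is a property of the CBC mode itself, not of the cipher. Flipping a bit in ciphertext block i garbles plaintext block i but flips the same bit, fully controlled, in plaintext block i+1:

```python
# CBC malleability demo with a toy Feistel block cipher (NOT AES).

import hashlib

BLOCK = 16  # bytes

def _round(key, half, i):
    return hashlib.sha256(key + bytes([i]) + half).digest()[:BLOCK // 2]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(key, block):
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in range(4):
        l, r = r, _xor(l, _round(key, r, i))
    return l + r

def decrypt_block(key, block):
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in reversed(range(4)):
        l, r = _xor(r, _round(key, l, i)), l
    return l + r

def cbc_encrypt(key, iv, plaintext):
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        prev = encrypt_block(key, _xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key, iv, ciphertext):
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        out.append(_xor(decrypt_block(key, c), prev))
        prev = c
    return b"".join(out)

key, iv = b"toy-key", b"\x00" * BLOCK
pt = b"boot_normal_mode" + b"secure_flag=true"   # two 16-byte blocks
ct = bytearray(cbc_encrypt(key, iv, pt))
ct[5] ^= 0x01                                    # flip one bit in block 0
tampered = cbc_decrypt(key, iv, bytes(ct))
# Block 0 decrypts to garbage, but block 1 differs from the original
# plaintext in exactly the bit we flipped:
print(tampered[16 + 5] ^ pt[16 + 5])             # 1
```

This is why encrypted bit streams need authentication (a MAC or authenticated mode) on top of encryption; encryption alone doesn't stop targeted tampering.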
And this is one where we have an integrated design, where it would be very hard to man-in-the-middle the storage. And here's an overview of some of Microsemi's protections that they have on here. I thought it would be good to talk about anti-tamper and show some of the data sheets, because it gives you an idea of how common it is on FPGAs. It's actually a very responsible thing of the industry. It probably has something to do with the fact that they're widely used by the military. So we have multiple fuses. Actually, you have an array fuse, a security fuse, a program fuse, and a probe fuse, all meant to break the device if it finds that reverse engineers have been mucking about. And if you're going to try and pull that design off the FPGA, you have to deal with all of that. They have similar things for their flash memory, so if you're trying to pull stuff directly off the flash, you'll have to deal with that as well. I like this page, though: "Are the keys secure?" I just put that up because I found it funny in their data sheet. Are the keys secure? Yes. Xilinx is kind of the king of this. They have this huge, scary list of hardening features that you can get in your FPGA. The thing to keep in mind is that all these features aren't on all FPGAs. And if you're trying to make a consumer device with an FPGA on it and you want to protect your bitstream, then you might want to read your data sheet and see what exactly is there. They have stuff like logic you can put on there that will disable the device if it feels like the JTAG is being tampered with. So you put your JTAGulator on there and try to brute force it, and all of a sudden you break your device. It's a good thing to know before going in with these things. See anti-readback; we'll talk about ICAP in a second. The thing to keep in mind is that while these features are really common, as you go to the cheaper, more consumer-oriented devices, especially the older models, they sort of disappear. 
It's a recent improvement in some ways, especially for the cheaper devices. For instance, at home in my personal collection of FPGAs, I have a Cyclone II device that I bought off eBay and a Cyclone III device that I bought off eBay. The Cyclone II has none of these features. The Cyclone III has about half of them. So read your data sheets. Another thing to watch out for is disabling and protection of the ICAP, the internal configuration access port. Cool thing about FPGAs, in case the possibility hadn't crossed your mind: they can configure themselves. You can have a self-reconfiguring design that changes itself and mutates. There have been various schemes for making a highly secure design based on a self-mutating FPGA, where the locations of things get swapped around, the timing delays of things get swapped around, but then you have to worry about whether there's a way an attacker can hijack the ICAP. And I think, as listed here, there are some ICAP protections in the anti-tamper list as well. Here's another threat: damaging systems connected to the FPGA. There's been some research into this as well. So say your FPGA is secure from bit stream attacks; what if the attacker just wants to misuse the logic on your FPGA to destroy things? It's not uncommon for FPGAs to be connected to, you know, be the glue logic for various high-power systems. And people have come up with attacks where you cause it to overvoltage or something like that and cause a small fire, melting things, whatever. So this particular paper, they call it the FPGA virus, which I think is kind of funny. They propose an attack they call Melt, which is right on the nose. And the main idea with the Melt vulnerability is that they're altering the bitstream, which on a properly set up synthesis pipeline and FPGA shouldn't be possible, but altering it in such a way that unacceptable voltage comes out the other end of the FPGA and causes some serious trouble. 
And while that might be unlikely in a properly configured setup, there are situations where you have incompletely defined state machines, stuff like that, where you might get an unintended result out of the FPGA somehow. So it's something to look into: what is your FPGA connected to, and are all possible values of output considered? What if you want to just attack the FPGA itself? What if we don't care about dumping the flash; maybe we're going to reverse engineer it physically? Which is something that FPGA manufacturers definitely worry about, and there is a whole body of research on how to do it. There are techniques like focused ion beam measurements, scanning electron microscopes, X-ray analysis is a big thing, thermal analysis of the running FPGA. And depending on how much you want to pay, you can get FPGAs that are hardened against these things to some degree or another. And we might also try to tamper with the FPGA by placing it in a temperature it's not expecting, tampering with the clock, tampering with the voltage, using ionizing radiation (I'm serious, it's a big thing). And this FIPS document here goes over some of the hardening; you might want to check that out. 
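The "are all possible values of output considered" question is mechanically checkable for small designs: enumerate every state and input and assert the output stays in a safe range. A hedged sketch with a made-up four-state controller and an arbitrary safety limit:

```python
# Exhaustive check of a toy state machine, in the spirit of "are all
# possible output values considered?" A tiny controller drives an output
# code; we enumerate every state/input pair and flag anything over a
# safe limit. The states, formulas, and limit are all invented.

SAFE_MAX = 200

def next_state(state, inp):
    # 4 states, 2-bit input; a deliberately simple transition function
    return (state + inp) % 4

def output(state, inp):
    return state * 60 + inp * 10   # worst case: 3*60 + 3*10 = 210

def find_violations():
    violations = []
    for state in range(4):
        for inp in range(4):
            out = output(state, inp)
            if out > SAFE_MAX:
                violations.append((state, inp, out))
    return violations

print(find_violations())  # [(3, 3, 210)]
```

For a real design this is what model checkers and assertion-based verification do at scale; the point is that the unsafe corner here only appears in one of sixteen combinations, which is exactly the kind of case an incompletely defined state machine leaves uncovered.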
We'll get into those attacks a little more specifically in a second, but you would solve this with more robust circuitry, essentially: redundancy, voltage regulation, secure cryptographic implementation, and something like a shutoff if the temperature is wrong. In fact, I think Xilinx, back on the other page, has some kind of anti-temperature-tampering thing. And then there's physical isolation, which Xilinx calls Isolation Design Flow, and it's the idea that if you have some kind of secure function that you think is really important and don't want tampered with, you can put it somewhere else on the chip and have it live in its own land, which is an interesting, good idea. It's not automated in most FPGA programming environments. And then an idea that comes up a lot is the single event upset. When we're talking about radiation, this is a big deal. Usually it's in the context of being in a high-radiation environment, but it's something an attacker could try to use to get the chip to glitch, essentially. FPGAs are especially vulnerable to this situation where some ionizing radiation causes a bit to flip or some memory to change. NASA has a document on this with a whole bunch of suggestions for how to secure the FPGA against it. When we talk about soft IP in a second: there are a bunch of companies that offer soft IP that's supposed to secure against it. But, you know, if you're in a high-intensity threat environment, you might want to think about radiation-proofing your FPGA. Here's a guy at BAE doing radiation testing, which I think is freaking cool. FPGAs also have something like a library, and these are usually called IP blocks. They're HDL designs provided by third parties, and they're usually provided by your device manufacturer or your IDE; whatever you're using, like Quartus Prime, will usually come with a bunch of these from Intel or Xilinx or whoever it may be. And it can be really ornate stuff, like entire CPUs that you can customize to your needs. 
Just to drive it home, here's the Altera IP blocks menu, and you can put in all these functions, just drag and drop stuff as complex as CPUs, RAM, state machines. And, you know, it's an overlooked thing: if you have a design with FPGAs, you might want to get your IP blocks checked for vulnerabilities as well. What about cryptographic attacks? The same kinds of DFA attacks that people worry about for other kinds of embedded systems are a big deal for FPGAs as well. So, you know, timing-based attacks, side channel attacks, power analysis, glitch attacks. Not much to say here that's different from other stuff, other than you might take advantage of the routing and physical placement properties of the FPGA to try and counteract them. We talked about glitch injection before. Getting towards the end here, let's talk a little about security tools for FPGAs and what some more HDL-oriented analysis might entail. One thing to keep in mind with FPGAs that is really cool is that static analysis works really well on them. Actually, if you're familiar with SAT solvers or formal methods or proof-based security measures, this is very mature on FPGAs, but for different purposes than maybe security people are used to. It's oriented more towards correctly defining the electrical properties of the FPGA: making sure that we don't have things like that metastability we talked about, that the clock can run at a stable rate without causing propagation-delay-based errors. It's not so much for security, but it can help you with security. Usually static timing analysis tools can do things like detect unconstrained paths; that might be a security problem, it might not be, but they'll come with your design suite, right? But there are problems at the intermediate level with finding bugs in this. You want to find logical bugs, and it's not going to know, unless you help it along, about security errors with data flow, right? The static timing analysis can't tell, for instance, where your data is 
going, and which of it is supposed to be secure. It can't answer questions like, can the user write to arbitrary memory? None of this stuff is even in the purview of that type of analysis. So this is really good static analysis, but it doesn't necessarily fulfill the needs of what a security engineer would be looking for all the time. I think one of the coolest, getting along to the same idea everyone's having about how to do security static analysis on FPGAs, is this tool from Cornell called SecVerilog. The basic idea: one of the biggest problems in FPGA design security is where that secure information is flowing, and SecVerilog is basically an extension to the Verilog language where we are able to annotate which bits of our data are secure, and it will trace that, by injecting Verilog at various levels and areas, and give us an idea of what's going on. You see a lot of transpiler ideas in this zone. To give you an idea of the kinds of things a human would find that the static analysis tool won't: we're looking for bad state transitions, data flow to unintended areas, timing-sensitive problems, places where people should or should not be checking the clock or are doing so incorrectly, race conditions, places where you're assuming you're in a synchronous mode of operation but it's actually asynchronous, or the opposite, aliasing issues. And that's basically it. I don't know how we're doing on time. Let's see. Oh, pretty good. So since we're good on time, do we have any questions from the audience? Dead silence. How many of you guys have worked with FPGAs before? 
Oh, cool. So you guys are all like, oh, this was too high level for me. Okay, I got you, it's okay. Alright, well, I'm out then. Oh, a question? Yeah. So, no, I haven't seen that, although I don't think it would be hard to put together. Some of the schemes I've seen proposed take something like, I know that there's a GNU VHDL compiler that basically turns VHDL designs into native code for Linux; I don't think that would be hard to instrument in the same way. Yeah, most of the literature is in the form of alpha particles. I don't know the number off the top of my head, but there's an interesting thing: it might be the opposite of what you think, like less is more. There's sort of an annealing process that can go on when you slam the chip with lots of radiation, as opposed to a little bit intermittently; the chip will be more reliable with more radiation than less. And that's one of the problems if you look into it; there's a whole lot of literature on how unreliable the industrial testing can be on that if you're not careful. Well, have a good day, guys.