 The speaker's way is paved with broken trust zones. He's no stranger to breaking ARM equipment or crypto wallets or basically anything he touches — it just dissolves in his fingers. He's one of Forbes' 30 Under 30 in tech, so please give a warm round of applause to Thomas Roth. Test, okay, wonderful. Yeah, welcome to my talk, TrustZone-M: hardware attacks on ARMv8-M security features. My name is Thomas Roth. You can find me on Twitter, I'm @stacksmashing, and I'm a security researcher, consultant and trainer affiliated with a couple of companies. And before we can start, I need to thank some people. First off, Josh Datko and Dmitry Nedospasov, who've been super helpful — anytime I was stuck somewhere or just wanted some feedback, they immediately helped me. And also Colin O'Flynn, who gave me constant feedback, helped me through some troubles, gave me tips and so on. Without these people, and many more who paved the way towards this research, I wouldn't be here. Also thanks to NXP and Microchip, who I worked with as part of this talk — it was awesome. I've had a lot of very bad vendor experiences, but these two were really nice to work with. Also, some prior work: Colin O'Flynn and Alex Dewar released a paper, I guess last year or this year, on on-device power analysis across hardware security domains, where they basically looked at TrustZone from a differential power analysis viewpoint. Otherwise, TrustZone-M is pretty new, but lots of work has been done on the big, "real" TrustZone. And there's lots and lots of work on fault injection — far too much to list here, so just Google "fault injection" and you'll see what I mean. Before we start: what is TrustZone-M? TrustZone-M is the small TrustZone — basically a simplified version of the big TrustZone that you find on Cortex-A processors.
So basically, if you have an Android phone, chances are very high that your phone actually runs TrustZone and that, for example, the Android keystore is backed by TrustZone. TrustZone basically splits the CPU into a secure and a non-secure world. So for example, you can say that a certain peripheral should only be available to the secure world — if you have a crypto accelerator, you might only want to use it in the secure world. And if you're wondering what the difference to an MPU is: it also comes with two MPUs — sorry, not MMUs, MPUs. Last year we gave a talk on Bitcoin wallets, so let's take those as an example. On a Bitcoin wallet you often have different apps — for example for Bitcoin, Dogecoin or Monero — and then underneath you have an operating system. The problem is that this operating system is very complex, because it has to handle graphics rendering and so on and so forth, and chances are high that it gets compromised. And if it gets compromised, all your funds are gone. With TrustZone, you could basically have a second operating system, separated from your normal one, that handles all the important stuff — firmware update, keystore, attestation and so on — and reduces your attack surface. The reason I actually looked at TrustZone-M is that we got a lot of consulting requests for it. Basically, after our talk last year, a lot of companies reached out to us and said, okay, we want to do this, but more securely — and a lot of them tried to use TrustZone-M for this. So far there's been, as far as I know, little public research into TrustZone-M and whether it's protected against certain types of attacks. And we also have companies that are starting to use these as secure chips: for example, in the automotive industry I know somebody who was thinking about putting them into car keys, I know about some people in the payment industry evaluating this, and of course hardware wallets.
And one of the terms that comes up again and again is: this is a secure chip. But I mean, what is a secure chip? Without a threat model there's no such thing as a secure chip, because there are so many attacks, and you need a threat model to understand what you are actually protecting against. For example, a chip might have software or hardware features that make the software more secure, such as an NX bit and so on. On the other hand, we have hardware attacks: for example debug ports, side-channel attacks and fault injection. And often the description of a chip doesn't really tell you what it's protecting you against — I would even say it's misleading in some cases. You will see "oh, this is a secure chip", and you ask marketing and they say, yeah, it has the most modern security features, but they don't really specify whether those are, for example, protecting against fault injection attacks or whether they consider that out of scope. In this talk we will exclusively look at hardware attacks, and more specifically at fault injection attacks on TrustZone-M. All of the attacks we're going to see are local to the device: you need to have it in your hands, and there's normally no chance of exploiting them remotely. So this will be our agenda: we'll start with a short introduction to TrustZone-M, with a lot of theory on memory layout and so on; we'll talk a bit about the fault injection setup; and then we'll start attacking real chips — these three, as you will see. On a Cortex-M processor you have a flat memory map. You don't have a memory management unit, and all your peripherals, your flash, your RAM — it's all mapped to a certain address in memory. TrustZone-M allows you to partition your flash or your RAM into secure and non-secure parts. So for example, you could have a tiny secure area, because your secure code is very small, and a big non-secure area.
The same is true for RAM, and also for the peripherals. For example, if you have a display and a crypto engine and so on, you can decide whether each of these peripherals should be secure or non-secure. So let's talk about these two security states, secure and non-secure. If you have code running in secure flash — secure code running — it can call anywhere into the non-secure world. It's basically the highest privilege level you can have, so there's no protection there. However, the opposite — going from the non-secure world into the secure world — would be insecure, because you could, for example, jump into parts of the code that are behind certain protections. That's why, if you try to jump from non-secure code into secure code, it will cause an exception. To handle that, there's a third memory state called non-secure callable, and as the name implies, your non-secure code can call into non-secure callable code. More specifically, it can only call non-secure callable addresses where there's an SG instruction, which stands for secure gateway. The idea behind the secure gateway is this: if you have a non-secure kernel running, you probably also have a secure kernel running, and this secure kernel will expose certain system calls, for example. We want to somehow call from the non-secure kernel into these system calls, but as I've just mentioned, we can't do that directly, because it will cause an exception. The way this is handled on TrustZone-M is that you create so-called secure gateway veneer functions. These are very short functions in the non-secure callable area. If we want, for example, to call the load-key system call, we would call the load-key veneer function, which in turn calls the real load-key function. And these veneer functions are super short — if you look at a disassembly, it's like two instructions.
It's a secure gateway instruction and then a branch to your real function. If we combine all this, we end up with this diagram: secure can call into non-secure, non-secure can call into NSC, and NSC can call into the secure world. But how do we manage these memory states? How do we know what security state an address has? For this, TrustZone-M uses something called attribution units, and by default there are two attribution units available. The first one is the SAU, the security attribution unit, which is standard across chips — it's defined by ARM how you use it. And then there's the IDAU, the implementation-defined attribution unit, which is custom to the silicon vendor, but can also be the same across several chips. To get the security state of an address, the attributions of both the SAU and the IDAU are combined, and whichever one has the higher privilege level wins. So let's say our SAU says this address is secure and our IDAU says it's non-secure: the SAU wins, because secure is the higher privilege level, and the address is considered secure. Here's a short table: if both the SAU and the IDAU say non-secure, the result is non-secure. If both say secure, it's secure. If they disagree — the SAU says secure, the IDAU says non-secure — it's still secure, because secure is the higher privilege level; and the same the other way around. Even with non-secure callable, secure is more privileged than NSC, so secure wins. But if we mix NS and NSC, we get non-secure callable. Okay, my initial hypothesis when I read all of this was: if we break or disable the attribution units, we probably break the security. And to break them, we have to understand them. So let's look at the SAU, the security attribution unit.
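To make that table concrete, here's a toy model of the combination rule — this is an illustrative sketch, not ARM's actual hardware logic; the state names and the privilege ordering are exactly the ones just described:

```python
# Toy model of how TrustZone-M combines SAU and IDAU attributions:
# the more privileged of the two states wins.
# Privilege order: S (secure) > NSC (non-secure callable) > NS (non-secure).
PRIVILEGE = {"NS": 0, "NSC": 1, "S": 2}

def combined_state(sau: str, idau: str) -> str:
    """Return the effective security state of an address."""
    return sau if PRIVILEGE[sau] >= PRIVILEGE[idau] else idau

# Reproduce the table from the talk:
assert combined_state("NS", "NS") == "NS"
assert combined_state("S", "S") == "S"
assert combined_state("S", "NS") == "S"     # SAU secure wins
assert combined_state("NS", "S") == "S"     # IDAU secure wins
assert combined_state("S", "NSC") == "S"    # secure beats NSC
assert combined_state("NS", "NSC") == "NSC" # NS mixed with NSC gives NSC
```

This is also why the talk's initial hypothesis works: if you can force either unit's answer toward "everything secure" or "everything non-secure", the combined result shifts with it.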
It's standardized by ARM, it's not available on all chips, and it basically allows you to create memory regions with different security states. If the SAU is turned off, everything is considered secure. If we turn it on but no regions are configured, still everything is secure. We can then go and add address ranges and make them NSC or non-secure and so on. And this is done very easily — you basically have these five registers. You have the SAU control register, where you can turn it on and off. You have the SAU type register, which gives you the number of supported regions on your platform, because this differs across chips. Then there's the region number register, which you use to select the region you want to configure, and then you set the base address and the limit address — that's basically it. For example, if we want to set up region zero, we simply set the RNR register to zero, set the base address to 0x1000, and set the limit address to 0x1FE0 — which effectively means 0x1FFF, because the lower bits of the limit register hold some flags we don't care about right now. Then we turn on the security attribution unit, and now our memory range is marked as non-secure. If we want to create a second region, we simply change RNR to, for example, one, again insert some nice addresses, turn on the SAU, and we have a second region, this time from 0x4000 to 0x5FFF. So to summarize: we have three memory security states — S, secure; NSC, non-secure callable; and NS, non-secure. And we have the two attribution units: the SAU, standardized by ARM, and the IDAU, which is potentially custom. We will use the SAU and IDAU a lot, so this was very important. Cool, let's talk about fault injection. As I've mentioned, we want to use fault injection to compromise TrustZone, and the idea behind fault injection — or glitching, as it's also called — is to introduce faults into your chip.
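The register sequence just described can be sketched as a small simulation. The register names (SAU_CTRL, SAU_TYPE, SAU_RNR, SAU_RBAR, SAU_RLAR) are the real ones from the ARMv8-M manual, but the behavior here is a deliberate simplification for illustration:

```python
# Toy simulation of the five SAU registers described above.
class SAU:
    def __init__(self, num_regions=8):
        self.ctrl_enable = False     # SAU_CTRL.ENABLE
        self.type = num_regions      # SAU_TYPE: number of supported regions
        self.rnr = 0                 # SAU_RNR: currently selected region
        self.regions = {}            # region number -> (base, limit, nsc)

    def configure(self, rnr, base, limit, nsc=False):
        self.rnr = rnr               # write SAU_RNR to select the region
        # In SAU_RLAR the low bits are flags, so a limit of 0x1FE0
        # effectively covers addresses up to 0x1FFF.
        self.regions[rnr] = (base, limit | 0x1F, nsc)

    def attribution(self, addr):
        if not self.ctrl_enable:
            return "S"               # SAU off: everything is secure
        for base, limit, nsc in self.regions.values():
            if base <= addr <= limit:
                return "NSC" if nsc else "NS"
        return "S"                   # no region matched: secure

sau = SAU()
sau.configure(0, 0x1000, 0x1FE0)     # region 0: 0x1000-0x1FFF, non-secure
sau.configure(1, 0x4000, 0x5FE0)     # region 1: 0x4000-0x5FFF, non-secure
sau.ctrl_enable = True               # finally turn the SAU on
```

Note the default: with `ctrl_enable` off, every address comes back secure — which is exactly the property the Nuvoton attack later in the talk abuses.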
So, for example, you cut the power for a very short amount of time, or you change the period of the clock signal, or you inject electromagnetic pulses into your chip, or you shoot at it with a laser, and so on and so forth — there are lots of ways to do this. The goal is to cause undefined behavior. In this talk we will specifically look at something called voltage glitching. The idea behind voltage glitching is that we cut the power to the chip for a very, very short amount of time, at a very precisely timed moment, and this causes some interesting behavior. If you looked at this on an oscilloscope, you would see a stable voltage, stable voltage, stable voltage — and then suddenly it drops and immediately returns. This drop will only be a couple of nanoseconds long: you can have glitches that are 10 nanoseconds long, or 15 nanoseconds, and so on — it depends on your chip. This allows you to do different things. For example, a glitch can allow you to skip instructions. It can corrupt flash reads or writes. It can corrupt memory or register reads and writes. Skipping instructions is, for me, always the most interesting one, because it allows you to go directly from a disassembly to understanding what you can potentially jump over. For example, here's some code — a basic firmware boot-up sequence. We have an initialize-device function, then a function that verifies the firmware in flash, and then a boolean check of whether the firmware is valid. If we glitch at just the right time, we might be able to glitch over this check and boot our potentially compromised firmware, which is super nice. So how does this relate to TrustZone? Well, if we manage to glitch over "enable TrustZone", we might be able to break TrustZone. So how do you actually do this?
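The instruction-skip idea can be modeled in a few lines. This is purely illustrative — a real glitch acts on the silicon, not on a list of steps — but it shows why skipping exactly one instruction (the validity check) is enough to boot bad firmware:

```python
def boot(firmware_ok, glitch_index=None):
    """Toy model of the boot-up sequence from the talk; a glitch at the
    right time removes exactly one step from the sequence."""
    steps = ["init_device", "verify_firmware", "check_valid", "boot_firmware"]
    state = {"valid": False, "booted": False}
    for i, step in enumerate(steps):
        if i == glitch_index:
            continue                 # the instruction the glitch skips over
        if step == "verify_firmware":
            state["valid"] = firmware_ok
        elif step == "check_valid" and not state["valid"]:
            return state             # refuse to boot invalid firmware
        elif step == "boot_firmware":
            state["booted"] = True
    return state
```

Without a glitch, compromised firmware never boots; skip step 2 (the check) and it does — which is exactly the "glitch over the boolean check" scenario above.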
Well, we need something that waits for a certain delay and generates a pulse at just the right time, with very high precision — we're talking about nanoseconds here. And we also need something to drop the power to the target. If you need precise timing, what works very well is an FPGA. The code that I'll release as part of this all runs on the Lattice iCEstick, which is roughly 30 bucks, plus a cheap MOSFET — so together this is like $31 of equipment. On the setup side, it looks something like this: you have your FPGA, which has a trigger input — for example, if you want to glitch something during the boot-up of a chip, you connect this to the reset line of the chip — and an output for the glitch pulse. Then we hook it all up: the power supply to the chip runs over a MOSFET, and when the glitch pulse goes high, we drop the power to ground and the chip doesn't get power for a couple of nanoseconds. Let's talk about this power supply, because a chip has a lot of different things inside it. A microcontroller has a CPU core, maybe a Wi-Fi peripheral, GPIO, maybe Bluetooth and so on, and often these peripherals run at different voltages. So while your microcontroller might just have a 3.3 volt input, internally there are a lot of different voltages at play. These voltages are often generated using in-chip regulators: the regulators connect to the 3.3 volt input and then generate the different voltages for the CPU core and so on. What's nice is that on a lot of chips there are so-called bypass capacitors behind the core regulator. These external capacitors are there to stabilize the voltage, because regulators tend to have a very noisy output, and you use the capacitor to smooth it. But if you look at this, it also gives us direct access to the CPU core power supply.
So if we just take a heat gun and remove the capacitor, we effectively change the pinout of the processor: now we have a 3.3 volt input, a point to inject the core voltage, and ground. We've basically gained direct access to the internal CPU core voltage rail. The only problem is that these capacitors are there for a reason, and if we remove them, the chip might stop working. But there's a very easy solution: you just hook up a power supply, set it to 1.2 volts or whatever, and suddenly it works again. This also allows you to glitch very easily — you just glitch on your power rail towards the chip. So this is our current setup: we have the Lattice iCEstick, we use a multiplexer as an analog switch to cut the power to the entire device if we want to reboot everything, we have a MOSFET, and we have a power supply. Now, hooking this all up on a breadboard is fun the first time. It's okay the second time. But the third time it begins to really, really suck — and as soon as something breaks, with like a hundred jumper wires on your desk, the only way to debug is to start over. That's why I decided to design a small hardware platform that combines all of these things: it has an FPGA on it, it has an analog input, and it has a lot of glitch circuitry. It's called the Mark Eleven — if you've read William Gibson, you might know where this is from. It contains a Lattice iCE40, which has a fully open source toolchain thanks to Clifford Wolf and others, and this allows us to very quickly develop new triggers, new glitch code and so on — it makes compilation and everything really, really fast. It also comes with three integrated power supplies — a 1.2 volt supply, a 3.3 volt supply and so on — and you can use it for DPA. It's based around some existing devices: for example, the FPGA part is based on the 1BitSquared iCEBreaker.
The analog front end — thanks to Colin O'Flynn — is based on the ChipWhisperer-Nano, and the glitch circuit is basically what we've been using on breadboards for quite a while, just combined on a single device. Unfortunately, as always with hardware, production takes longer than you might assume, but if you drop me a message on Twitter, I'm happy to send you a PCB as soon as they work well. The BOM is around 50 bucks. Cool. Now that we're ready to actually attack chips, let's look at an example. The very first chip I encountered that used TrustZone-M was the Microchip SAM L11. This chip was released in June 2018, and it's kind of a small, slow chip: it runs at 32 MHz and has up to 64 kilobytes of flash and 16 kilobytes of SRAM. But it's super cheap — like $1.80 at quantity one — so it's really nice, really affordable. And we had people come up to us and suggest, hey, I want to build a TPM on top of this, or I want to build a hardware wallet on top of this, and so on and so forth. If you look at the website of this chip, it has a lot of security in it. It's the "best contribution to IoT security" winner of 2018. And if you just type "secure" into the word search, you get like 57 hits — so this chip is 57 secure. Even on the website itself you have "chip-level security", and in the further descriptions you have "robust chip-level security", "includes chip-level tamper resistance", "active shield protects against physical attacks" and "resists microprobing attacks". And in the datasheet — this is where I got really worried, because as I said, I do a lot with the core voltage — it has a brown-out detector that "has been calibrated in production and must not be changed", and so on. To be fair, when I talked to Microchip, they mentioned that they absolutely want to communicate that this chip is not hardened against hardware attacks.
But I can see how somebody who looks at this would get the wrong impression, given all the terms and so on. Anyway, let's talk about the TrustZone in this chip. The SAM L11 does not have a security attribution unit. Instead, it only has the implementation-defined attribution unit, and the configuration for this IDAU is stored in the user row, which is basically configuration flash. It's sometimes also called fuses in the datasheet, but it's really — I think — flash-based. I haven't checked, but I'm pretty sure it is, because you can read it, write it, change it and so on. The IDAU, once you've configured it, is set up by the boot ROM during the start of the chip. The idea behind this IDAU is that all your flash is partitioned into two parts: a bootloader part and an application part, and both of these can be split into secure, non-secure callable and non-secure. So you can have a secure and a non-secure bootloader, and a secure and a non-secure application. The size of these regions is controlled by five registers, and if, for example, we want to make our non-secure application bigger and our secure application a bit smaller, we just fiddle with these registers and the sizes adjust — and the same for the bootloader. So this is pretty simple. How do we attack it? My goal initially was to somehow read data from the secure world while running code in the non-secure world — jump the security gap. My code in the non-secure world should be able to, for example, extract keys from the secure world. And my attack path for that was: I glitch the boot ROM code that loads the IDAU configuration. But before we can actually do this, we need to understand: is this chip actually glitchable? Is it susceptible to glitches, or do we immediately get thrown out?
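To see why glitching a single register read is so powerful here, consider a sketch of how those user-row values carve up flash. The register names BS, BNSC, AS and ANSC appear in the SAM L11 datasheet, but the granularity used below (0x100-byte rows) is an assumption for illustration — check the datasheet for the real units:

```python
# Sketch of the SAM L11 flash partitioning controlled by the user row.
# ROW granularity is an illustrative assumption, not the datasheet value.
ROW = 0x100

def partition_flash(flash_size, bs, bnsc, as_, ansc):
    """bs/bnsc: bootloader total / NSC sizes; as_/ansc: application
    secure total / NSC sizes (all in rows). 'as_' avoids the keyword 'as'."""
    boot_size = bs * ROW
    app_secure_end = boot_size + (as_ - ansc) * ROW
    return {
        "boot_secure":    (0, boot_size - bnsc * ROW),
        "boot_nsc":       (boot_size - bnsc * ROW, boot_size),
        "app_secure":     (boot_size, app_secure_end),
        "app_nsc":        (app_secure_end, boot_size + as_ * ROW),
        "app_non_secure": (boot_size + as_ * ROW, flash_size),
    }

# Normal configuration: some of the application is secure.
normal = partition_flash(0x10000, bs=8, bnsc=2, as_=16, ansc=4)

# Glitch hypothesis from the talk: if the boot ROM's read of AS (and ANSC)
# comes back as zero, the secure application region collapses to nothing
# and the whole application area is non-secure.
glitched = partition_flash(0x10000, bs=8, bnsc=2, as_=0, ansc=0)
```

In the glitched case, `app_secure` is an empty range and `app_non_secure` starts right after the bootloader — exactly the "make the non-secure application bigger" outcome described next.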
So I used a very simple setup where I just had a firmware, and tried to glitch out of a loop and enable an LED. And I had success in less than five minutes, with super stable glitches almost immediately. When I saw this, I was 100% sure that I'd messed up my setup, or that the compiler had optimized out my loop, or that I'd done something wrong — because I'd never glitched a chip in five minutes. So this was pretty awesome, but I also spent another two hours verifying my setup. Okay, cool, we know the chip is glitchable. So let's glitch it. What do we glitch? Well, if we think about it, somewhere during the boot ROM these registers are read from flash, and then some hardware is somehow configured. We don't know how, because we can't dump the boot ROM; we don't know what's going on in the chip, and the datasheet has a lot of pages, and I'm a millennial, so I read what I have to read and that's it. But my basic idea was: if we somehow manage to glitch the point where it reads the value of the AS register, we might be able to set it to zero, because most chip peripherals initialize to zero. And if we glitch over the instruction that reads AS, maybe we can make our non-secure application bigger, so that we can read the secure application data — because now it's considered non-secure. But problem one: the boot ROM is not dumpable, so we can't just disassemble it and figure out roughly when it does this. And problem two: we don't know when exactly this read occurs, and our glitch needs to be instruction-precise — we need to hit just the right instruction to make this work. The solution is brute force, but I mean, nobody has time for that, right? If the chip boots for two milliseconds, that's a long range to search for glitches. So, very easy solution: power analysis.
It turns out that, for example, Riscure has done this before, where they basically figure out at what point in time a JTAG lock is set by comparing the power consumption. The idea is: we write different values to the AS register, then we collect a lot of power traces, and then we look for the differences. This is relatively simple to do if you have a ChipWhisperer. This was my rough setup: we have the ChipWhisperer-Lite, a breakout with the chip we want to attack, and a programmer. Then we collect a couple of traces — in my case even just 20 traces are enough, which takes, I don't know, half a second to run. And if you have 20 traces in non-secure mode and 20 traces in secure mode and you compare them, you can see clear differences in the power consumption starting at a certain point. I wrote a script that does some more statistics on it and so on, and that basically told me the best glitch candidate starts at 2.18 milliseconds. This needs to be so precise because, as I said, we're in the nanosecond range, so we want to make sure we have the right point in time. Now, how do you actually build this setup so that you get a success indication once you've broken it? For this I needed to write a firmware that attempts to read secure data, and if it's successful, it drives a GPIO high; if it fails, it does nothing, and I just reset and try again. I knew my rough delay, and I was triggering off the reset of the chip; then I just tried delays after it, tried different glitch pulse lengths and so on, and eventually I had a success. And these glitcher scripts — you'll see this with the glitcher we released a while back — are super easy to write, because all you have is like 20 lines of Python.
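The trace comparison itself boils down to very simple statistics. Here's a minimal sketch with synthetic traces — a real setup would capture these with a ChipWhisperer, and a real script would do more robust statistics (e.g. a t-test) than a plain mean difference:

```python
def mean_trace(traces):
    """Average many power traces sample-by-sample."""
    n = len(traces)
    return [sum(samples) / n for samples in zip(*traces)]

def first_divergence(traces_a, traces_b, threshold):
    """Index of the first sample where the two mean traces differ
    noticeably — our best guess for when the boot ROM processes the
    security configuration."""
    ma, mb = mean_trace(traces_a), mean_trace(traces_b)
    for i, (a, b) in enumerate(zip(ma, mb)):
        if abs(a - b) > threshold:
            return i
    return None

# Synthetic example: the two configurations look identical until sample 6.
secure_traces     = [[0.1] * 6 + [0.9] * 4 for _ in range(20)]
non_secure_traces = [[0.1] * 10 for _ in range(20)]
```

Running `first_divergence(secure_traces, non_secure_traces, 0.5)` points at the sample where the configurations start to differ — the equivalent of the "2.18 milliseconds" candidate in the talk.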
You basically set up a loop — delay from, delay to — you set up the pulse length, you iterate over a range of pulses, and then in this case you just check whether your GPIO is high or low. That's all it takes. Once you have this running in a stable fashion, it's amazing how fast it works. This is a recorded video of a real glitch: you can see we have like 20 attempts per second, and after a couple of seconds we actually get a success indication. We just broke a chip — sweet. But one thing: I moved to the very south of Germany, to a part called the Schwabenland. And, I mean, 60 bucks — we are known to be very cheap, and 60 bucks translates to like six beers at Oktoberfest. To convert this to the local currency, that's like 60 Club-Mate. Unacceptable. We need to go cheaper — much cheaper. So, what if we take a chip that's 57 secure and try to break it with the smallest chip we can find? This is an ATtiny, which costs like one or two euros. We combine it with a MOSFET — to keep the comparison going, that's roughly three Club-Mate — and we hook it all up on a breadboard. And it turns out this works: you can have a relatively stable glitch with like 120 lines of assembly running on the ATtiny, and this will glitch your chip successfully and can break TrustZone on the SAM L11. The problem is, chips are very complex, and it's always very hard to do an attack on a chip that you configured yourself — because, as you will see, chances are very high that you messed up the configuration: for example, missed a security bit, forgot to set something, and so on and so forth. But luckily, in the case of the SAM L11, there's a version of this chip that comes already configured and only ships in non-secure mode: the SAM L11 KPH.
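Those "20 lines of Python" look roughly like this. The `Glitcher` class below is a stand-in for whatever FPGA interface you use — a hypothetical API for illustration, not the real glitcher library — but the search loop structure (iterate delays and pulse widths, reset, check the success GPIO) is the whole trick:

```python
import itertools

class Glitcher:
    """Stand-in for a real FPGA glitcher interface (hypothetical API)."""
    def arm(self, delay_ns, pulse_ns):
        self.params = (delay_ns, pulse_ns)   # program trigger delay + pulse
    def reset_target(self):
        pass                                 # would toggle the reset line
    def success_gpio_high(self):
        # In reality: read the GPIO the victim firmware drives high on
        # success. Here we pretend one parameter combination works.
        return self.params == (2180000, 20)

def search(glitcher, delays_ns, pulses_ns):
    """Sweep delay/pulse-width combinations until a glitch succeeds."""
    for delay, pulse in itertools.product(delays_ns, pulses_ns):
        glitcher.arm(delay, pulse)
        glitcher.reset_target()
        if glitcher.success_gpio_high():
            return delay, pulse              # found working parameters
    return None
```

With the power-analysis result narrowing the delay window to a few microseconds around 2.18 ms, a sweep like this converges in seconds instead of hours.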
It comes pre-provisioned with a key and with a trusted execution environment already loaded into the secure part of the chip. It ships completely secured, and the customer can write and debug non-secure code only. You can also download the SDK for it and write your own trustlets and so on — but I couldn't, because it requires you to agree to terms and conditions which exclude reverse engineering. So no chance, unfortunately. But anyway, this is the perfect example to test our attack: you can buy these chips off the shelf and then try to break into the secure world, because these chips are hopefully decently secured and have everything set up. So this was the setup: we designed our own breakout board for the SAM L11, which makes it a bit more accessible, has JTAG, and has no capacitors in the way, so you get access to all the core voltages and so on. You have the FPGA on the top left, a super cheap 20-buck power supply, and a programmer. Then we implemented a simple function that uses OpenOCD to try to read an address we normally can't read: we glitch, then we start OpenOCD, which uses the JTAG adapter to try to read secure memory. I hooked it all up, wrote a nice script, and let it rip — and after a while, well, a couple of seconds, I again got a successful attack on the chip, and more and more; you can see just how stable you can get these glitches and how well you can attack this. Sweet — hacked. We can compromise the root of trust and the trusted execution environment, and this is perfect for supply chain attacks, right? Because if you can compromise a part of the chip that the customer will not be able to access, they will never find you. But the problem with supply chain attacks is that they're pretty hard to scale, they're normally only for sophisticated actors, and far too expensive — that's what most people will tell you.
Except if you hack the distributor. So — I guess last year or this year, I don't know — I actually found a vulnerability on DigiKey which allowed me to access any invoice on DigiKey, including the credentials you need to actually change the invoice. Basically, the bug was that when you requested an invoice, they did not check whether you actually had permission to access it. You get the web access ID at the top and the invoice number, and that's all you need to call DigiKey and change the delivery — all the data you need to reroute a shipment. I disclosed this, it's been fixed, and hopefully this should be fine now, so I feel good talking about it. Let's walk through the scenario. We have Eve, and we have DigiKey. Eve builds this new, super sophisticated IoT toilet, and she needs a secure chip. So she goes to DigiKey and orders some SAM L11 KPHs. And here's Mallory: Mallory scans all new invoices on DigiKey, and as soon as somebody orders a SAM L11, she talks to DigiKey — via the API or via a phone call — to change the delivery address. And because you know who the chips are going to, you can target this very, very well. Now the chips get delivered to Mallory, Mallory backdoors the chips, and then sends the backdoored chips to Eve, who's none the wiser — it's the same carrier, it looks the same. You'd have to be very, very mindful of these types of attack to actually recognize them. And even if they open the package and try the chip and scan everything they can scan, the backdoor will be in a part of the chip that they cannot access. So we've just supply-chain-attacked whoever, using a UPS envelope, basically. Interesting attack vector. So I talked to Microchip, and it's been great — they've been super nice, it was really a pleasure.
I also talked to Trustonic, who were very open to this and wanted to understand it, so that was great. And they explicitly state that this chip only protects against software attacks: while it has some hardware features, like tamper-resistant RAM, it is not built to withstand fault injection attacks. If you compare different revisions of the datasheet, you can see that the early ones mentioned some fault injection resistance, and it's now gone from the datasheet. They are also asking for feedback on making it clearer what this chip protects against, which I think is a noble goal, because we all know marketing versus engineers is always an interesting fight, let's say. Cool, first chip broken — time for the next one, right? The next chip I looked at was the Nuvoton NuMicro M2351 — rolls off the tongue. It's a Cortex-M23 processor, it has TrustZone-M, and I was super excited because this one finally has an SAU, a security attribution unit, and an IDAU. I also talked to their marketing: it explicitly protects against fault injection. So that's awesome — I was excited; let's see how that turns out. Let's briefly talk about the TrustZone in the Nuvoton chip. As I've mentioned before, the SAU, if it's turned off, or turned on without regions configured, defaults to fully secure. And no matter what the IDAU says, the most privileged level always wins. So if our entire security attribution unit says secure, our final security state will also be secure. If we now add some small regions, the final state will have those small non-secure regions. I saw this, looked at how the code works, and you can see that at the very bottom, SAU control is set to one. Simple, right? We glitch over the SAU enable, all our code will be secure, and we'll just run our code in secure mode — no problem, is what I thought. So basically, the secure bootloader starts execution of non-secure code.
We disable the SAU by glitching over the instruction, and now everything is secure, so our code runs in the secure world. Easy, except: read the fucking manual. It turns out these thousands of pages of documentation actually contain useful information. You need a special instruction to transition from secure to non-secure state, called BLXNS, which stands for Branch with Link and Exchange to Non-secure. This is exactly made to prevent this: it prevents accidentally jumping into non-secure code, and it will cause a SecureFault if you try to do it. And what's interesting is that even if you use this instruction, it will not always transition the state: it depends on the last bit of the destination address whether the state is transitioned. The way the bootloader gets the addresses it jumps to is from the vector table, which is basically where your reset handler, your initial stack pointer and so on live. And you will notice that the last bit is always set, and if the last bit is set, it will branch to secure code. So somehow the bootloader has to manage to branch to this address and run it as non-secure. How do they do it? They use an explicit bit-clear instruction. And what do we know about instructions? We can glitch over them. So with two glitches: we glitch over the SAU control enable, and now our entire memory is secure; then we glitch over the bit-clear instruction, and then BLXNS, which again rolls off the tongue, will run secure code. And now our normal-world code is running in secure mode. The problem: it works, but it's very hard to get stable. I somehow got it working, but it was not very stable and it was a big pain to actually make use of. So I wanted a different vulnerability, and I read up on the Implementation Defined Attribution Unit of the M2351, and it turns out that each flash, RAM, peripheral and so on is mapped twice into memory.
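The bit-0 rule for BLXNS, and the bit-clear the bootloader does before branching, can be sketched on the host like this (a model of my understanding, not vendor code; function names are invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Host-side sketch of the BLXNS target-address rule: bit 0 of the
 * destination decides whether the core transitions to non-secure.
 * Bit 0 set: stay secure.  Bit 0 clear: switch to non-secure. */
enum state { STATE_SECURE, STATE_NONSECURE };

static enum state blxns_target_state(uint32_t dest) {
    return (dest & 1u) ? STATE_SECURE : STATE_NONSECURE;
}

/* What the bootloader does: take the entry from the vector table
 * (bit 0 set, Thumb-style) and clear bit 0 before the BLXNS.
 * If that bit-clear instruction is glitched over, the address keeps
 * bit 0 set and the branch runs the code in the secure state. */
static uint32_t prepare_ns_entry(uint32_t vector_entry, bool glitched) {
    if (glitched)
        return vector_entry;        /* bit-clear skipped by the glitch */
    return vector_entry & ~1u;      /* the explicit bit-clear (BIC #1) */
}
```

This is why the attack needs the second glitch: skipping only the SAU enable is not enough, because BLXNS with a cleared bit 0 would still drop the core into the non-secure state.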
So basically everything is mapped once as secure, for example the RAM at address 0x20000000, and once as non-secure, at address 0x30000000. So you have the flash twice and you have the RAM twice, and this is super important: it's the same memory. And so I came up with an attack that I call CrowRBAR, because a vulnerability basically doesn't exist if it doesn't have a fancy name. The basic point of this is that the security of the system relies on the region configuration of the SAU. What if we glitched this initialization, combined with this IDAU layout? Again, the IDAU mirrors the memory, once as secure and once as non-secure. Now let's say at the very bottom of our flash we have a secret, which is in the secure area. It will also be in the mirror of this memory, but because our SAU configuration is fine, it will not be accessible from the non-secure region. However, the start of this non-secure area is configured by the RBAR register, the region base address register. So maybe, if we glitch this RBAR being set, we can increase the size of the non-secure area. And if you check the ARM documentation on the RBAR register, the reset value of this register is UNKNOWN. So unfortunately it doesn't just say zero, but I tried this on all the chips I had access to, and it's zero on all chips I tested. So now what we can do is glitch over this RBAR write, and now our non-secure region will be bigger: our secure code is still running in the bottom half, but then the jump into non-secure will also give us access to the secret. And it works: we get a fully stable glitch, and it takes roughly 30 seconds to bypass. I should mention that this is what I think happens. All I know is that I inject the glitch and I can read the secret. I cannot tell you exactly what happens, but this is the best interpretation I have so far. So woohoo, we have an attack with a cool name. And then I looked at another chip, the NXP LPC55S69, which has two Cortex-M33 cores, one of which has TrustZone-M.
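The CrowRBAR idea can also be simulated on the host. This is only a sketch of my interpretation of what happens (the alias offset and region sizes are illustrative, not the exact M2351 values): the non-secure SAU region starts wherever RBAR points, and if the write to RBAR is glitched away so the register keeps its observed reset value of zero, the non-secure region grows downward over the mirror of the secret.

```c
#include <assert.h>
#include <stdint.h>

/* Host-side sketch of CrowRBAR on an M2351-style layout: memory is
 * mapped twice (secure alias low, non-secure alias higher up), and
 * the SAU non-secure region starts at whatever RBAR holds. */
#define NS_ALIAS 0x10000000u    /* illustrative non-secure mirror offset */

static uint32_t sau_rbar;                  /* observed to reset to 0 */
static uint32_t sau_rlar = 0x1FFFFFFFu;

static void init_sau(int glitched) {
    sau_rbar = 0;                          /* reset value            */
    if (!glitched)
        sau_rbar = NS_ALIAS + 0x40000u;    /* intended NS start      */
    /* glitched: the write is skipped and RBAR stays 0 */
}

/* Can non-secure code reach this (aliased) address? */
static int ns_readable(uint32_t addr) {
    return addr >= sau_rbar && addr <= sau_rlar;
}
```

With the intended configuration, the secret's mirror sits below the non-secure region; after the glitch, the region base of zero swallows the whole mirror, secret included.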
The IDAU and the overall TrustZone layout seemed very similar to the NuMicro, and I got the dual-glitch attack working and also the CrowRBAR attack working. And the vendor response was amazing, like, holy crap: they called me and wanted to fully understand it, they reproduced it, they got me on the phone with an expert, and the expert was super nice, but what it came down to was RTFM. Again, this is a long document, but it turns out that the example code did not enable a certain security feature, and this security feature lives in a register helpfully named the miscellaneous control register, which obviously stands for secure control register. This register has a bit that, if you set it, enables secure checking, and if I had read just a couple of sentences further when I read about the TrustZone on the chip, I would have actually seen this. But, millennial, sorry. What this enables is called the memory protection checkers, and this is an additional memory security check that gives you finer control over the memory layout: it basically checks whether the attribution unit's security state is identical to the memory protection checker's security state. So for example, if our attack code tries to access memory, the MPC will check whether this was really a valid request, so to say, and stop you, if you are unlucky as I was. It turns out it's glitchable too, but it's much, much harder to glitch and you need multiple glitches. And the vendor response was awesome; they are also, as I heard, working on improving the documentation for this. So yeah, super cool. It's still not a full protection against glitching, but it gives you a certain level of security, and I think that's pretty awesome. Before we finish: is everything broken? No. These chips are not insecure; they are just not protected against a very specific attack scenario. Align the chips that you want to use with your threat model.
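The cross-check described above can be reduced to a tiny host-side model (a sketch of the concept, with invented names, not the LPC55S69's actual register interface): once checking is enabled, an access only succeeds when the bus-side security state agrees with the MPC's own per-block state, so a single glitch that fools the attribution unit still fails the comparison.

```c
#include <assert.h>

/* Host-side sketch of an MPC-style second check: the memory
 * protection checker keeps its own security state per memory block,
 * and with secure checking enabled, the bus attribution must match. */
enum sec { NS = 0, SEC = 1 };

struct access_check {
    int checking_enabled;    /* the "enable secure checking" bit */
    enum sec mpc_state;      /* MPC's state for the target block */
};

static int access_allowed(const struct access_check *c, enum sec bus_state) {
    if (!c->checking_enabled)
        return 1;                       /* no cross-check: the (glitched)
                                           attribution alone decides     */
    return bus_state == c->mpc_state;   /* states must agree             */
}
```

This is why the attack becomes a multi-glitch problem: the attacker now has to defeat two independent checks in one run instead of one.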
If fault injection is part of your threat model, for example if you're building a car key, maybe you should protect against glitching. If you're building a hardware wallet, you definitely should protect against glitching. Thank you. Also, by the way, if you want to play with some awesome fault injection equipment, I have an EMFI glitcher with me and so on, so just hit me up on Twitter and I'm happy to show it to you. So thanks a lot. Thank you very much, Thomas. We do have an awesome 15 minutes for Q&A. So if you line up, we have three microphones. Microphone number three actually has an induction loop, so if you're hearing impaired and have a suitable device, you can go to microphone three and actually hear the answer. And we're starting off with our signal angel with questions from the internet. Hello, internet. Hello. Are you aware of the ST Cortex-M4 firewall, and can your research be somehow related to it, or maybe do you have plans to explore it in the future? So yes, I'm very aware of the STM32F4. If you watched our talk last year at CCC, called wallet.fail, we actually exploited the sister chip, the STM32F2. The F4 has this strange firewall thing which feels very similar to TrustZone-M. However, I cannot yet share any research related to that chip, unfortunately, sorry. Thank you. Microphone number one, please. Hello, I'm just wondering, have you tried to replicate this attack on multi-core CPUs with higher frequencies, like two gigahertz, and if not, how would you go about that? So I have not, because there are no TrustZone-M chips with this frequency. However, people have done it on mobile phones and other equipment. There are a lot of materials on glitching higher-frequency stuff, but it will get expensive really quickly, because a scope where you can even see a two-gigahertz clock costs as much as a nice car. Microphone number two, please. Thank you for your talk.
For the firmware functionality to go from non-secure to the secure area, are there standard defined functionalities, or are there proprietary libraries from NXP or others? So the veneer stuff is standard, and you will find ARM documents basically recommending you to do this, and the toolchains, for example the one for the SAM L11, will generate the veneers for you. I have to be honest, I have not looked at how exactly they are generated; however, I did some Rust stuff to play around with it, and yeah, it's relatively simple for the toolchain, and it's standard. The signal angel is signaling. Yeah, that's another question from the internet, but also from me: I wanted to know how important hardware security is in comparison to software security, because you cannot hack these devices without having physical access to them, except for this supply chain attack. Exactly, and that depends on your threat model. If you build a hardware wallet, you want to have hardware protection, because somebody can potentially steal it very easily. And if you look at your phone, you probably don't want anyone at customs to be able to immediately break into your phone, and that's another point where hardware security is very important. With the car key it's the same: if you rent a car, the car rental company hopefully doesn't want you to copy the key. And interestingly, probably one of the most protected things in your home is your printer cartridge, because I can tell you that the vendor invests a lot of money into you not being able to clone the printer cartridge. So there are a lot of cases where it's maybe not the user who wants to protect against hardware attacks, but the vendor. Microphone number one, please. So, thank you again for the amazing talk. Thank you. You mentioned higher-order attacks, I think, twice, and for the second chip you actually said you broke it with two glitches, two exploitable glitches. Yes.
So what did you do to reduce the search space, or did you just search over the entire space? So the nice thing about these chips is that you can decide when you turn the Security Attribution Unit on. So I had a GPIO go high, then I enabled the SAU, and then my search space was very small, because I knew it would be just after I pulled up the GPIO. So I was able to very precisely time where I glitch, and because I wrote the code that does it, I could almost count on the oscilloscope which instruction I'm hitting. All right, thank you. Next question from microphone number two, please. Yeah, thank you for the talk. I was just wondering: if the vendor were to include the capacitor directly on the die, how fixed would you consider it to be? So against voltage glitching it might help, it depends, but for example on a recent chip we just used negative voltage to suck the power out of the capacitor. And you also have EMFI glitching as a possibility, and EMFI glitching is awesome because you don't even have to solder: you basically just put a small coil on top of your chip and inject the voltage directly into it, behind any of the capacitors and so on. So it helps, but often it's not done for security reasons, let's say. Next question, again from our signal angel. Did you get to use your own custom hardware to help you? Partially; the part that worked is the summary. Microphone number one, please. Hi, thanks for the interesting talk. All these vendors pretty much said this sort of attack is not really in scope for what they're doing. Are you aware of anyone in this category of chip actually doing anything against glitching attacks? Not in this category, but there are secure elements that explicitly protect against it. A big problem with researching those is that it's also, to a large degree, security by NDA, at least for me, because I have no idea what's going on.
I can't buy one to play around with, so I can't tell you how good these are, but I know from some friends that there are some chips that are very good at protecting against glitches, and apparently the term you need to look for is "glitch monitor": if you see that in the data sheet, it tells you that they at least thought about it. Microphone number two, please. So what about the brownout detection at Microchip? Say, why didn't it catch your glitching attempts? It's not made to catch glitching attacks. A brownout detector is mainly there to keep your chip stable: for example, if your supply voltage drops, you want to make sure that you notice and don't accidentally glitch yourself. So if the chip is running on a battery and the battery goes empty, you want your chip to run stable, stable, stable, off; that's the idea behind a brownout detector, as I understand it. But they are not made to be fast enough to catch glitching attacks. Do we have any more questions from the hall? Yes. Yes, where? Thank you for your amazing talk. You have shown that it gets very complicated if you need two consecutive glitches, so wouldn't it be an easy protection to just do the stuff twice or three times, and maybe randomize it? Would you consider it impossible to glitch then? So adding randomization to the point in time where you enable it helps, but then you can trigger off the power consumption and so on. And I should add, I only trigger once and then use just a simple delay, but in theory, if you do it twice, you could also glitch based on the power consumption signature and so on. So it might help, but somebody very motivated will probably still be able to do it. Okay, we have another question from the internet. Is there a mitigation for such attacks that I can do at the PCB level, or can it only be addressed at chip level?
Only at chip level, because if you have a heat gun, you just pull the chip off and do it in a socket. Or if you do EMFI glitching, you don't even have to touch the chip: you just go over it with the coil and inject directly into the chip. So the chip needs to be secured against this type of stuff, or you can add a tamper-protection case around your chips. Another question from microphone number one. So I was wondering if you've heard anything, or nothing, about the STM32L5 series? I've heard a lot, I've seen nothing. So yes, I've heard about it, but it doesn't ship yet, as far as I know. We are all eagerly awaiting it. Thank you. Microphone number two, please. Very good talk, thank you. Will you release all the hardware designs of the board and the scripts? Is there anything already available, even if, as I understood, it's not all finished? Yes, so there's chip.fail; the .fail domains are awesome. Chip.fail has the source code to our glitcher. I've also ported it to the Lattice, and I need to push that, hopefully in the next few days, but then all the hardware will be open-sourced too, also because it's based on open source hardware. We're not planning to make any money or anything using it; it's just to make life easier. Microphone number two, please. So you said already that you don't really know what happens at the exact moment of the glitch, and you were lucky that you maybe skipped an instruction. Do you have a feeling what is happening inside the chip at the moment of the glitch? So I asked this precise question, what exactly happens, to multiple people, and I got multiple answers, but basically my understanding is that you pull down the voltage that it needs to set, for example, a register. But it's absolutely out of my domain to give an educated comment on this; I'm a breaker, unfortunately, not a maker, when it comes to chips. Microphone number two, please. Okay, thank you. You talked a lot about chip attacks; can you tell us something about JTAG attacks?
You mean attacks using a connection to JTAG? Yeah, so for example the attack on the KPH version of the chip was basically a JTAG attack: I used JTAG to read out the chip, but I only had JTAG in the normal world. However, on a lot of chips it's possible to re-enable JTAG even if it's locked, and for example, again referencing last year's talk, we were able to re-enable JTAG on the STM32F2, and I would assume something similar is possible on this chip as well. But I haven't tried. Are there any more questions? We still have a few minutes. I guess not. Well, a big warm round of applause for Thomas Roth.