Down with the glasses, Lars and Christian. All of them work for a spin-off of the Ruhr University Bochum. Please welcome them with a huge round of applause. Yeah, so thank you very much. Thank you for being here. I'm Christian. The tall one is Lars. The muscle man is David. So, yeah, we are coming from Bochum. It's a nice city, like two and a half hours away from Leipzig, so very close, more or less. PHYSEC is a spin-off of the university. We are now about two and a half years old, and this is actually the very first time that we really present insights into our technology. The research team behind the Enclosure-PUF technology is not just we three. The major people are also Christof Paar, professor for the chair of embedded security at Ruhr University Bochum, as well as Johannes Tobisch. He's also here, but he's super shy, so he doesn't want to be on stage. That's a joke. Three people are enough. And ourselves. So maybe you know Bochum because of the Ruhr University, specifically because of the Horst Görtz Institute for IT Security, but you maybe also know Bochum because of Nokia and Opel. But because we don't have Nokia and Opel anymore, we also created PHYSEC, to get all the people... This is not true. It is true that we have 26 professors at the HGI. We have more than 100 alumni, and we are one of 15 startups. Some of them are not startups anymore, and this is a nice ecosystem for us and for IT security. In Bochum there will also be the Max Planck Institute for Security and Privacy, which is pretty nice. We have the European Competence Center for IT Security. We have the FluxFingers, they're also here. And I guess Bochum is good at publishing real-world attacks. And real-world attacks are also our background, specifically real-world attacks on embedded systems. And to give you an idea of what we are actually doing when we are not presenting novel technologies like this: as I said, implementation attacks.
For the people who like statistics: physically insecure IoT devices are actually the major concern for the IoT, for the Internet of Things, for all the connected devices we will have in the near future, which is a real issue. For those who like concrete examples: of course we have a lot of different attacks, like mathematical attacks and social engineering, but specifically implementation attacks. And implementation attacks can again be classified into different families, like active and passive ones. The passive ones are side-channel attacks, where you use just the timing of the algorithms, electromagnetic emanations or power consumption, and then with maybe some simple analysis, or even some statistical analysis, you can easily extract the keys. I will show you some very raw examples later. And on the other side we have active attacks. This attacker has a device and can maybe inject faults, maybe with lasers, maybe with clock glitches, power glitches, or he reverse-engineers the entire device. And what is important: these attacks are independent of the algorithms or of the math. You can apply them to virtually all ciphers in the field. A little illustration of power or timing analysis, using maybe a PicoScope or an oscilloscope, or of injecting faults into the chip. Concrete examples are given here. The credit goes to our chair, the chair of embedded security, where my colleagues analyze these kinds of attacks a lot. And here you can actually see the calculation of an RSA algorithm. I'm not going into the details, but they're doing some exponentiations and modular reductions, and to make it more efficient, you reduce the complexity of a multiplication to a squaring whenever you can. And when you do this, you think, okay, this is nice, it makes it efficient and faster. It's actually a problem when you just take a look at the power consumption, because you can easily read out the private key.
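The square-and-multiply leak just described can be sketched in a few lines. This is a hypothetical toy illustration, not the chair's actual measurement setup: the `trace` list of operation labels stands in for a recorded power trace, a '1' key bit costs a squaring plus a multiplication, a '0' bit only a squaring, so the operation sequence reveals the private exponent.

```python
# Toy sketch: why naive square-and-multiply leaks the RSA private
# exponent through a power/timing side channel. The `trace` list stands
# in for the power trace an attacker would record with an oscilloscope.

def square_and_multiply(base, exponent, modulus, trace):
    result = 1
    for bit in bin(exponent)[2:]:            # scan exponent bits, MSB first
        result = (result * result) % modulus
        trace.append("S")                    # squaring: one kind of power blip
        if bit == "1":
            result = (result * base) % modulus
            trace.append("M")                # multiplication: a longer blip
    return result

def recover_bits(trace):
    """Read the exponent straight off the operation sequence."""
    bits, i = [], 0
    while i < len(trace):
        if i + 1 < len(trace) and trace[i] == "S" and trace[i + 1] == "M":
            bits.append("1")                 # "square then multiply" -> bit 1
            i += 2
        else:
            bits.append("0")                 # lone squaring -> bit 0
            i += 1
    return "".join(bits)

trace = []
ciphertext = square_and_multiply(42, 0b101101, 2357, trace)
print(recover_bits(trace))                   # recovers the exponent bits
```

Countermeasures such as always performing a dummy multiplication, or using a Montgomery ladder, exist precisely to make the operation sequence independent of the key bits.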
And this is possible when the attacker just has the device and can probe it. A little bit more expensive are fault injection attacks, where you shoot with maybe a focused ion beam at the circuit while the system is actually in operation, and then you create faults. And when you compare the result of a faulty operation with the correct operation, then you can again easily attack such implementations. If you have more interest in this topic: my colleague Falk Schellenberg is also somewhere here. Can't see him. He just finished his PhD, so we can call him Dr. Schellenberg, but he's not allowed to call himself Dr. Schellenberg right now. So he's in exactly that stage of the PhD. Anyway. Okay. And he's also an expert in hardware reverse engineering, where you use an acid to open the chip and then analyze what's inside. Maybe you find weaknesses, maybe fuses or something you can use to read out keys, or you completely reverse-engineer the design. Another very interesting attack is the implantation of trojans, specifically hardware trojans, into devices, and here's an example, also from a colleague from Bochum, who implemented a hardware trojan in a high-security USB stick. And this is also something we are targeting with the technology we are presenting. So with this knowledge in the back of your head, when you are a designer of an embedded system, you have to handle it. This means you have to create systems accordingly. Maybe you remember the talk yesterday about smart home hacking. The speaker said, okay, the companies are all the time claiming that it's military-grade security. Maybe they use some primitives like AES or something the military is using as well, but from the physical security point of view, there's actually no military-grade security on the market. Not at all. They are very far away from the stuff the military is actually doing.
So, however, also in this space, the physical security, we have to go closer into the area of really physically securing the devices. And then, I would say, a classical developer of IoT devices will have a very hard time putting all this stuff into the design process: developing side-channel-resistant algorithms, designing a system where you cannot probe data between two chips. Let's say you have a secure element on the device and also a classical microcontroller, and you don't want it to be manipulated; then you have to somehow secure the entire system and not just a single chip. So this is a huge problem. And for all these applications, the big question is how to actually protect systems like this from physical attacks, starting from real IoT devices like smart home systems or alarm systems, up to financial systems like ATMs, or maybe even satellites and other more critical applications, up to really high-security applications. And maybe you remember the warhead verification talk from last year's congress, from our friends from Princeton University, actually from the Nuclear Futures Lab. They also have some interesting use cases where this technology might be a nice fit. Okay. So at this point you might be wondering: isn't there anything that already exists that protects our devices against tampering? And I will give an overview of what currently exists on the market. First of all, four different approaches to guaranteeing tamper resilience exist. Tamper resistance, in which tampering is just made difficult for the attacker. Tamper evidence, where any intrusion attempt must be evident. Going on, tamper detection makes sure that tampering is detected and actively reported to the user or owner of a device. And lastly, the highest goal, which we are trying to achieve with our solution, is tamper responsiveness, in which countermeasures are automatically engaged once tampering is detected.
There's one big standard that at least covers how tamper responsiveness should work in practice, which is FIPS 140-2, and in this standard four increasing levels of tamper security are given. We are trying to reach the highest level, which is Level 4, and this is required for all highly sensitive environments that the US government has servers or appliances working in. So what does Level 4 demand? First of all, any attack must be detected: micro intrusions, environmental attacks where, for example, a deep freeze is conducted, and basically anything that somehow tampers with the physical integrity of the device. Secondly, breaches must zeroize all CSPs. CSP stands for critical security parameters: your cryptographic material, your sensitive data. All of this must be deleted in the case of a tamper event. Thirdly, the CSPs must be separated from the main system, which is something you might already know from red/black separation in hardware security modules. And lastly, the whole setup must be engulfed in a complete tamper detection and response envelope. We'll go into detail about that later. Sadly, no public benchmarks exist on which attacks are tried out to break or rate physical tamper resistance. Maybe there are some which are classified; we can't access them at the moment, though. So, coming to tamper resistance. This is something we quite commonly see in everyday life, like the potting of electronic components, which is also sometimes done to protect against water damage. Or using totally secure one-way screws. I mean, there's no way we could get around this, right? Well, we can. Obviously, we just need some proprietary tools, or we chip away at the electronic potting. So this is not really a solution. However, it protects against vandalism in public places, it's very cheap, and it's widely used. But not what we want for our system. Secondly, there's tamper evidence.
This is something most of you will be familiar with as well, if you try to repair your own phone but you are not allowed to do it. On the upper right-hand side you can see the typical warranty seals that are used in electronics. So you void your warranty once you remove it. The first big issue where tampering was the main concern was the Tylenol bottles. I don't know, does anybody of you know the story behind this? Two or three hands. There was a serial killer in the United States who poisoned these small containers, and as a result they started using these safety seals. But same as with the previous approach: widely used, cheap, and ineffective. There's a very nice talk from DEF CON 19, which we highly recommend, where they go into detail on how to easily circumvent these measures. One approach we want to point out, because it was mentioned last year, is to shine bright like glitter nail polish. You can very easily protect your laptop during shipping by just using glitter nail polish. First of all, you cover all screw holes of the laptop using stickers, just off-the-shelf stickers, and then you cover the brim of the sticker with nail polish. Then you take a high-resolution picture of this nail polish, sign the picture with your private key, and upload the photo. So the person receiving the laptop can take another photo and check whether the glitter particles are the same. Quite easy and very effective. However, this doesn't guarantee any higher level than tamper evidence. Coming to tamper detection. Now, this is the first step where you really need to think about what you're doing, because having tamper detection means you need some kind of sensor that is able to detect the opening or tampering of the device. What you can see on top is a PCB that has a small photoelectric diode, and as soon as the case is opened and light shines onto the diode, the system knows that it's being opened and probably tampered with.
Several other methods exist as well, such as switches that just trigger when you open the case, and so on. The benefit of this is that you don't need complex circuitry. You can just have a small switch that can be read out, and you're done. False positives do not destroy the critical security parameters, because there's just a notification going out. Maybe the notification might be wrong, but that doesn't destroy anything. Now, to what we want to achieve: tamper responsiveness. Current solutions are mostly based on meshes that are wrapped around the hardware security module, because that's where tampering is the biggest issue, and if you try to get to the bus lines of these devices, you will destroy that mesh. The capacitance, for example, of these meshes changes, and that can be measured by a deletion circuit, and the deletion circuit then automatically deletes any information on the device. This is state of the art. This is used by almost all hardware security modules, but there's one big issue, or two big issues. First of all, you need a battery. If the battery runs out, as a last dying breath the system deletes itself, which is of course not wanted if you just want to store it on a shelf for some months. Secondly, you can only protect a very tiny area, because it's quite hard to engulf a whole system in this mesh, due to the need for air and heat spreading. So, FIPS 140-2 Level 4 is really hard to obtain. There are only three modules worldwide that currently fulfill this requirement, and only 14 modules have reached it since, I think, 2005 or so. The constant need for power is of course troublesome, and no off-the-shelf solution currently exists. So you can't get any module that says: hey, if you build this into your system, you are FIPS 140-2 Level 4 certified. This of course makes retrofitting existing machines hard.
There are several use cases where it's just desirable to retrofit an old machine. Especially ATMs are highly expensive, having the vault in the bottom where the cash is stored. Moving it, reinstalling something — that's a process you don't want to go through if you could just plug in a new module that guarantees tamper protection. So, now coming to where we are at, or what we want to achieve. Currently, both software and hardware solutions exist that try to detect tampering, and our solution, which is symbolized by this little red dot, tries to go for a new approach that combines hardware and software in a clever way, while making it easy to redeploy existing devices after being fitted with our solution. Our solution is based on physical disorder, meaning we want to exploit small effects that occur during production. Some of you might know that electronic circuits have small variations that can be used to generate keys or to verify that the system has not been tampered with, and we go in a similar direction. Usually physical disorder is something you don't want: you want everything you produce to be exactly the same, but you'll never manage, due to these microscopic effects that you can see up there. So, we're using physically unclonable functions, PUFs. I'm sure some of you will have heard of them. Could you raise your hands? Okay, that's the majority. Nice. So I'll keep it brief. You take a challenge that you throw into some kind of random system and you get a response out, and thus you can build a good old challenge-response pair. We'll focus on just having a weak PUF, meaning we have only one challenge and one response, but we are sure that the entropy we have in our physical environment is sufficient. The properties are: they are very easy to evaluate but very hard to predict. So you need to be able to conduct an evaluation very quickly, but you're not able to simulate the environment using advanced simulation tools and extract the PUF response.
Secondly, they're easy to manufacture but hard to duplicate, so even if you put this thing into a CT machine and really scan every layer and try to 3D-print it or something like that, you won't be able to get the precision you need. Now, to some mathematical aspects. There exists the notion of algorithmic tamper-proof (ATP) security. Most security goals, or most security games that are played in cryptography, are based on mathematical functions that you want to reduce your problem to, and ATP extends this model by saying: okay, but what if we don't have a black box of crypto that we only have an input and an output to — what if we can tamper with all of the internals as well? The researchers had three items they wanted to have checked before they say, okay, this is algorithmically tamper-proof. First, there needs to exist some kind of secure hardware storage which there is no possible way to read once the system has been tampered with. Secondly, the device must be able to destroy itself. And thirdly, there must be some kind of hardware that cannot be manipulated by an attacker. It may be read, but it must not be changed. And the first criterion has already been fulfilled by some colleagues who presented this at CHES 2006. They used a small coating that they applied to an integrated circuit, and they applied sensors to the integrated circuit that measure the capacitance of the coating. So they had 30 of these sensors all over the circuit, and then they shot a small hole using a gallium ion beam, creating a 100 micrometer by 100 micrometer by 1.5 micrometer hole, and they were able to successfully detect this and render the chip unusable. This is not the resolution we are going for, because most solutions we are aware of target the rough military requirement of detecting a hole with a 300 micrometer diameter. This is what we are going to go for as well.
So now we're coming to our key idea, which Christian will present. Okay, perfect. So I guess we learned already a lot. We learned the concerns, the problems. We learned the state-of-the-art solutions. We learned that really wrapping a system in a mesh which can actually detect 300 micrometer holes is the kind of goal we want to achieve. And this standard, FIPS 140-2 Level 4, tells us that we have to do this as a complete envelope around the system. And right now this is only possible for PCBs, but not for entire systems. The idea we are using here comes from a completely different area, but touches the concept of physical unclonable functions. What we are using are electromagnetic waves, specifically the propagation effects of waves when they touch surfaces between the transmitter and the receiver, and vice versa. These physical effects are very complicated. There are complex effects, like the classical, maybe well-known effects of reflection, diffraction, absorption, scattering and refraction, and also some nonlinear effects. Later I will show you maybe a little bit more about these details. But this is the idea. The channel can be characterized by something called the channel impulse response. So here you have an example: someone sends the signal s(t), e(t) is the signal the receiver will receive, and the channel is h(t), and this is characterized by exactly the change of the signal. And this is the source, let's say the source of entropy, which extracts entropy from this physical disorder we are using. Channel state information contains pretty much the same information, just in a different way. It's represented in the frequency domain, and it's in a specific form such that it fits very well for equalizers and filters, specifically on the receiver side, but for modern systems also on the transmitter side.
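As a minimal numerical sketch of this relationship (with made-up multipath values, not measured data): the received signal e(t) is the transmitted signal s(t) convolved with the channel impulse response h(t), so probing with an impulse lets the receiver observe h(t) directly.

```python
import numpy as np

s = np.array([1.0, 0.0, 0.0, 0.0])   # transmitted probe: a single impulse
h = np.array([0.8, 0.0, 0.3, 0.1])   # assumed channel: direct path plus two
                                     # delayed, attenuated echoes
e = np.convolve(s, h)                # received signal e(t) = (s * h)(t)
print(e[:4])                         # for an impulse input this is h itself
```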
So we are using this technology, called wireless physical layer security, and apply it in an enclosure. This means we use electromagnetic waves, send them from inside of an enclosure into the environment, and estimate a fingerprint, the channel response, at one or maybe several different antennas. And here we have an example of how such a fingerprint looks, just for the magnitude. There is also phase information, but we leave this out, maybe for simplicity. This enclosure can in this case be, for example, an ECU or a control module of a car, but it can actually be more or less everything. We did this for ATMs and for a couple of other systems we are actually not allowed to talk about, and we can do this also for much more complex structures, even if there are moving parts, and even if there are temperature changes — then it will get more complicated, but we skip this, just to give you an idea of how it actually works. A very important part of the algorithm is that we use this enclosure fingerprint to generate a cryptographically usable key with a security level of, let's say, 128 bits, based on statistical tests and entropy estimators. So this is the general idea, and if someone tries to tamper with the system, by maybe drilling a small hole in the case, he will automatically destroy the key. And maybe a small spoiler: this is already the self-destruction mechanism David was talking about. So this is the core idea, and now the question is: what kind of wavelengths do we use? We use electromagnetic waves, and you maybe know that depending on the wavelength you get a bigger or higher resolution — the higher the frequency and the smaller the wavelength, the more information you can get out of the environment. The propagation effects are also a function of the wavelength, and there are even more complicated parts, but these are, I guess, the basics you have to understand. So which frequencies are well suited for us?
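As a quick aside, a deliberately naive sketch of turning such a fingerprint into a 128-bit key. The fingerprint values here are invented, and the real system needs the error-tolerant machinery described later in the talk, since a plain hash like this would change completely under normal re-measurement noise:

```python
import hashlib

# assumed fingerprint: channel magnitudes measured inside the enclosure
fingerprint = [0.82, 0.11, 0.64, 0.37, 0.91, 0.05, 0.73, 0.48]

# quantize each value against the median to get fingerprint bits
median = sorted(fingerprint)[len(fingerprint) // 2]
bits = bytes(1 if x > median else 0 for x in fingerprint)

# derive a 128-bit key; drilling a hole changes the magnitudes,
# changes the bits, and thereby destroys the key
key = hashlib.sha256(bits).digest()[:16]
print(key.hex())
```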
There's of course a very wide frequency range, from very low, like 3 kilohertz, up to terahertz technology and even up to the frequency space of light. However, we are using frequencies in the area between 300 megahertz and 60 gigahertz. So we have wavelengths of about one meter down to, let's say, one centimeter. Now the major question is: why is this working? We know the rule of thumb that changes of electromagnetic wave propagation are somehow related to object sizes on the order of the wavelength, and even if we use 30 gigahertz, we have a wavelength of one centimeter, and then maybe only changes of one centimeter could be detected. So what is the idea here? Why is this actually working? It's working because we are manipulating the antennas in a way that extends the near field of the antennas, and in this case we can actually go from the wavelength dependency down to a dependency of a thousandth of a wavelength. When we use, let's say, just the ISM bands, which are the only ones we are actually allowed to use due to regulation, we come, for example at 433 megahertz, from 70 centimeters down to 0.7 millimeter resolution, and when we use 5 gigahertz we can even go down to about 60 micrometer resolution. And this is good. So what can we do now? We can actually now achieve algorithmic tamper-proofness. Why? Because we generate a key, the blue key here, from the inside, and this key can only be recovered from the inside, and only when the integrity of this enclosure has not been violated. So if nobody changed anything, then we can actually achieve the goal of read-proof hardware, because the key is not stored digitally. The key is the environment and the physics. If you change the physics, you destroy the key, and therefore you also have the feature of self-destruction. And now you can use the key to encrypt stored data, which is integrity-protected in the device, and with this nice combination you achieve algorithmic tamper-proofness.
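The numbers behind this can be checked with the usual λ = c/f relation; the resolution figures then follow from the claimed thousandth-of-a-wavelength scaling of the extended near field (a back-of-the-envelope sketch — the factor of 1000 is the talk's claim, not a law of physics):

```python
C = 299_792_458.0                      # speed of light in m/s

def wavelength(freq_hz):
    """Free-space wavelength in meters."""
    return C / freq_hz

for f in (433e6, 5e9, 60e9):
    lam = wavelength(f)
    print(f"{f/1e9:6.3f} GHz: lambda = {lam*100:6.2f} cm, "
          f"lambda/1000 = {lam*1e6/1000:7.1f} um")
```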
And the interesting feature here is — it's actually a little bit more complicated, but — you don't have digital keys stored in the device anymore. Of course we have to make a difference between the offline state and when the device goes into the online state: then it extracts a key from the environment and does some kind of self-verification. There's another, different algorithm working there, so we have to apply somewhat more complex mechanisms here, but at the end of the day we don't have to store digital keys. We have to take care, of course, that in the online case, when the system gets attacked, the key gets erased very fast from the RAM, and then we of course have to think about freezing attacks and stuff like this. However, at the end we can actually achieve the goal. Due to the mechanism itself, we also don't need a battery for a deletion, erasing or zeroization circuit, because the data, the critical security parameters, are encrypted on the device the whole time. If the attacker opens the device, he finds encrypted data, encrypted with the key he already destroyed, because he changed the physics of the environment. And that's it. And lastly we'll show you a very nice demonstration we prepared just for this congress, where we can actually retrofit very standard hardware, in this case a single-board computer, using very cheap enclosures. So maybe a high-level overview of such a system: you have a host system, maybe the computer in an ATM, and you connect to the secure boot or maybe even to the TPM of this host system our PHYSEC testbed, which consists of at least two antennas. We estimate the channel between sender and receiver to extract a response, the PUF response, and then there's an algorithm, a mechanism, to extract the key out of this fingerprint. And this is the thing we need to add to the system. Of course, sometimes we also need to add some kind of sealing or enclosing material around the system, maybe aluminum foil, but this can also be
some kind of small mesh, a copper mesh for example, which has better air-conditioning capabilities, stuff like this. And then still a challenge is how to actually securely bring power and data communication into the system, passing through the enclosure material. Yeah, so, like Christian said, now I am going to show how we can build a low-cost proof of concept, and we brought a demonstrator here. So, yeah, we are mostly leaving the theoretical part and going to something more practical. First of all, we are going to use physical disorder, like mentioned before, for generating some enclosure, and what we also need is radio-enabled commodity hardware with at least two transmitters, but there can also be more. And as mentioned before, we use aluminum foil, which we have here, and we use this cool box — it's a lunch box where you can put in noodles or something. The first proof-of-concept demonstrator is using a Raspberry Pi. It has two narrowband radios connected, which are sending at 868 megahertz with two megahertz of bandwidth, and as enclosure we use the smaller one of these boxes and aluminum foil. But the problem is that the resolution is not so high, so we built another one, the demonstrator we brought here. For that we use an APU single-board computer, which is serving as our protection module. It's equipped with four antennas, so we have a 2-by-2 spatial system, a MIMO system, and it brings us 40 megahertz of bandwidth at 5.5 gigahertz, so there's more resolution. We're just using this lunch box for the enclosure. So — switch to the camera, please. This is our APU, here are the antennas, four of them. We put the APU inside the box, placing the antennas somewhere, and, yeah, put in some more enclosure randomness with this aluminum foil, putting it just over the system and closing the box with this. So we switch to the demonstrator — we need to change the screens, sorry. Sorry. Okay, so let's start with the demonstration. First of all, we have three GUIs for different demonstrations. Okay, I don't get it — of course, very
handy. Yeah, okay, let's try this. So the first GUI is just showing a 3D visualization of what the four spatial channels are actually measuring. That's how it looks; you can even move it, really fancy. You see, if we don't do anything to the system, it's relatively constant and it looks really typical the whole time. But what if we open the device? When we open the device, we see there's very much happening, and if we put this away, so many things happen — it doesn't look like before anymore. And if we put it back here, we still see it's not the same as before. So we destroyed the enclosure the way it was before. A very primitive attack. Well, of course we can think about an attacker who is more clever. We brought these Angry Birds with us — they're our needles, with a tiny Angry Bird on them. Camera switch, maybe. Yeah, this one, it's very evil. So — camera switch again, please — this needle, or this bird, is very angry and wants to probe maybe some buses in our system and wants to go into our system. So we show what happens, or how it looks, when this happens. Yeah, so we have another demonstrator which is using some metrics, so the changes are persistently visible, not just on the monitor. Okay, so first of all we are generating some reference values. What is shown now is the Euclidean distance — maybe some of you know this metric — and now we are just comparing all measurements which come in from now on with the first measurements. So we see there's a little variance in what we are measuring, but it's more or less very reliable. But what if this angry needle comes inside? There are huge changes in the system, and even if I pull this angry bird out again, the changes are persistent, very well visible. Yeah, okay. So then we have a third GUI, which is also showing this in a good evaluation system: how we can generate a key from this — because these were just some metrics you can look at visually, and now we're going to, yeah, generate a key. First of all, we are
generating a reference value, then valid values — so you see the system works — and then we perform an attack and measure again. The crucial idea is now that we can generate the same key from the reference and the valid measurement. Yeah, that didn't work, it's okay. So we see, when we measure, in the beginning we get a bit error rate for the valid measurement of 22% and an attack bit error rate of 47%. Then we're doing some information reconciliation, where we used not the optimal parameters, so we corrected the bit error rate to 9%, and after this the strings are hashed into another value. So when the information reconciliation doesn't work perfectly, so we don't get a bit error rate of 0.0, of course we go to a bit error rate of nearly 0.5. Well, we can optimize this more and more over time. So you see the curves above, where the white one, I think, is the reference measurement we had, and the green one is very similar, and the red one is the measurement after the attack. So the characteristic really changes a lot, and with more parameter optimization it is possible to generate a key perfectly. Yeah. Yeah — what?
0.5, not 0.5 percent, yes. Okay. So, there are some pictures for the case that the demonstrators wouldn't have worked, so I skip them, and we go to the key extraction: how this works, and what the design requirements for the fingerprints are. We have three design requirements for fingerprints. First of all, the key quality is very important, because if we generate a fingerprint where all values are just 0s or 1s, there isn't really randomness in it, and if this isn't achieved, well, it's not so good, because we want at least — it was, yeah, 128 bits of randomness to be extracted. The second point which is important is the reliability. If we measure over time, we don't want the system to generate different keys; we always want to get the same key generated, because when we destroy our own key with a false positive, that's very bad. The last point which is important is sensitivity: we want to detect even the smallest attacks, and if our system gives us a very good key the whole time but doesn't detect any attacks, that's very bad. Yeah, so this is reliability versus tamper sensitivity. Reliability means when we measure again and again we get a very low bit error rate, and when an attack has happened we just get a very high one, so it won't get corrected by the information reconciliation. What we actually do is: we have a reference measurement, we have valid measurements, they get quantized, and after this we do information reconciliation. The goal is of course that after this information reconciliation we have the same value in our valid measurement and our reference measurement, because after this step the values are hashed, and so even one wrong bit would lead to a bit error rate of 50%, or nearly 50%. So, I skip this for you, I think. So, summary. We learned three things. First of all, that physical access enables major attack vectors, for example bus probing, which the evil bird should demonstrate, and of course some things like side-channel attacks, power measurements and stuff. The next thing is:
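The quantize-reconcile-hash pipeline just described can be sketched as follows. Everything here is simulated (Gaussian toy fingerprints, a sign quantizer, and no real reconciliation code), but it reproduces the two effects from the demo: a valid re-measurement has a small bit error rate, an attack sits near 0.5, and after hashing even a single uncorrected bit flips about half of the final key bits.

```python
import hashlib
import random

random.seed(1)
reference = [random.gauss(0, 1) for _ in range(128)]     # stored fingerprint
valid  = [x + random.gauss(0, 0.1) for x in reference]   # re-measurement noise
attack = [random.gauss(0, 1) for _ in range(128)]        # changed enclosure

def quantize(samples):
    return [1 if x > 0 else 0 for x in samples]          # 1-bit sign quantizer

def ber(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

ref_bits = quantize(reference)
print("valid  BER:", ber(ref_bits, quantize(valid)))     # small
print("attack BER:", ber(ref_bits, quantize(attack)))    # near 0.5

def hash_bits(bits):
    digest = hashlib.sha256(bytes(bits)).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

one_off = ref_bits.copy()
one_off[0] ^= 1                                          # one uncorrected bit
print("post-hash BER:", ber(hash_bits(ref_bits), hash_bits(one_off)))
```

This is why the reconciliation step must bring the valid bit error rate all the way to zero, while leaving the attack bit error rate uncorrectable.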
no system-level protection for commodity hardware exists on the market at the moment, especially nothing that works without a battery. And the last thing we did was presenting a solution, we call it the Enclosure-PUF, that is based on standard hardware, so it's very, very cheap, and we can deploy this on many systems, so no new systems need to be created; we can extend IC security and PCB security, and it fulfills the ATP criteria. So, thank you very much. Are there any questions? If you do have a question, please line up at the microphones; if you have a question on the internet, just ask on the net. I think we have one online question. There are two questions from the internet. The first one: what kind of enclosures are stable enough over a long time and in changing environments for this approach? Sorry, could you repeat that? What kind of enclosures are stable enough over a long time and in changing environments for this approach? This is a very good question. The reliability is actually quite dependent on the application, so this really depends on the environment: is it an aluminium case, is it a case made of steel, is it a combination, in which environment is it installed, do we need to secure the system during transportation or just after it gets installed? So it's really application dependent; the answer is extremely application dependent, there's no clear answer. Second question: how difficult is it to distinguish between an attack and just bumping on the case of the device? Well, it's a good question. This is probably the easiest way to denial-of-service the system: you can just bump on it. And this is again the same problem as for an HSM: you can drop an HSM and try to use it afterwards, and in most cases that's not possible anymore. So this is again something where we have to parameterize this triangle. I mean, key quality is nothing we can really change, this is given by the physics and the measurement system, but sensitivity and
reliability are something where we have to agree on a kind of compromise, and this is also dependent on the application. Thanks. Thanks for the talk, I have a question: is it possible to have some kind of backdoor planted into this kind of system? I mean, like some kind of hole, so you can just plug in some probe. We thought about this for a moment, because if you have a false positive and you destroy your critical security parameters, of course this is terrible. What we thought about was that we initialize the system in a secure environment, and it might be possible to have a function where you can extract the legitimate key right at the start and store it in some safe location, so yeah, you can back it up, if that's the question. But having a typical backdoor that enables us to apply some kind of tampering that will go undetected, that's not something we ever built. Excuse me, one question: not a backdoor itself, but maybe some kind of weak point, so that later someone else can do this kind of probing and extract your keys afterwards? Nope, none that we planted, so no. I mean, we are also very open to collaboration, specifically with partners who try to attack the system. As I said, my shy colleague Johannes does nothing else than trying to attack the system, and until now we haven't found any potentially useful attack. But of course we could add backdoors; this is just nothing we will do. All right, microphone number two, your question. Hi Christian, nice work, just a few questions: how do you calculate the secrecy capacity for the entropy, and what do you use for quantization, information reconciliation and privacy amplification? All our company secrets are being asked for! So, the mutual information is estimated by a k-nearest-neighbor mutual information estimator, and then we subtract the mutual information available to a potential attacker from the mutual information between the legitimate parties, so X is the reference, Y is the legitimate case, and Z is the attack, and
this is how we calculate the capacity. As I said, we use a k-nearest-neighbor entropy estimator; if you have further questions, this is answered in my dissertation. For quantization we use an equally distributed quantization scheme, and for information reconciliation we use a fuzzy extractor based on classical error-correcting codes with a good parameterization, which didn't work in the demo, but yeah. Thank you. Can you do the converse? Like, I have ordered 1000 chips from a factory and I want to show that they are actually equal and don't contain any hidden bug. So this is a question where we really like people who sponsor our work by sending us 1000 chips, and then we can figure this out. What we did do, however, is evaluate 100 different environments that were all equally constructed (we can talk about this in more detail later on); we did evaluate 100 different environments that were only slightly different, and no two fingerprints were even similar. Signal angel, any more questions from the net? Two more questions from the internet. The first one: the method requires a precise clock for frequency stability; is there any mitigation to compensate for high-gravity applications? I really didn't understand it, sorry, can you repeat it? Yeah: the method requires a precise clock for frequency stability; is there any mitigation to compensate for high-quality applications? Yeah, I mean, this is one of the biggest problems we need to solve. We can talk about this topic offline, so if you want to know how to maybe correct phase information, you can write an email; we have this email address, 35c3 at 560a, and then we can maybe talk about this offline. Microphone number 2. Hi, thanks for the great work. Actually, what you presented here was only detection of problems at boot, not at runtime, so actually I can boot the system, open it, and probe it for the key. Could you please state your question? Do you have any counterattacks? Yes, I mean, this is a good comment, and this is something I said on one
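The capacity calculation described in this answer is the difference I(X;Y) minus I(X;Z). The following toy example shows the shape of that computation; note it uses a simple histogram (plug-in) mutual information estimator instead of the k-nearest-neighbor estimator the speaker mentions, and all measurement data is synthetic:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys, bins=8):
    """Plug-in (histogram) estimate of I(X;Y) in bits for paired samples in [0, 1)."""
    def bin_of(v):
        return min(int(v * bins), bins - 1)
    bx = [bin_of(v) for v in xs]
    by = [bin_of(v) for v in ys]
    n = len(bx)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

rng = random.Random(1)
x = [rng.random() for _ in range(50_000)]                              # X: reference
y = [min(max(v + rng.gauss(0, 0.05), 0.0), 0.999999) for v in x]       # Y: legitimate, noisy copy
z = [rng.random() for _ in range(50_000)]                              # Z: attacker view, independent here

# Secrecy capacity lower bound in the source-model setting: I(X;Y) - I(X;Z)
capacity = mutual_information(x, y) - mutual_information(x, z)
print(round(capacity, 2))  # positive: the legitimate parties have an advantage
```

With an attacker who actually learns something about X, I(X;Z) grows and the extractable key material shrinks accordingly.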
slide, maybe you remember: there are two protocols, one for when you are coming from the offline to the online case, and one protocol which evaluates the environment during runtime, which is not part of this presentation. I think to answer this question properly, it makes sense to talk about this offline, if that's okay. Microphone number one, your question. So, in your pin demo, what have you done to make sure that the tamper trigger is actually based on the presence of the hole or the pin, rather than on deformation of the entire case? We can't, actually: when you apply some torsion to the case, you will destroy the key. So, depending on the application, we maybe need mechanically decoupled systems, where influences from the outside do not change the enclosure inside. So, I would ask: are you sure that you've actually measured the presence of a hole, rather than the case deforming, in your tests? In this demo I'm not sure, but in our lab environment I'm pretty sure, because we fixed everything, and there was just the motor drilling the hole; everything else was fixed. Signal angel, question from the internet: how can I trust the system that during production the physical key was not extracted by the manufacturer, as it is not changeable by the owner of the system? This is a very good question, and it's not a question we can actually answer with this technology in general. We can think about solutions for the trustworthiness of manufacturers, but we could also think about something like: the initialization of the system is not done in the manufacturing process but afterwards, or maybe partly, somehow. But this is, I would say, a general problem for manufacturing. There's one solution we thought about, and that is that the system comes delivered in an uninitialized state, and then you as the end user put something in that wasn't in there before. There are several things you could think about, like metal meshes that you as the final user just insert and pick yourself,
so this is something the manufacturer cannot guess. Thanks. How does the system compare to the one published by Fraunhofer AISEC last year, which, from my point of view, or as I understood it, just measures the integrity of a foil that is wrapped around the device instead of the whole physical enclosure? That seems more reliable to me; how does your system compare with that? So, maybe you saw the reference in our slides: Vincent Immler and his group were in our references. As we said, the difference is the PCB level versus the system level. When you wrap a foil around an entire system (I mean, the prototype system was like this big), you will have huge problems. When you think about an ATM, putting foil around it will be very complicated. So I think this is the biggest difference between PCB- or chip-level security and the system-level security that we do, but it's still comparable. So you mean it's difficult to wrap a foil around a large system? Yes, exactly, and then heat problems and things like this. Microphone number 2 again. I think you were first. To me it seems like this is, even more than usual, temperature sensitive. How much have you looked into that with respect to temperature? I would think it would change a lot; if you're wrapping a whole ATM that does get warm, and someone bumps against it, it seems like you're going to have your data erased every single day. So, Johannes is laughing because this is kind of his current topic. Yeah, I mean, temperature stability must be given somehow. We can extract fingerprints for different temperatures; we have a resolution of about one degree, so when the temperature changes by one degree, we kind of have to re-initialize the system. We have to take care that this is given; we can do this in the initialization phase, but this is something important, and we did a lot of measurements with cooling chambers to do exactly this. Yeah, I mean, when you re-initialize the system for the different temperatures, you achieve the
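The per-temperature re-initialization described in this answer could look roughly like the following sketch; the one-degree resolution comes from the talk, while all names and the stored payloads are hypothetical placeholders:

```python
# Hypothetical sketch: one reference fingerprint per 1-degree temperature bin,
# recorded during an initialization sweep (e.g. in a climate chamber).

def temp_bin(celsius: float) -> int:
    """Map a (non-negative) temperature to its 1-degree bin, the stated resolution."""
    return int(celsius)  # e.g. 21.7 C -> bin 21

# Placeholder reference data for an assumed initialized range of 15..30 C.
references = {
    t: f"reference-fingerprint@{t}C"  # stand-in for the real helper data per bin
    for t in range(15, 31)
}

def select_reference(current_temp: float):
    """Pick the reference recorded for the current temperature bin, if any."""
    return references.get(temp_bin(current_temp))

print(select_reference(21.7))   # the 21 C reference
print(select_reference(40.0))   # None: outside the initialized range, re-init needed
```

The real system presumably also has to handle temperatures outside the initialized range, which is exactly where the cooling-chamber measurements come in.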
reliability. Thank you. By the way, you're missing a slide on page 21. Thank you very much! All right, we're out of time for this talk. Are you guys still around for questions in the building? Okay, cool. Well, first of all, thank you