Hello everyone, I'm very happy to present the paper Trojan-Resilience Without Cryptography, written by Suvradip Chakraborty, Stefan Dziembowski, Tomasz Lizurej, Krzysztof Pietrzak, Michelle Yeo, and me, Małgorzata Gałązka. So what is this paper about? From the most general perspective, I would say that we investigate countermeasures against digital hardware Trojans. Existing cryptographic solutions against digital hardware Trojans most often use some heavy cryptographic tools. In our case we focus on extremely simple constructions, and still we achieve a meaningful security notion. The contemporary setting for producing cryptographic devices looks like this: there is a user who needs some device realizing a functionality F. She may even possess the circuit description, but she is not a manufacturer herself. So she simply sends the circuit description to some untrusted factory and receives a circuit, which is hopefully produced according to the specification. Unfortunately, such naivety may be exploited. We can easily imagine the opportunities given to an adversary who gets to produce the cryptographic device himself. Here you see some headlines from popular newspapers describing successful attacks exploiting digital hardware Trojans; some of them are probably still undiscovered. So it is a real-life problem, and as such it of course has its own place in the world of physical attacks on cryptosystems. Now I'll tell you a bit more about where hardware Trojan horses live. The black-box model of a cryptographic, or even a computing, device was challenged in the middle of the 90s (some of us would say even before). But anyway, in the middle of the 90s many papers were published describing successful attacks on cryptographic protocols that exploit the fact that a cryptographic device is a physical object.
According to the type of interaction between the adversary and the device, I think we can divide the attacks into three main groups. The first one is leakage attacks. In this case the adversary is the most passive: it simply observes and measures the cryptographic device. Sometimes the device is modelled as a logical circuit, and then the adversary can get to know some values on the wires during the computation. Sometimes it is not modelled this way, and then the leakage is some bounded-output function of the internal private state, the input, and so on. The second type is tampering attacks, where the adversary is a bit more active: she can influence the behaviour of the circuit, she can modify it. And the third type is hardware Trojans, where the adversary simply produces the cryptographic device. There are papers which investigate these problems in a very systematic way: for leakage there is the paper Private Circuits, for tampering Private Circuits II, and for Trojans there are also some papers, among them Private Circuits III. As this problem is very practical and very common, of course different solutions exist. The first group of solutions are physical countermeasures. This approach consists of many different methods, such as physical scanning, optical inspection, electromagnetic analysis, input-output analysis, and many others. Unfortunately, they are quite costly: we need to check every single circuit. And secondly, even if a circuit passes all the tests, we cannot be sure that we would have uncovered any malicious behaviour. There is another group of solutions; I call them cryptographic countermeasures, since they make use of some cryptographic tools to make the circuit more secure. They work as follows.
There is a user who has in mind some functionality F. She puts this F into a kind of compiler, and the compiler outputs a description of a device which realises the functionality F and is secure. As we can see, this device consists of two main parts: the one in the centre is a trusted component, and the other ones are untrusted; there can be a single untrusted component or many of them, it depends. The user sends the specifications to two manufacturers: the honest one, which produces the honest component, and the untrusted one, which produces the potentially malicious circuits. After receiving both components, the user assembles the device from them in a trusted manner and then can use it. Sometimes this is complemented by a testing phase, by which I mean input-output testing; I will tell you more about it later. What's important here is that we really need this trusted component. We can ask ourselves whether such a complicated procedure makes any sense if the user has access to a trusted manufacturer anyway. And the answer is: of course, it depends. The size of the trusted component must be much smaller than the size of a circuit that would simply realise the functionality F. Sometimes the user and the trusted manufacturer are the same entity: if the trusted component is very simple, we can make it at home. Now I will present the existing cryptographic solution against hardware Trojans. I think this solution is quite natural. The device realising the functionality F consists of two components: the trusted one is a verifier and the untrusted one is a prover. I would call it the verifiable-computation approach, and it works as follows. The user sends an input x to the verifier, the verifier forwards it to the prover, and the prover computes F(x) and sends the result together with a proof back to the verifier.
The verifier verifies, and if the verification succeeds, it sends y to the user; otherwise it signals an error. From now on we will live only in the world of digital hardware Trojans. Digital hardware Trojans are a subclass of hardware Trojans with some restrictions. The first is that a digital hardware Trojan has no auxiliary input-output channels. The second is that it has no physical blocks: no antennas, sensors, etc. Just to give you the flavour of it, we say that digital hardware Trojans behave maliciously only internally, by which I mean we can treat them as black boxes which have no communication with the environment other than what is provided by the specification. Here I would like to emphasise that even if the specification is stateless, a digital hardware Trojan can be stateful. There exists some research on the topic of stateful Trojans (here I have in mind the countermeasures against algorithm-substitution attacks), but it is not very interesting for us. Since the communication of a digital hardware Trojan is honest, input-output testing seems a very promising countermeasure. So now I'll tell you more about testing. If the device is tested before releasing it, we can say that the cycle of its life has two phases. The first is the lab phase. In the lab phase (here, this blue one is the device and here is a trusted implementation of the functionality F) the input x is given to both of them, and then we check whether the device and the trusted implementation output the same value. This is done some number of times; most often we bound the length of the lab phase by some constant. And then there is the wild phase, where the device is released and used. This phase is very simple: the device just receives some input x and outputs some y. I've called this simple testing. Let's take a look at how one can attack this protocol.
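Before looking at the attacks, the simple-testing life cycle can be sketched in a few lines of Python. This is a hedged sketch: all function names and the toy functionality are illustrative choices of mine, not from the paper.

```python
import random

def simple_testing(device, trusted, sample_input, num_tests, num_wild):
    """Life cycle of a device under simple input-output testing.

    Lab phase: outputs of the device are compared with a trusted
    implementation on random inputs; any mismatch means the Trojan
    is caught.  Wild phase: the device is then used as-is.
    """
    for _ in range(num_tests):                      # lab phase
        x = sample_input()
        if device(x) != trusted(x):
            return None                             # caught in the lab
    return [device(sample_input()) for _ in range(num_wild)]  # wild phase

# An honest device passes the lab phase and is then used in the wild:
trusted = lambda x: x + 1                           # toy functionality F
outputs = simple_testing(trusted, trusted,
                         lambda: random.randrange(100),
                         num_tests=50, num_wild=10)
```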
So first of all, in the wild, the adversary can influence the given inputs, and the device can deviate only on some very rare subset of the inputs. Then the deviation will not be revealed in the lab phase, but in the wild phase the device can start deviating. We call this type of attack a cheat code. The second type takes advantage of the fact that the number of tests is bounded: here is this red period where the adversary can deviate and will for sure not be caught. We call this type of attack a time bomb. Okay, so now we have some intuition about what can go wrong when we test, and we can have a look at the paper I promised, Private Circuits III, and how its construction works against these attacks. In that paper the following construction was proposed. The devices are tested in the lab phase independently, each for some random number of times. Then, in the wild, the following construction works. The untrusted parts are arranged in triples. The master in every step receives some input x; this input is secret-shared among each triple; the triples follow an MPC protocol and compute the value of F(x); and then the master circuit computes the majority of the outputs and outputs that majority. During the wild phase some of the triples may start deviating, but the probability that more than half of the triples start deviating during the wild phase is negligible, so the master always outputs the right value. Here I think this picture is a bit misleading, since the lab phase must be much longer than the wild phase. Now we can see how this works against the two types of attacks I mentioned before. Against cheat codes the secret sharing works: even if the adversary can influence the inputs in the wild, from the perspective of a single circuit it receives just random values. And there is desynchronization: by design, a circuit's lab phase can be very long or very short compared to the wild phase.
Since the length of the lab phase differs for every single circuit, we have a countermeasure against time bombs. So now we can move to our construction, to our research. We investigate very simple constructions, and now I will tell you more about the circuits in these constructions. First of all, the masters in simple constructions are very, very simple: they may contain only a few equality, multiplexer, and repetition gates. Secondly, there are no changes in the specification of the untrusted circuits. By this I mean that if we have some circuit which realizes the functionality F, then we can simply use it in our construction. This has two big advantages: first, we have no overhead in size or in time; and second, we can simply reuse already-produced circuits. I think the "without cryptography" part of the title of our paper comes from this: we have no heavy crypto here. What does the security game look like for simple constructions? Again, we have the lab phase, which lasts for some time, where by time I of course mean the number of tests; the total number of tests performed on all of the devices is bounded by some number T. Then we go to the wild. In the wild, the following construction works: the master is joined with every circuit separately; it receives some inputs and auxiliary randomness and then outputs a value. And what are the goals of the adversary in this game? First of all, she needs to survive the lab phase. As you remember, in the lab phase we check the outputs of the circuits against the trusted implementation, so surviving the lab phase means that no misbehaving is detected there. Secondly, the adversary wants to survive the wild phase. What does that mean? As you remember, the master circuit may contain equality gates.
So if the master gives one input to two different circuits and receives two different outputs, then the game stops immediately and the adversary just loses. Okay, up to now it's very easy: the adversary can simply produce honest circuits. So we have a third goal, which is to make the master output many wrong values. What is the main problem with simple constructions, now that we know the circuits and the security game? First of all, they are vulnerable to cheat codes. We have no countermeasure against cheat codes here, because the master passes the inputs on directly and, having only a few gates, cannot do anything with them (such as re-randomizing them). To get rid of this, we assume also that the inputs in the wild phase are i.i.d. So now we are ready to give a slightly more formal definition of the security game. The Trojan game has three parameters: the first is the construction P, the second is the bound T on the length of the lab phase, and the third is the length Q of the wild phase. The game has the following three steps. In the first step, the adversary chooses the functionality F and the Trojan circuits. In the second step, the lab phase is performed: it runs on i.i.d. inputs, and the outputs are verified against the trusted implementation; the length of the lab phase is a uniform random variable from zero to T. The third step is the wild phase, which is performed on i.i.d. inputs as well, and which is verified only by the cross-checks, that is, by the equality gates of the master. Okay, and again the goal of the adversary is not to be caught in the lab and wild phases and to make the master output many wrong values; she has to achieve both of these goals simultaneously. If any error is detected during the lab phase or the wild phase, the game immediately stops. Now I can tell you something about the limitations of simple constructions, because some exist. Consider the following adversarial strategy.
Let's say that she produces circuits which deviate on a one-over-T fraction of the inputs, and, what's important, all of them deviate on the same inputs. With this strategy the adversary survives the lab phase with probability about one over e; it's very easy math. And she survives the wild phase with probability equal to one. Why? Because all the circuits always answer in the same way on the same input, and the master can only check equality of the outputs. To state our result we need two more definitions. The first is winning the game: we say that an adversary (win, wrong)-wins the Trojan game if the master outputs more than a wrong fraction of wrong values, without the Trojans being detected, with probability greater than win. Okay, so this is the game, and here is the security definition. We say that P is (win, wrong)-Trojan-resilient if, for sufficiently large T and Q, no adversary (win(T), wrong(T))-wins in the Trojan game with parameters P, T, Q. Okay, maybe we need some time to parse it. Now I have some good news and bad news for you. For us, actually. As always, bad news first: for any c greater than zero, there exists a constant c' greater than zero such that no simple scheme P is (c, c'/T)-Trojan-resilient. It follows from the strategy presented two slides ago. The good news is that nothing worse can happen; what that means, I'll tell you in a few minutes. Just to give you the intuition for why our construction, which consists of 12 potentially malicious circuits, works, we will go through some simpler constructions. The first one is the construction for stateless Trojans. Here the number of tests is a uniform random variable bounded by T, and there is no need for any trusted master. This construction meets the optimal security parameters. I presented it before, when I was talking about testing, and there were two problems.
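As a quick aside, the "very easy math" behind that one-over-e figure: even if the lab phase runs for the full T tests, a Trojan deviating on a 1/T fraction of i.i.d. inputs survives with probability at least (1 - 1/T)^T, which tends to 1/e. A few lines of Python confirm it (T = 1000 is an arbitrary illustrative value):

```python
import math

T = 1000        # bound on the number of lab-phase tests (illustrative value)
p_dev = 1 / T   # the Trojans deviate on the same 1/T fraction of i.i.d. inputs

# Worst case: the lab phase runs for the full T tests.  The probability
# that no test input falls into the deviating set is (1 - 1/T)^T:
p_survive = (1 - p_dev) ** T

print(round(p_survive, 3), round(1 / math.e, 3))  # both are approximately 0.368
```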
The first one was cheat codes, but here there are no cheat codes, since the inputs in the wild are i.i.d.; and there are no time bombs, since the Trojan is stateless and cannot remember how many times it was used. And, as I said, adding a counter makes it fail. The second construction works for history-independent Trojans, in other words counter Trojans, by which I mean that their internal state only indicates how many times they were used in total; so the circuits have counters. Only one circuit is tested, as before, for some uniform random number of times, and in the wild the circuits are cross-checked in every single step. So in the lab phase one circuit was cross-checked with the trusted implementation, and in the wild the circuits are cross-checked with each other. This construction meets the optimal security parameters as well. It's not very interesting by itself, but we use it in our proof, for a reduction; I'll show you later where, maybe not exactly how, but where. And the proof that this is secure is not trivial. I'd like to emphasize that this construction is not secure against general stateful digital hardware Trojans. When the circuits are cross-checked in the wild in every single step, one strategy works very well: deviate once, deviate always. Why? Because, from the perspective of F1, for instance: if I deviated, there are two possibilities. The first is that I was cross-checked with F2, which, following the same strategy, also deviated; so we can both deviate from now on. The second possibility is that I was cross-checked with F2 and F2 did not deviate, and then the game is over anyway. And now I can show you the construction which we believe is secure against stateful digital Trojans. It's very similar to the previous one; the only difference is in the cross-checks in the wild. The circuits F1 and F2 are cross-checked only with quite small probability, and otherwise only one of them receives the input.
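A single wild-phase step of this randomized two-circuit construction might look as follows. This is a minimal sketch: the cross-check probability eps, the fair coin used for routing, and all names are my illustrative choices, not the parameters from the paper.

```python
import random

def wild_step(F1, F2, x, eps=0.1):
    """One wild-phase step: with small probability eps, both circuits get
    the input and their outputs are cross-checked; otherwise only one
    randomly chosen circuit computes.  (eps = 0.1 is illustrative.)"""
    if random.random() < eps:                 # rare cross-check step
        y1, y2 = F1(x), F2(x)
        if y1 != y2:
            raise RuntimeError("Trojan detected: outputs disagree")
        return y1
    return F1(x) if random.random() < 0.5 else F2(x)   # normal step

# Honest circuits never trigger a detection:
honest = lambda x: x + 1
results = [wild_step(honest, honest, x) for x in range(100)]
```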
Then, if F1 deviated, there are three possibilities: the two which I mentioned before, and a third one, that it deviated but wasn't cross-checked with F2, and the game is still not over. So, as I said, omitting circuits allows desynchronization. Okay, so now I'm telling you about the four-circuit construction, which is a building block for our 12-circuit construction. It works as follows. The circuits 1 and 3 are tested. The master in the wild gets two inputs and a random bit b, and according to the value of the bit b the inputs are distributed. After the distribution and after receiving the outputs, the master performs the cross-checks. If the cross-checks are correct, the master outputs the output of the first circuit, here. If they are not, the game stops. Here is an example of how the cross-checks work in the case of bit b equal to zero, and how it works if bit b is equal to one. Okay, so I think we are ready to present our technical result and the construction which achieves it. For any constant c greater than zero, there exists a constant c' such that the simple construction presented on the next slide is (c, c'/T)-Trojan-resilient. And this is how the construction looks. This picture was on the previous slide: the construction consists of three four-circuit sub-modules, and the master in every step receives three inputs and distributes them according to the value of the bit b, just as written here. What's important, all three outputs can be used. For history-independent Trojans, we need only two circuits to make a secure construction; I told you that it's not very interesting by itself, but we use it in our proof. Here, this yellow part of the picture shows the part which is reduced to the security of the two-circuit construction for history-independent Trojans. Okay, so one more sentence on why we do all of this.
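Before that, here is a sketch of one wild-phase step of the four-circuit sub-module. This is hedged: the concrete routing of the two inputs under the bit b below is my reading of the slide, not verified code from the paper.

```python
import random

def four_circuit_step(circuits, x1, x2):
    """One wild-phase step of the four-circuit sub-module (a sketch).

    A random bit b decides how the two inputs are routed to the four
    untrusted circuits; each pair that received the same input is
    cross-checked, and the first circuit's output is returned.
    """
    b = random.randrange(2)
    routing = [(0, 1, x1), (2, 3, x2)] if b == 0 else [(0, 2, x1), (1, 3, x2)]
    first_output = None
    for i, j, x in routing:
        yi, yj = circuits[i](x), circuits[j](x)
        if yi != yj:
            raise RuntimeError("cross-check failed: Trojan detected")
        if i == 0:
            first_output = yi          # the master outputs circuit 1's answer
    return first_output

# With four honest copies, the step simply evaluates F on the first input:
F = lambda x: 2 * x
```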
So, as I told you when I was describing our conjecture about the two-circuit construction secure against general stateful digital hardware Trojans: if the two circuits share the same history, they can benefit from the deviate-once, deviate-always strategy. In this construction they do not share the same history, so they cannot benefit from it anymore. I mean a similar situation: F1 is cross-checked with F2 and F3, but they do not share the same history of inputs; these histories are dependent, but not the same. This is the general reason why it works. I do encourage you to read the paper, because I think this proof is very nice. But of course, you can ask yourself why, since the security notion I presented is not great. By this I mean that it is meaningful, but at the same time it doesn't fit every application. So what are the possible applications of our constructions? I'd like to say only that there are two main restrictions. The first one is that there is a non-negligible fraction of wrong outputs with non-negligible probability, so we have to tolerate some fraction of wrong outputs, as long as there are not too many of them. Just to remind you, there was a one-over-T fraction of wrong outputs with constant probability. For instance, for PRGs we are actually not interested in the outputs being correct, but in their being indistinguishable from random; this is a possible direction of further research. The second restriction is that the inputs must be i.i.d. This can be relaxed, but still they cannot be controlled by the adversary. We currently work on stream ciphers and use this construction as a building block. So, just to compare our paper with the existing solutions. Here are the three methods which I mentioned in my presentation: the first is the MPC approach from Private Circuits III, the second is verifiable computation from Verifiable ASICs, and the third is this paper.
In the MPC solution, the lab phase is much longer than the wild phase, and the specification is very complicated. For verifiable computation, we have no liveness guarantee and the master circuit is very expensive. In our paper, we also have no liveness guarantee, and we have weaker security parameters. But what are the advantages? The advantage of the MPC solution is, of course, the liveness guarantee. For verifiable computation, the length of the wild phase is unbounded and the outputs, when provided, are always correct (well, almost). And in this paper, we also have an unbounded wild phase, the master circuit is very cheap, and, what's also important, we only need black-box access to the functionality. We'd be very happy to answer any of your questions. I probably won't be there in person, but I'll be available online during the session. Thank you very much for your attention. Bye.