Hello there. Welcome to my talk on subversion-resilient public key encryption with practical watchdogs for PKC 2021. My name is Pascal Bemmann and this is joint work together with Rongmao Chen and Tibor Jager. Back in 2013, the Snowden revelations showed that state-level adversaries were able to influence and manipulate the implementations of widely used cryptographic schemes. This started a series of scientific works showing how to model these kinds of attacks and also showing possible countermeasures. One possible countermeasure are the so-called reverse firewalls introduced by Mironov and Stephens-Davidowitz at Eurocrypt 2015. A reverse firewall is basically a proxy outside of the corrupted machine with access to good randomness. This machine then usually re-randomizes the traffic in order to remove any possible biases embedded in, for instance, signatures. Here I emphasize that it is usually the case that you use some re-randomizable primitive, but at Asiacrypt 2020 it was shown that there are ways around this, so you don't necessarily need re-randomizable primitives. Another model or approach are the so-called self-guarding schemes introduced by Fischlin and Mazaheri. There you have an honest sampling phase where your scheme behaves honestly and you can collect samples, for instance of signatures. Then the subversion takes place, the implementation changes, and you use the samples in order to sanitize subverted outputs. This is very similar in spirit to reverse firewalls, because you have some good source of randomness which you can use in order to remove biases. In this work, we will focus on the watchdog model, which was introduced by Bellare et al. in 2014, although the party was not yet called a watchdog there, and by Russell et al. at Asiacrypt 2016. A watchdog is a trusted party that tests subverted implementations before you use them. 
There are different classes of watchdogs, and in this work we will focus on offline watchdogs, where the watchdog does one-time offline testing and afterwards the security experiment with the adversary is executed. In general, in this watchdog model we have two phases. In the first phase, the watchdog tests the implementation provided by the adversary. For this, the watchdog is aware of a specification, so it knows the intended input and output behavior of the scheme considered. After the adversary provides its implementation, the watchdog can test via black-box access whether the implementation conforms to the specification. This is done via oracle queries, so it simply compares the input and output behavior. According to this test, the watchdog then either approves the implementation or discards it. In the second phase, we have a security experiment depending on which primitive you consider, but the main twist here is that instead of your honest implementation, you use the implementation provided by the adversary to compute, for instance, challenge ciphertexts. Of course, the question then becomes: what does security mean in this model? Previous works use asymptotic definitions where either a probabilistic polynomial-time watchdog has a non-negligible detection advantage, or the security guarantee of your underlying primitive holds. So one of the two always has to hold in order to be subversion-resilient. There are some potential problems with this kind of definition. First, you do not have any specific runtime for the watchdog, because with asymptotic definitions you just say, okay, there exists some polynomial-time watchdog; but especially if you wanted to deploy this in practice, it would not be clear for how long you would need to run your watchdog. Second, a non-negligible detection advantage might just not be enough for some specific use cases. 
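To make the two-phase idea a bit more concrete, here is a minimal Python sketch (with made-up names; the specification here is just a toy function, not any scheme from the talk) of an offline watchdog that compares an implementation against its specification via black-box queries:

```python
import secrets

def offline_watchdog(impl, spec, num_queries=128, input_len=16):
    """One-time offline test: query the adversary's implementation on
    random inputs and compare against the trusted specification.
    Returns True (approve) only if no deviation is observed."""
    for _ in range(num_queries):
        x = secrets.token_bytes(input_len)
        if impl(x) != spec(x):
            return False  # deviation detected: discard the implementation
    return True
```

Note that a subversion which deviates only on a single, adversarially chosen input would evade this kind of random testing with high probability, which is exactly the input-trigger problem the talk turns to later.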
For example, you could consider an investigative journalist who has to fear that a backdoor might reveal their identity. In that case, you would rather want a high or even overwhelming detection advantage rather than a non-negligible one. On the other hand, you could argue that you could simply boost existing constructions by repeatedly executing the watchdog in order to boost your detection advantage. However, this might induce a potentially huge testing overhead, especially if testing is not a one-time event. For example, consider an investigative journalist who might want to test their mobile phone every time before they use it. With these concerns in mind, in this work we propose a refinement of the watchdog model proposed by Russell et al. at Asiacrypt 2016. We use concrete security definitions together with a concrete bound on the watchdog's runtime. In our model, in order to be subversion-resilient, the watchdog needs to either detect the subversion with overwhelming probability, or the desired security guarantee must hold. Within this framework, we show different constructions. As an important building block, we show how to build subversion-resilient randomness generators. These are building blocks that provide random coins used within other building blocks, and we show that they can be constructed and tested in constant time. With this as a building block, we then show how to build efficiently testable CPA subversion-resilient public key encryption with the key generation algorithm and the encryption algorithm subject to subversion. However, I want to note that there are some drawbacks and restrictions to our construction. The first main drawback is that while we can test it efficiently, the efficiency in terms of public key and ciphertext size goes down. So we will have bigger public keys and ciphertexts. Additionally, our construction cannot handle stateful subversion. 
So the encryption algorithm, or rather its implementation, is not allowed to hold any state between executions. This seems inherent to limited-time watchdogs; reverse firewalls might be a more fitting tool for these kinds of subversion attacks. Before we dive into the details, I want to give you a rough impression of why this is hard to achieve. The main problem with limited-time watchdogs are exponentially big search spaces. The key space and the randomness space of, for instance, a public key encryption scheme must be exponentially big, because otherwise an adversary could simply brute-force a key. Thus a high detection advantage by simply testing many or enough keys is just infeasible. In addition, a big message space might be a problem because of so-called input triggers, where the implementation deviates from the specification only on a single message chosen by the adversary. Thus, even if the watchdog could test many times, the adversary would still have a high success probability while the watchdog would not have a high detection advantage. This holds if the watchdog has just black-box access to the primitive. Thus we use a different model, where the watchdog has more fine-grained access to the primitive considered. So what does this model assumption look like? Again, this is based on the work of Russell et al. from Asiacrypt 2016. In this model, we assume that components may be split into arbitrarily many building blocks. For example, instead of a monolithic encryption algorithm, you may have many smaller subcomponents which together form the encryption algorithm. We also assume that randomized algorithms can be split into two parts, namely one part which generates randomness and another part which is deterministic and takes random coins as input, and both parts can be tested individually. 
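As a toy illustration of that split (a sketch under my own naming, not the paper's scheme; `keygen_det` and the toy modulus are made up), a randomized key generation can be written as a deterministic core that consumes explicit coins, plus a separate randomness generator:

```python
import secrets

# Deterministic core: takes explicit random coins, so a watchdog can
# test it by fixing the coins and comparing outputs to the specification.
def keygen_det(coins: bytes):
    sk = int.from_bytes(coins, "big")
    pk = pow(5, sk, 2**127 - 1)  # toy discrete-log-style key pair
    return pk, sk

# The full randomized algorithm is the amalgamation of a randomness
# generator and the deterministic core; each part is tested on its own.
def keygen(rg=lambda: secrets.token_bytes(16)):
    return keygen_det(rg())
```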
With all these components available, the adversary provides an implementation of each individual building block. The watchdog, which is again aware of the specification of each building block, can test each building block against its specification. However, we need to obtain a working scheme again for the security experiment. Thus we have a trusted amalgamation which takes these building blocks and puts them back together to obtain a working scheme. This amalgamation is trusted, so it is not influenced by the adversary. However, since it is a trusted operation, we want to keep it as simple as possible. Of course, you could simply shift all the complexity into the amalgamation, but then you would have security by definition, and obviously you want to avoid that. So while keeping it as simple as possible, the amalgamation will do things like handing the output of one component to another, or re-executing a single component. The most advanced operation we will need in the amalgamation is an XOR function. So how does our model look? As said earlier, we use concrete security instead of asymptotic definitions. Also, the runtime of the watchdog is a dedicated parameter. The adversary provides the implementations of the individual building blocks, and the watchdog then tests them against the specification. If the watchdog detects some behavior that does not conform to the specification, it outputs a random bit. Since we consider indistinguishability games, this basically means the adversary loses and has advantage zero. Otherwise, if the watchdog approves the implementation, we execute the security experiment with the amalgamated components and output whatever the security game outputs. Within this model, we show how to construct subversion-resilient public key encryption. However, a very important stepping stone are subversion-resilient key encapsulation mechanisms, or KEMs for short. 
The idea of our construction is that we will sanitize the generation of randomness via the von Neumann extractor, which I will show in a bit. With random coins available, we will use several instances of a KEM in parallel and combine the keys via a trusted XOR function. We will see that this forms a secure KEM. And from this, we will see that if you combine the obtained key via a trusted XOR function with a message, we obtain subversion-resilient public key encryption. So now let's dive into the constructions and see how we can sanitize randomness via the von Neumann extractor, which was already introduced in 1951. This is what it looks like. We have two building blocks, namely RG, a randomness generator which outputs a random bit, and the von Neumann extractor VN. So the specification consists of RG and VN. The trusted amalgamation will execute RG twice in order to get two bits, which are then fed into the von Neumann extractor. So what does the von Neumann extractor look like? Well, here is the specification. It gets two bits as input and outputs zero if b0 is strictly smaller than b1, one if b0 is strictly bigger than b1, and an error symbol otherwise. In case the two bits are equal and the error symbol is output, the trusted amalgamation will simply re-execute the whole construction. RG and VN are subject to subversion, so the adversary provides implementations of both. How can a watchdog efficiently test this construction? Well, surprisingly, the randomness generator does not need to be tested at all; for security reasons, we can allow arbitrary biases. VN we can test perfectly: the watchdog queries VN on all four possible inputs and compares the outputs with the specification. Here it is important to note that we can test this perfectly because we additionally assume that implementations of deterministic algorithms must themselves be deterministic. 
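Here is a small Python sketch of the specification just described (the function names are mine): the extractor itself, the trusted amalgamation that re-executes on the error symbol, and the perfect four-input test a watchdog can run:

```python
import random

def vn_extract(b0: int, b1: int):
    """Von Neumann extractor: 0 if b0 < b1, 1 if b0 > b1,
    None (the error symbol) if the two bits are equal."""
    if b0 < b1:
        return 0
    if b0 > b1:
        return 1
    return None

def sanitized_bit(rg):
    """Trusted amalgamation: run RG twice, feed the bits into VN,
    and re-execute the whole construction on the error symbol."""
    while True:
        out = vn_extract(rg(), rg())
        if out is not None:
            return out

def watchdog_test_vn(vn_impl):
    """Perfect test: VN has only four possible inputs."""
    spec = {(0, 0): None, (0, 1): 0, (1, 0): 1, (1, 1): None}
    return all(vn_impl(b0, b1) == out for (b0, b1), out in spec.items())
```

Even a heavily biased RG, say one outputting 1 with probability 0.9, yields uniform output bits this way, as long as the two calls are independent and identically biased.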
With all this, it is not hard to show that if the two bits are different and the two bits have the same bias, you get uniformly random output. Of course, this only gives you a single random bit, but the trusted amalgamation can simply re-execute the whole construction in order to get arbitrarily many random bits. Thus, with random coins available to us, let's see how we can construct a subversion-resilient KEM. Our construction is based on the KEM combiners proposed by Giacon et al. at PKC 2018. In the world of KEM combiners, you have many instances of a KEM available, and the goal is to combine them in such a way that you obtain security if at least one instance is secure. In cryptographic combiners, usually this one secure instance is just given by assumption. For us, this is not the case; rather, the watchdog guarantees that at least one execution is honest by testing the used algorithms. The combiner is then part of the trusted amalgamation, and for KEMs it is simply an XOR function over the keys. So with this in mind, we have just two building blocks. The first is the von Neumann construction which I just showed you, and the other is just a regular KEM consisting of three algorithms: Gen, Encaps, and Decaps. This is a visualization of how the Gen algorithm looks. We have our von Neumann construction, which, as I said earlier, is re-executed until we get enough random coins, and the random coins are then fed into the Gen algorithm. It outputs key pairs, and these key pairs then form the final keys of our KEM. So sk consists of sk1 to skn, and pk consists of pk1 to pkn. With these keys available, let's see how we encapsulate keys. Again, we use the von Neumann construction to produce random coins. Afterwards, the random coins and the corresponding public keys are fed into the Encaps algorithm, which outputs a ciphertext and a key for each instance. All the ciphertexts are simply output. 
However, the keys are all fed into a trusted XOR function, so k is the XOR of k1 to kn. How can a watchdog efficiently test this? Well, as said earlier, the watchdog can test the von Neumann extractor in constant time. Additionally, it queries the Gen algorithm t many times on uniformly random inputs to obtain t many key pairs, and for each public key it computes ciphertexts. Again, all results are checked against the specification, and if anything is not according to the specification, the watchdog outputs a uniformly random bit. Then the most interesting question is: why is this secure? Well, we obtain security if, for the challenge ciphertext in the CPA security game, two conditions are fulfilled. The first is that one key pair was computed honestly, and the second is that the ciphertext computed under that key pair was also computed honestly. In the reduction, this allows us to embed the challenge ciphertext of the underlying KEM. Here we have an asymmetry between the watchdog and the adversary which allows for this result. The watchdog, on the one hand, only needs to observe one subverted output during testing so that the adversary loses. The adversary, however, to be successful in the security game, needs every KEM building block to output a subverted ciphertext. Thus it is much easier for the watchdog to succeed than for the adversary. With this being said, we obtain roughly the security of the underlying KEM if the number of calls to the Gen and Encaps algorithms is equal to the number of used KEMs, which in turn equals the security parameter. You can see that, obviously, the efficiency goes down, because you need this many public keys and ciphertexts. Additionally, some trade-offs are possible between the number of KEMs and the watchdog's runtime. For example, you can halve one value while doubling the other, so several trade-offs are possible here. With KEMs available, how do we obtain public key encryption? 
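The wiring of the combiner can be sketched as follows (a toy one-time-pad-style "KEM" serves purely as a stand-in so the sketch is self-contained; all names are mine, and the actual construction works with any KEM, as long as the watchdog guarantees at least one honest instance):

```python
import secrets

KEY_LEN = 16  # toy key/ciphertext length in bytes

def toy_gen(coins: bytes):
    """Stand-in key generation, just for wiring up the combiner."""
    sk = coins
    pk = bytes(b ^ 0xAA for b in sk)
    return pk, sk

def toy_encaps(pk: bytes, coins: bytes):
    """Stand-in encapsulation: 'hides' the key k under pk via XOR."""
    k = coins
    c = bytes(a ^ b for a, b in zip(k, pk))
    return c, k

def toy_decaps(sk: bytes, c: bytes):
    pk = bytes(b ^ 0xAA for b in sk)
    return bytes(a ^ b for a, b in zip(c, pk))

def xor_all(keys):
    """Trusted XOR over all instance keys."""
    out = bytes(KEY_LEN)
    for k in keys:
        out = bytes(a ^ b for a, b in zip(out, k))
    return out

def combined_gen(n, rg=lambda: secrets.token_bytes(KEY_LEN)):
    """Run n KEM instances: pk = (pk1..pkn), sk = (sk1..skn)."""
    pairs = [toy_gen(rg()) for _ in range(n)]
    return [pk for pk, _ in pairs], [sk for _, sk in pairs]

def combined_encaps(pks, rg=lambda: secrets.token_bytes(KEY_LEN)):
    """Encapsulate under every pk; output all ciphertexts and the
    XOR of all keys."""
    results = [toy_encaps(pk, rg()) for pk in pks]
    return [c for c, _ in results], xor_all([k for _, k in results])
```

In the real construction, `rg` would be the von Neumann construction and the XOR would sit inside the trusted amalgamation.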
Well, we use our KEM as a building block and then simply take this c and k and XOR the message m that needs to be encrypted with the key k. Then, as the encryption, we output c and k XOR m. This seems necessary in order to avoid input-trigger attacks, because the message is only fed into a trusted component. Obviously, this has the downside that the message size and the key size need to be identical. However, for the first time, we see that public key encryption can be constructed so that it is efficiently testable. So, to wrap things up, what are the key take-home messages from this talk? We propose a new model with concrete security based on the work of Russell et al. For the first time, we focus on minimizing the watchdog's runtime, and we showed several constructions for this. For one, we showed that the von Neumann extractor can be checked in constant time, giving access to uniformly random coins. And as an important stepping stone, we showed how secure KEMs can be constructed via KEM combiners. This gave leverage to constructing subversion-resilient public key encryption which can be efficiently tested. If you want to look at our paper, feel free to visit the eprint link down below, and if you have any questions or comments, feel free to contact me via the email address provided below. Thank you very much for your attention.
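That final KEM-to-PKE step can be sketched like this (again with a made-up stand-in `kem_encaps`; in the actual construction this would be the subversion-resilient KEM just described):

```python
import secrets

KEY_LEN = 16

def kem_encaps():
    """Stand-in for the subversion-resilient KEM's Encaps."""
    k = secrets.token_bytes(KEY_LEN)
    c = k  # placeholder ciphertext; a real KEM hides k inside c
    return c, k

def encrypt(m: bytes):
    """Encrypt by XORing the message with the encapsulated key.
    The message only ever touches the trusted XOR, which rules out
    input-trigger attacks, but forces |m| == |k|."""
    assert len(m) == KEY_LEN
    c, k = kem_encaps()
    return c, bytes(a ^ b for a, b in zip(k, m))
```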