Good afternoon, everyone. Thank you very much for the introduction. As already said, I'm going to talk about non-malleable codes for space-bounded tampering. So first of all, what are non-malleable codes and what are they good for? A non-malleable code is a coding scheme. First, we require correctness: if we encode a message and then run decode, we get the original message back. The second property is non-malleability. Informally, it says that if we take a message M, encode it, and then apply a tampering algorithm A to the codeword, we obtain some tampered codeword C prime, and there are three options for this C prime. Either it decodes to the original message M; or it is an invalid codeword, in which case the result of decoding is a failure symbol; or, third, C prime decodes to some M prime which is unrelated to M, meaning informally that it does not contain or reveal any information about the original message M. Let me emphasize here that we always define non-malleable codes with respect to some tampering class A; I will talk about this tampering class in more detail later. So what are non-malleable codes good for? Let me demonstrate it with the following example. Assume we have a device that executes some cryptographic functionality, for example a digital signature, and the secret key is stored in the memory of the device. What happens if someone tampers with the device? By that I mean a tampering function is applied to the content of the memory, so the secret key SK gets tampered to SK prime. Now the input-output behavior can reveal information about the original secret key. This was demonstrated by Boneh, DeMillo, and Lipton in 2001 with the famous fault-injection attack on RSA digital signatures: they showed that injecting a single random fault reveals the entire secret key.
So we have to be concerned about memory tampering, and we need to look for ways to protect against it. One way to protect any cryptographic functionality is via non-malleable codes. How does it work? Instead of storing the secret key in the memory of the device, we store an encoding of the secret key. Of course, in order to run the functionality we first have to run decode, but by correctness of the non-malleable code the input-output behavior does not change. Now what happens if someone tampers with the memory of our device? By the definition of non-malleable codes, there are three options. Either the tampered codeword C prime decodes to the original secret key, in which case the tampering was useless and the adversary did not learn anything. In the second case, C prime is an invalid codeword; the tampering is detected and the device would typically self-destruct. The third option is that SK prime and SK are unrelated, in which case the input-output behavior does not reveal any information about the secret key. So this is the high-level idea of why non-malleable codes can protect against memory tampering. The big question is: against which tampering algorithms can we protect? What about this tampering class A? Can we hope to protect against all possible, or all efficient, algorithms? Unfortunately not, and here is the attack. Suppose the tampering function can run decode to learn the original message M. Then it changes M to some related message M prime, for example by flipping one bit, and then it runs encode on M prime to obtain C prime. So this tampering algorithm has produced a codeword C prime which decodes to a related message M prime. What is the conclusion of this attack? We have to restrict the tampering algorithms: we cannot allow all efficient tampering algorithms, and in particular we can never allow the tampering function to run both decode and encode.
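The generic attack just described can be sketched in a few lines. The toy encode/decode below (a message plus a single parity byte) is purely illustrative and stands in for an arbitrary coding scheme; only the three-step structure of the attack reflects the argument:

```python
def encode(msg: bytes) -> bytes:
    # Toy codeword: message followed by one XOR-parity byte (illustration only).
    parity = 0
    for b in msg:
        parity ^= b
    return msg + bytes([parity])

def decode(codeword: bytes):
    msg, parity = codeword[:-1], codeword[-1]
    check = 0
    for b in msg:
        check ^= b
    return msg if check == parity else None  # None models the invalid-codeword symbol

def trivial_tamper(codeword: bytes) -> bytes:
    m = decode(codeword)                    # step 1: decode to learn the message
    m_related = bytes([m[0] ^ 1]) + m[1:]   # step 2: flip one bit, a related message
    return encode(m_related)                # step 3: re-encode to a valid codeword
```

Running `trivial_tamper` on an encoding of the secret key yields a valid codeword for a related message, which is exactly the outcome non-malleability must rule out; this is why no tampering class may contain functions that run both decode and encode.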
So what are the possible tampering classes assumed in the literature? We can classify them into two main groups: granular tampering and global tampering. What do I mean by the first category? Let's look at this example. Granular tampering means the tampering functions act on individual components of the codeword. In the picture, you see the codeword split into two parts; we can tamper with both parts, but only independently. For people familiar with the topic, this was an example of the split-state model. What I want to point out is that in granular tampering, the tampering functions can never run decode, because they never see the entire codeword. In global tampering, on the other hand, the tampering function sees the entire codeword, so it makes sense to distinguish whether the tampering function can run decode or not. All previous works on non-malleable codes with global tampering do not allow the tampering function to decode. Our focus is on the second case, where we do allow the tampering function to run decode, because in many natural tampering scenarios this is indeed desirable. Why is that? Let me demonstrate it with the following example. Assume we have a mobile phone, or some other very limited device, and someone tampers with this device, say a virus that infected our phone. We need to put some restriction on this virus. A natural restriction would be to say: the virus can use all the space and all the resources available on the device, but no more. What would it mean to say that the tampering function cannot run decode in this scenario? It would mean that decode cannot run on the device itself, so we could not execute the original functionality. That is not what we want, and it is a limitation in this specific case.
Instead, we want to run decode on the device, but that implies the tampering function can run decode as well. This motivates our model, where we restrict the tampering function by the space, that is, the memory, of the device. In other words, our tampering class consists of all tampering algorithms that are S-space-bounded for some parameter S. We want the space complexity of the decode algorithm to be at most S; these two conditions together imply that the tampering function can decode. And of course, we cannot go against the impossibility result, so we want the encoding algorithm to require more space than S. Very briefly, why does the trivial attack I showed at the beginning not work here? The tampering algorithm can run decode, and it can change the message to a related message M prime, but the third step fails: it does not have enough space to run the encoding algorithm. That is the main idea. The rest of my talk will be slightly more technical. First, I will explain why in this model we cannot achieve full non-malleability, the classical non-malleability definition. That led us to define the notion of leaky non-malleable codes. At the end of my talk, I will briefly discuss our code construction. So first, let me be slightly more formal and explain non-malleability in more detail. We have a real world and an ideal world. In the real world, we have the following tampering experiment. The message M gets encoded into a codeword; we apply a tampering function A; and if the tampered codeword C prime decodes to the original message M, the tampering experiment outputs a special symbol, say "same". Otherwise, it simply outputs the result of decoding, this M prime. The classical definition of non-malleability says that for any tampering function from the class, there exists a simulator which, without knowing the original message M, can simulate this tampering experiment.
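The real-world tampering experiment can be written down directly. This is a hedged sketch: `SAME` stands for the special "same" symbol, `None` models the invalid-codeword symbol, and encode/decode/tamper are passed in abstractly:

```python
SAME = "same"  # special symbol: tampered codeword still decodes to the original m

def tamper_experiment(encode, decode, tamper, msg):
    c = encode(msg)           # encode the message into a codeword
    c_prime = tamper(c)       # apply the tampering function A
    m_prime = decode(c_prime)
    if m_prime == msg:
        return SAME           # case 1: decoding gives back the original message
    return m_prime            # case 2: None (invalid), or case 3: a different message
```

Non-malleability then asks that for every tampering function in the class there is a simulator reproducing the output distribution of `tamper_experiment` without ever seeing `msg`.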
So why can we not achieve this full non-malleability? Here is an attack. Assume we have a tampering function which has two different codewords, C0 and C1, hardwired or precomputed, encodings of two different messages. The tampering function decodes to learn the message M, looks at the last bit of this message, and depending on this last bit tampers either to C0 or to C1. In this way, exactly one bit of the original message is leaked. Intuitively, why can the simulator not simulate this tampering experiment? It does not know the message M, so it can only guess the last bit. This attack can actually be extended: the tampering function can have polynomially many hardwired codewords, and thereby leak logarithmically many bits of the original message. This motivates our leaky non-malleable codes, a slightly weaker notion. What is the difference? Now we allow the simulator to learn some bits of the message M. More formally, the simulator has access to a leakage oracle, and in total it can learn L bits of the original message M. Let me say a few words about this leakage parameter L. First of all, if L equals K, the bit length of the original message, then simulation is trivial, because we can simply leak the entire message and simulate the tampering experiment. So this is not a very interesting case. On the other hand, as I tried to explain with the attack, logarithmically many bits can always be leaked. This was our goal, and we actually achieve it with our code construction, which I will talk about in a bit. You might wonder whether this leaky notion of non-malleable codes still suffices for the main application of non-malleable codes: can we protect against memory tampering with leaky non-malleable codes? And the answer is: for some functionalities, yes.
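The one-bit attack can be made concrete. The identity encode/decode below is a placeholder, purely for illustration; what matters is that the tampering function hardwires two precomputed codewords and selects between them based on one bit of the decoded message:

```python
def encode(msg: bytes) -> bytes:
    return msg  # placeholder encoding, illustration only

def decode(codeword: bytes) -> bytes:
    return codeword

# Precomputed (hardwired) codewords for two fixed, publicly known messages.
C0 = encode(b"fixed message 0")
C1 = encode(b"fixed message 1")

def bit_leaking_tamper(codeword: bytes) -> bytes:
    m = decode(codeword)          # allowed: decoding fits within the space bound
    last_bit = m[-1] & 1          # inspect the last bit of the message
    return C1 if last_bit else C0 # the choice of codeword encodes that bit
```

The experiment's output is now one of two known messages whose identity equals the last bit of M, so any simulator must be granted at least that one bit of leakage; with polynomially many hardwired codewords the same trick leaks logarithmically many bits.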
So if the functionality is resilient against L bits of leakage on the secret key, then we can protect it with our L-leaky non-malleable code. For many applications, this is still good enough. Let me now switch to our code construction, its main building block, and why it satisfies leaky non-malleability. The main building block of our construction is a non-interactive proof of space. There have been several works on proofs of space and on the non-interactive variant; all of them are in the random oracle model and based on graph pebbling and graph labelling techniques. What is the main idea? I don't want to go into too much detail about non-interactive proofs of space, but at a high level: we have a prover, which is supposed to have a lot of space, and it wants to convince a space-bounded verifier that it indeed has a lot of space. How does it do that? On challenge M, it generates a proof, and this generation requires a lot of space, and it sends this proof to the verifier. The space-bounded verifier is then able to check whether the prover indeed used a lot of space. So the first property, as I just said, is efficient verification, efficient in terms of space. The second property is completeness: if both parties behave honestly, the verifier is convinced. The third property is soundness: if the prover does not have enough space and cheats, then with high probability the verifier is not convinced. Actually, for our non-malleable code we need a definition stronger than soundness, a special form of soundness which we call extractability. I refer to our paper for more details about extractability, and for the proof that the construction of Ren and Devadas actually satisfies our extractability definition. So how do we build a non-malleable code from a non-interactive proof of space? It is actually very simple.
The encoding algorithm gets as input a message M and runs the prover, the proof-generation algorithm, to generate a proof of space; this step requires a lot of space. It then outputs the message in plaintext with the proof of space attached. How does the decoding algorithm work? It first parses the codeword into a message and a proof of space, and runs the verification algorithm. If the verification algorithm outputs one, it simply outputs the message M; otherwise it declares the codeword invalid. The main theorem of our work is: if we have a non-interactive proof of space for some parameter S, then the construction I just explained is a leaky non-malleable code with respect to all S-space-bounded algorithms. I would like to emphasize that our construction is in the random oracle model. What is the idea of the proof? First of all, correctness of the scheme follows directly from completeness of the non-interactive proof of space. The second part, non-malleability, is more interesting. Let me recall what we need to prove: for every S-space-bounded tampering algorithm, there exists a simulator which can simulate the tampering experiment with only logarithmically many bits of leakage on the original message. There are three ways this tampering function can tamper. Either it tampers to a codeword which decodes to the original message M, or to an invalid codeword, or, third, to a codeword which decodes to some different message M prime. Let me discuss just the third case, which is the most interesting one. What does it mean for a space-bounded tampering algorithm to tamper to such a codeword C prime? By the construction, it needs to know a proof of space that verifies with this message M prime. How can it know one? There are three options.
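The shape of the construction is short enough to sketch. Here `prove`/`verify` are hypothetical stand-ins for the non-interactive proof of space (a hash, rather than a real graph-labelling scheme where `prove` genuinely needs space more than S and `verify` runs in space at most S); only the structure of `encode` and `decode` reflects the construction:

```python
import hashlib

def prove(msg: bytes) -> bytes:
    # Stand-in prover: in the real scheme this step requires space > S.
    return hashlib.sha256(b"proof-of-space|" + msg).digest()

def verify(msg: bytes, proof: bytes) -> bool:
    # Stand-in verifier: the real verifier runs in space <= S.
    return proof == hashlib.sha256(b"proof-of-space|" + msg).digest()

def encode(msg: bytes) -> bytes:
    # Codeword = message in plaintext with a proof of space attached.
    return msg + prove(msg)

def decode(codeword: bytes):
    msg, proof = codeword[:-32], codeword[-32:]  # 32 = sha256 digest size
    return msg if verify(msg, proof) else None   # None = invalid codeword
```

Decoding only runs the space-efficient verifier, so it fits within the bound S, while producing a fresh valid codeword requires the space-heavy prover, which is exactly what an S-space-bounded tampering function cannot run.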
It can guess it, it can compute the proof of space, or it can have it precomputed, hardwired in its code. The second option is ruled out by the proof of space; that is the key idea: the tampering function does not have enough space to compute the proof. It can always try to guess, but it will be correct only with negligible probability. So let me demonstrate how we build a simulator for this third case. The tampering algorithm can have polynomially many hardwired codewords, as I said a few slides ago. By extractability of the non-interactive proof of space, we can create a table that contains all these hardwired messages. This step requires some technical arguments, for which I refer to our paper, but let me give you the high-level idea of how the proof works. Having extracted the table of hardwired messages, the simulator asks the leakage oracle the following function: it simulates the tampering algorithm to learn M prime, searches the table for this M prime, finds its index, and leaks the binary representation of that index. So it essentially leaks the index in the table. The simulator can then look up this index in the table, reconstruct M prime, and output it, and the simulation is correct in this case. This was a very high-level idea, and our proof can easily be extended to repeated, adaptive tampering, not just the one-time tampering I explained during my talk. We then achieve a trade-off between the number of tampering rounds and the leakage: for every tampering round, we need logarithmically many bits of leakage. So let me quickly summarize my talk. First of all, we consider global tampering, where the only constraint, the only restriction, we put on the tampering algorithms is the space they are allowed to use: they are space-bounded.
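The simulator for the third case can be sketched as follows. The interface here is an assumption made for illustration: `leak_oracle(f)` evaluates a function `f` of the hidden message M and charges the simulator for the output length in bits, and `table` is the list of messages extracted (via extractability) from the tampering algorithm's hardwired proofs:

```python
def simulate_third_case(table, tamper, encode, decode, leak_oracle):
    # Leakage function: run the tampering experiment internally on the
    # hidden message M and return the index of the resulting message
    # m' in the extracted table.
    def index_of_tampered(M):
        m_prime = decode(tamper(encode(M)))
        return table.index(m_prime)

    idx = leak_oracle(index_of_tampered)  # costs about log2(len(table)) bits
    return table[idx]                     # simulator reconstructs and outputs m'
```

Since the table has polynomially many entries, the index fits in logarithmically many bits, which is exactly the leakage budget the theorem allows per tampering round.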
We show that full non-malleability is impossible to achieve in this model. Therefore, we introduce leaky non-malleable codes and give a very simple construction based on a non-interactive proof of space. Finally, we also show that for many applications it can still protect against memory tampering. So this is all. Thank you very much for your attention.