Thank you. So this is joint work with Abhishek, Anat, and Slava, and we looked at interactive non-malleable codes. So what are interactive non-malleable codes? We introduced this notion in the paper, and to explain it, it is helpful to first recall what regular non-malleable codes are. The notion of non-malleable codes was first introduced by Dziembowski, Pietrzak, and Wichs in 2010, and it's basically a code: we have a message, we encode it, we can decode it again, and if the code works correctly, we get the same message back. But somebody could tamper with the codeword after it has been encoded. This attacker applies some tampering function f and changes the codeword, which might result in a different message m' being decoded. Now, if this code were error-correcting, and if f falls into the class of functions against which the code is error-correcting, then we would have a guarantee that m' is actually the same as m. However, if the code is not error-correcting, this is not the case. The problem is that error-correcting codes only exist for relatively small classes of tampering functions, so Dziembowski et al. introduced non-malleable codes. The idea of a non-malleable code is that we have a weaker guarantee. When a tampering function tampers with the codeword, the whole process results in some distribution over possible decoded messages, together with an error symbol, because the decoder could just say: this is not a valid codeword, I will not decode it. A non-malleable code guarantees that what you get is either still the original encoded message, or something that is completely unrelated to it. So what we are trying to prevent is an attacker modifying the codeword in a way that causes a related message m' to be decoded.
So how do we formally define this? The idea is basically that there exists some distribution D_f that we can sample from, which is sampled independently of the message m but may depend on the function f, such that the distribution actually produced by the tampering experiment is statistically close to D_f. This doesn't quite work as stated, because if you use, for example, the identity function as the tampering function, the tampering experiment constantly outputs the message m, which obviously depends on m. So instead, the definition says that the support of D_f includes a special symbol called "same", and whenever this symbol occurs, we replace it with the message m. These two distributions should then be statistically close. If you have that, then you have a non-malleable code.

So what does it mean to have an interactive non-malleable code? Instead of encoding a single message, we want to encode an interactive protocol. So we have an interactive protocol between Alice and Bob. They have inputs x and y, they interact for, say, three rounds, and at the end they output some function of their combined inputs. Maybe only one of them outputs something; it doesn't matter. If this is a non-trivial protocol, then the output of at least one party depends non-trivially on the input of the other party, because otherwise you could skip the interaction and just output whatever you want to compute. And we want to encode such a protocol. How do we define encoding a protocol? Well, we say we have two simulators, which now play the role of an encoder. These simulators interact with black-box versions of Alice and Bob, which hold the inputs x and y, and the two simulators communicate in some protocol.
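As a toy illustration (my own, not from the talk), the tampering experiment and the "same"-symbol patching can be sketched in Python. A trivial 3-fold repetition code stands in for an actual non-malleable code, purely so the experiment is runnable:

```python
# Toy illustration of the non-malleable-code tampering experiment.
# Enc/Dec form a trivial 3-fold repetition code over bit-tuples -- a
# stand-in for an actual non-malleable code, chosen only so the
# experiment is runnable.

def enc(m):
    return m * 3  # m is a tuple of bits

def dec(c):
    n = len(c) // 3
    b0, b1, b2 = c[:n], c[n:2*n], c[2*n:]
    if b0 == b1 == b2:
        return b0
    return None  # the error symbol: "not a valid codeword"

def tamper_experiment(m, f):
    """The real experiment: encode m, tamper with f, decode."""
    return dec(f(enc(m)))

def patch(sample, m):
    """Replace the special symbol 'same' with the actual message m."""
    return m if sample == "same" else sample

m = (1, 0, 1)
# With identity tampering, the simulator's distribution D_f would just
# output 'same', and patching recovers m -- matching the real experiment.
assert tamper_experiment(m, lambda c: c) == patch("same", m)
# A single flipped bit makes the repetition blocks disagree, so the
# decoder outputs the error symbol.
assert tamper_experiment(m, lambda c: (c[0] ^ 1,) + tuple(c[1:])) is None
```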
This protocol may have more rounds than the original protocol we are trying to encode. At the end, each simulator outputs a transcript. This transcript is supposed to be the same on both sides if the code works correctly, and it is supposed to be the unique transcript that would have resulted if Alice and Bob had just communicated directly with one another. This means we are looking at deterministic protocols; if you have a randomized protocol, just treat the randomness as part of the inputs and everything goes through again. So again, somebody could tamper with this encoded protocol: a tampering function could try to change one or more of the messages. This means we get some output distribution from S0 and S1, and we look at the joint distribution of their outputs, for which we again define non-malleability. We define this very similarly to regular non-malleable codes: there exists some distribution we can sample, over a set that also contains the special symbol "same", and if this symbol occurs, we replace it with the actual transcript that would have resulted if Alice and Bob had communicated directly. The resulting distribution should be statistically close to the distribution that results from actually tampering with the protocol. So this is an interactive non-malleable code. The question, of course, is for which classes of tampering functions can we achieve such codes? And the first observation is that for arbitrary tampering functions, it is not possible.
If you have an interactive non-malleable code that is supposed to work against arbitrary tampering functions, that's a problem: an attacker in the middle that can arbitrarily tamper with all of the messages can just pretend to be S1 when speaking to S0 and pretend to be S0 when speaking to S1, using some fixed arbitrary inputs y' and x'. The two simulators will then honestly compute transcripts, namely the transcript for (x, y') and the transcript for (x', y), which obviously depend on x and y, unless you have a weird protocol whose output does not depend on its inputs at all, which would again be a trivial protocol we can ignore. So in general, the distribution that results from this tampering cannot be sampled independently of x and y, even approximately, and this type of INMC against arbitrary tampering cannot exist.

So which other classes of tampering functions could we look at? There is a relatively large body of work on something called interactive coding, introduced by Schulman in the early '90s, which is essentially error correction for interactive protocols, and it looks a lot at threshold tampering. In threshold tampering, the tampering function can still arbitrarily tamper with your transcript, but it is limited in how many bits it can flip. So if your whole transcript contains, say, 400 bits and your threshold is one quarter, then only 100 of those bits can be flipped by the tampering function. And we know that there are lower bounds for interactive coding, that is, for error correction against these tampering functions. We know them for non-adaptive interactive codes, which basically just means that the protocol does not try to detect that it is being tampered with and adapt to that fact.
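To make the threshold restriction concrete, here is a minimal Python sketch (my own illustration; the function name is hypothetical): a tampering function that flips a chosen set of bits, admissible only if it changes at most a theta fraction of the transcript.

```python
def threshold_tamper(transcript, flip_positions, theta):
    """Flip the bits at flip_positions in a bit-list transcript,
    but only if at most a theta fraction of the bits is changed --
    the defining restriction of threshold tampering."""
    flips = set(flip_positions)
    if len(flips) > theta * len(transcript):
        raise ValueError("tampering exceeds the threshold")
    return [b ^ 1 if i in flips else b for i, b in enumerate(transcript)]

# The example from the talk: a 400-bit transcript with threshold 1/4
# allows at most 100 flipped bits.
transcript = [0] * 400
tampered = threshold_tamper(transcript, range(100), 0.25)
assert sum(tampered) == 100  # exactly the allowed 100 bits were flipped
```

Flipping a 101st bit would exceed the quarter threshold and be rejected.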
If the protocol does not adapt in this way, then Braverman and Rao showed in 2011 that there is a lower bound of one quarter, so you cannot achieve error correction for threshold tampering functions with thresholds of one quarter or more. Now, if such an interactive code actually does try to detect that it is being tampered with and modifies its structure on the fly, which basically means that one party might decide to send less information to let the other party send more, then this lower bound no longer applies. But there is a different lower bound of two sevenths by Ghaffari et al. So we still have a lower bound here, and the obvious question is: since non-malleability is obviously weaker than error correction, can we do better for INMCs? Sadly, the answer is no, and this is basically our first result. The idea is to lift the lower bounds from error correction to non-malleability, and to do that we have two key observations: the lower bounds for interactive coding hold even for inefficient interactive codes, meaning the simulators do not have to be efficient machines and can, for example, run in exponential time; and they also still hold in the presence of a common reference string, so a CRS does not help to get around them. How does that help us? Well, suppose we have such an encoding of a protocol. Because we are encoding a non-trivial protocol, the output of, say, Bob depends in some way on Alice's input. This means there must be some first message in the encoded protocol that reveals some non-trivial information about Alice's input. Why is that the case?
Well, Bob needs to somehow extract the information he needs to compute his output from this transcript. And if an eavesdropper could not also extract this information, then you would have an information-theoretically secure bit transfer protocol, which would basically give you an information-theoretically secure key exchange, which you can't have. So we know that if a non-trivial protocol is being encoded, there exists this first message from which we learn some non-trivial information h(x, y). What can we do with this idea? Well, we now know that the messages before this point are independent of the inputs of the two parties, which means we can just shorten our simulators and say: we no longer exchange these messages. The two parties still need to agree on what those messages are, but the messages do not need to be exchanged, because they are independent of the actual inputs. What this means is that we push this prefix into the CRS: the CRS can be sampled by just executing the protocol for uniformly chosen inputs x and y. Of course, there is still a problem: while this prefix does not depend on the inputs, it may depend on some internal secret state of the two simulators. However, because we do not require the simulators to be efficient, they can just use exponential time to sample consistent internal states, and once they have done so, they can continue running the protocol as if that had been what they were doing all along. So what does this mean?
It means that if you have an original INMC (S0, S1) for some threshold t, then the modified, shorter protocol (S0', S1') is also an INMC for some threshold t', which must be greater than or equal to t; in particular, it is also an INMC for threshold t. The problem, of course, is that it is now inefficient, so you couldn't actually use it, and it is in the CRS model. However, we said that the lower bounds for interactive coding still apply in this case, and it turns out that this protocol also has to be error-correcting. Because if it weren't, there would exist some way in which a tampering function can cause an abort. And if a tampering function can cause an abort, then we can define a tampering function that looks at the non-trivial information revealed by this very first message and, say, causes an abort if this information is one and does not cause an abort otherwise. The distribution caused by this tampering obviously depends on non-trivial information about the inputs, which means it cannot be even approximately sampled without knowledge of x and y. So the lower bounds for interactive coding apply even to interactive non-malleable codes. That's unfortunate, so the question is: can we do better for other classes of tampering functions?
We look at three classes of tampering functions, and for each one we give an explicit protocol for encoding arbitrary protocols against that class. The first class we call bounded-state: the tampering function can still tamper arbitrarily, but it can only keep s bits of state from tampering with one message to tampering with the next. In particular, this means it can only remember s bits of the previous messages, or some function of the previous messages, as long as it fits in s bits, and we can use this restriction on the tampering function to design INMCs. The second class we look at are so-called unbalanced split-state tampering functions. Split-state tampering functions are actually the most studied class of tampering functions for regular non-malleable codes; there you say: I take my codeword and split it into two parts, and you can tamper with all of it, but you have to tamper separately with the left part and the right part. For interactive non-malleable codes, we define a stronger class of tampering functions where the transcript is still split into two parts, but the tampering function can actually choose how to split it: it decides, for each message of the protocol, whether it belongs to the set I0 or the set I1. The tampering function can then tamper jointly on I0 and jointly on I1, but it has to tamper separately on the two sets. It is called unbalanced split-state because the sets do not need to be balanced: they do not have to have the same size, and one of them can be much larger than the other, but we need to know a priori some limit on how unbalanced they are. Basically, the smaller of the two sets needs to contain at least a c fraction of the total number of messages, but we can give protocols for
arbitrary values of c; we just need to know it beforehand. The third class of tampering functions we call fragmented sliding window. A sliding-window tampering function with window size w can only depend on the last w messages; it is called fragmented because the window does not have to be contiguous: the tampering function can depend on w messages, but it can choose freely which w messages it wants to remember. For all of these classes we give explicit protocols, and all of those protocols follow a common approach. The idea is that we want to run the protocol we are trying to encode under symmetric authenticated encryption. Because this is an information-theoretic setting, we need information-theoretically secure authenticated encryption, which is not a problem: we can use a one-time pad and information-theoretically secure MACs. But we need a key for that, which is obviously a bit hard to come by. We can, however, run a kind of key exchange. Now, a general key exchange in the information-theoretic sense is not possible, but we can use the structural restrictions we put on the tampering functions to design key-exchange phases such that the exchanged key is almost uniform from the point of view of the tampering function in any future round. As long as we can guarantee that, the rest is actually pretty simple. After the key exchange, the parties check that they actually agreed on the same key; this key-confirmation step is not strictly necessary, it just simplifies the proof because it gives us a single round to focus on in our analysis. Once that is done, we do the protocol execution, which, as I said, means we run the protocol as originally specified, except that we encrypt each protocol message using a one-time pad and authenticate it with an
information-theoretically secure message authentication code. This of course means that we need to exchange a lot of key material in the key-exchange phase, but that is generally not a problem. So once we have the key exchange, the rest is actually pretty simple, and the focus is on the key-exchange phase. Now, I do not have enough time to go over all of the protocols, let alone one of them with an actual proof, but I want to look at one of the key-exchange protocols, the one for bounded-state tampering, which is actually the simplest of the key exchanges. It is based on a two-non-malleable extractor. What happens in this key exchange is simply that both parties sample two random strings, all of a certain length; the length depends on the exact parameters of the extractor and also on the size s of the state that the tampering function can keep, because these strings need to be long enough that the tampering function cannot remember enough of any one of them to extract any meaningful information. These strings are then simply sent alternately: S0 sends alpha 1, then beta 1 is sent, then alpha 2, then beta 2. Afterwards, keys K1 and K2 are extracted from the alphas and the betas, and the final key is derived by XORing the two keys. The idea is that if this extractor is two-non-malleable, then those keys K and K' remain almost uniform. Basically, we choose the parameters of the extractor so that the tampering function, when it tries to tamper with, say, the alphas, cannot tamper with alpha 2 in a way that depends on too much information about alpha 1. And then you can see that if the tampering function tampers with alpha 1 or alpha 2, or at least one of them, you will get an almost uniform key on one side that differs from the uniformly distributed key on the other side, which means the parties will not agree on their key, the key confirmation will fail, and they will just abort the protocol. And this check whether the protocol aborts can be done without actually knowing the inputs, because all of this can be simulated without knowledge of the inputs: the key-exchange and key-confirmation phases do not depend on the inputs at all, and the protocol-execution phase, conditioned on the key being close to uniform, is also approximately simulatable without knowledge of the inputs. So the distribution sampler D_f can just pretend to run the protocol, see what the tampering function does, and if the tampering function tries to tamper with any of the messages, abort; this is what would happen in the real protocol as well. As I said, we look at three classes of tampering functions, and this is the simplest of the key-exchange phases. In the other key-exchange phases we need a lot more rounds, and we use a special kind of secret-sharing scheme with which we can detect certain classes of tampering. But the general idea is always the same: we run a key-exchange phase that is secure against a structurally limited class of tampering functions, and once you have that, you can easily get an INMC for this class of tampering functions. With that, I'd like to conclude. If there are any questions, I don't know if we have time, but otherwise I'll be happy to answer them offline.

Q: We do have time for at least one question. What are the rates that you are getting? What is the rate of these codes?

A: The rate? Not very good. It's not good enough that we cared about it.

Q: Even using a CRS? Sorry, even after you are using a CRS, you are still not getting good rate?

A: I'm sorry, I have a bit of bad hearing, so I didn't understand. 

Q: So your protocol uses a CRS?

A: No, the protocol does not use a CRS. We only use a CRS in the impossibility result.

Host: Okay, so let's thank Nils again.
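As an appendix-style illustration of the bounded-state key exchange and the encrypted execution phase described above, here is a hedged Python sketch. The blockwise inner-product extractor, the parameter choices, and the way the key material is split into pad and MAC keys are all my own illustrative stand-ins; in particular, the actual construction requires a two-non-malleable extractor, which a plain inner-product extractor is not.

```python
import secrets

# Illustrative parameters only; the real construction ties the string
# length to the extractor parameters and the state bound s.
KEYLEN = 160                 # bits of extracted key material
BLOCK = 16                   # input bits consumed per output bit
NBITS = KEYLEN * BLOCK       # length of each exchanged string
P = (1 << 61) - 1            # prime modulus for the one-time MAC

def ip_extract(x, y):
    """Toy blockwise inner-product two-source extractor over GF(2).
    A stand-in only: the actual protocol requires a *two-non-malleable*
    extractor, which this is not."""
    key = 0
    for i in range(KEYLEN):
        xb = (x >> (i * BLOCK)) & ((1 << BLOCK) - 1)
        yb = (y >> (i * BLOCK)) & ((1 << BLOCK) - 1)
        key |= (bin(xb & yb).count("1") & 1) << i
    return key

# --- Key-exchange phase: alpha1, beta1, alpha2, beta2 sent alternately.
alpha1, alpha2 = secrets.randbits(NBITS), secrets.randbits(NBITS)
beta1, beta2 = secrets.randbits(NBITS), secrets.randbits(NBITS)

k1 = ip_extract(alpha1, alpha2)      # key extracted from the alphas
k2 = ip_extract(beta1, beta2)        # key extracted from the betas
key = k1 ^ k2                        # final key: XOR of the two

# --- Protocol-execution phase: one-time-pad encryption plus a one-time
# MAC (tag = a*ct + b mod P).  Splitting `key` into pad and MAC keys
# this way is purely illustrative.
pad = key & 0xFFFFFFFF
a = (((key >> 32) & P) % P) or 1     # nonzero MAC key in Z_P
b = (key >> 96) & P

msg = 0xDEADBEEF                     # some 32-bit protocol message
ct = msg ^ pad                       # one-time-pad encryption
tag = (a * ct + b) % P               # authenticate the ciphertext

assert ct ^ pad == msg                # receiver decrypts correctly
assert (a * (ct ^ 1) + b) % P != tag  # a single flipped bit is detected
```

Without tampering, both simulators derive the same key, so key confirmation succeeds; tampering with a ciphertext makes the MAC check fail, which causes the abort the analysis relies on.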