So thank you for the introduction. This is the very last talk, so I'll try to keep it light. I'm going to talk about our work on non-malleable codes, which is joint work with my coauthors. Let's start with an example. Say you have this beautiful tie and you want to send it to your friend, but there is a crazy painter on the street who paints the tie in transit, and what is received on the other end is this ugly rag. What can we do? We can formalize this by saying that we have a message m, and we send it over an adversarial channel which applies a tampering function f; what is received on the other end is the tampered message m* = f(m). In general, we think of this tampering function f as coming from a class of functions script F — you can think of it as the painter having buckets of different colors. So what can we do? Maybe we put our tie in a box and send that, and hope that when the painter paints again, only the box gets painted, so that we can somehow remove the box and recover our nice-looking tie. How do we do this? We use a coding scheme: we take our message m, encode it to get a codeword c, and send the codeword c. Again the channel applies a tampering function f, and what is received is c* = f(c). We then decode c* to get some message. One thing I want to highlight here is that we are in the coding setting: there are no secrets — encode and decode are completely public, with no secret keys at all. In general, what we expect from such a scheme is that for tampering functions coming from certain classes, we can guarantee some nice relation between the decoded message and the original message.
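To fix the syntax described above, here is a minimal Python sketch of the keyless coding-scheme interface. All names are illustrative and the encode/decode bodies are trivial placeholders, not the actual construction from the talk; only the shape — public encode, adversarial channel, public decode — is meant.

```python
# Toy sketch of the (keyless) coding-scheme syntax: encode and decode are
# public placeholders; the channel applies a tampering function f from
# some restricted class F.

def encode(m: bytes) -> bytes:
    # A real scheme adds redundancy; this placeholder just copies m.
    return bytes(m)

def decode(c: bytes) -> bytes:
    # Placeholder decoding: the identity.
    return bytes(c)

def channel(c: bytes, f) -> bytes:
    # The adversarial channel applies a tampering function f to the codeword.
    return f(c)

# With identity tampering, decoding recovers the original message.
m = b"message"
c_star = channel(encode(m), lambda c: c)
assert decode(c_star) == m
```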
This is what we want to gain from the coding scheme. So what are some examples of such coding schemes? A very well-known example is error-correcting codes, where the function family we consider consists of functions that can modify at most some fraction — say at most half — of the codeword symbols, and the desired relation is that the decoded message is exactly the original message. What we gain here is correctness. But now consider a stronger adversary who can potentially modify every single codeword symbol. Obviously we cannot get correctness in this case, but what we can get is something called non-malleability. The relation we want here is that when we decode the tampered codeword, we get either the original message or something completely independent of it. What do I mean by independent? The attacker should not be able to tamper with the codeword in such a way that after decoding we get m + 1, or some other function of m. We either get the message we started with, or we get something completely unrelated to the message. That is the property of non-malleability, and I will talk more about it in a minute. So let's look at related work. Non-malleable codes were introduced by Dziembowski, Pietrzak, and Wichs in 2010. In that original paper, they also showed an impossibility result: we cannot achieve this kind of tamper resilience against every efficient tampering function. Why is this? Because there is a simple attack: decode the codeword to get the message, add one, and re-encode. There is no way to win against this attack, so we always have to somehow restrict the class of tampering functions.
This question has been studied extensively in the split-state model, as we have seen, and there is a lot of work on how to restrict the adversary there, how to get stronger split-state models, and how to improve the efficiency and rate of such schemes. Another natural way of restricting the adversary is by restricting the complexity of the tampering function. This was studied by Faust et al. in 2014, who considered tampering functions that can be modeled as circuits from a circuit family of some bounded size. In an earlier work by myself and the same set of authors, we considered tampering functions that can be modeled as NC0 circuits — constant-depth circuits with bounded fan-in. So there is this line of work that aims to understand how we can restrict tampering functions in terms of their computational complexity. This was further advanced by a subsequent work which gave non-malleable codes that are tamper resilient against AC0 circuits — constant-depth circuits with unbounded fan-in. However, that construction was inefficient, in the sense that the encoding was not efficient. In this work, we extend this line of research by showing how to construct efficient non-malleable codes against AC0 tampering functions, and also against tampering functions that can be modeled as bounded-depth decision trees and as space-bounded streaming algorithms; I will explain what these three models mean shortly. There are also other related results which introduce different variants of non-malleable codes, with weaker definitions or different models, that I will not talk about here. So our main result is this: we provide a general framework for constructing non-malleable codes for tampering classes for which certain average-case hardness results are known. In other words, we try to leverage results from complexity theory to construct non-malleable codes.
In particular, we instantiate our framework to give the first known efficient constructions against the following tampering families: AC0 tampering, bounded-depth decision trees, and space-bounded streaming tampering. These constructions also support encoding messages of multiple bits. So let's look at these families in detail. The first family we consider is bounded-depth circuits: we specifically consider circuits of depth up to roughly log n / log log n, where n is the length of the codeword. Since this subsumes constant-depth circuits, we in particular get tamper resilience against AC0 circuits. The hard problem we rely on is parity: we use the fact that parity, or XOR, cannot be computed by AC0 circuits. Unfortunately, in this case we need some additional assumptions: to get tamper resilience here, we need the CRS model, a public-key encryption scheme, and a simulation-sound NIZK with certain properties. This will become clear later on. Similarly, we get tamper resilience against tampering functions that can be modeled as bounded-depth decision trees, up to depth n^ε, where again n is the length of the codeword. The hard problem we need is again parity. One thing to note is that the hardness of parity here is information-theoretic — there is no computational assumption behind it — although for bounded-depth decision trees we still need the same computational assumptions as before. However, the most interesting case is streaming tampering: the tampering function can be seen as a space-bounded streaming algorithm, which can be modeled as a branching program of bounded width. This models adversaries whose space is bounded.
In particular, if the adversary has a space bound of n^ε, then without any computational assumptions — relying only on a recent result by Raz from 2016, which shows that learning parity is hard even for streaming algorithms with such a space bound — we get non-malleable codes. So now I'm going to cover some preliminaries, and then I'll get to the construction. Here is the syntax. As I said, we work in the CRS model. We have a CRS-generation algorithm that outputs a CRS, that is, a common reference string. The encoding algorithm takes the CRS, a message m, and randomness, and outputs a codeword c. The decoding algorithm again takes the CRS and a codeword as input, and outputs a message m. Here is a simplified definition of non-malleable codes. We actually achieve a much stronger definition than this, and it implies the standard non-malleability definition which was discussed in the previous talk. Consider this experiment. We have a challenger which generates the CRS and gives it to the attacker. The attacker chooses a tampering function f from the family. The challenger then encodes some message to generate the codeword c, applies the tampering function to get the tampered codeword, and decodes the tampered codeword to obtain the tampered message, which the experiment outputs. The security guarantee we want is that the output of this experiment is indistinguishable for any two messages m0 and m1, for all tampering functions f in the class. Next I'm going to talk a little about the Naor-Yung double-encryption paradigm. This is just a review of a technique that gives non-malleability in the context of encryption. So how does Naor-Yung double encryption work?
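The tampering experiment just described can be sketched in a few lines of Python. The crs_gen, encode, and decode functions below are hypothetical stand-ins (encode/decode ignore the CRS and do nothing), so only the flow of the experiment is faithful to the definition, not the scheme itself.

```python
import secrets

def crs_gen() -> bytes:
    # Challenger samples a common reference string.
    return secrets.token_bytes(16)

def encode(crs: bytes, m: bytes) -> bytes:
    return m  # placeholder codeword

def decode(crs: bytes, c: bytes) -> bytes:
    return c  # placeholder decoding

def tamper_experiment(m: bytes, f) -> bytes:
    crs = crs_gen()             # 1. challenger generates the CRS
    c = encode(crs, m)          # 2. challenger encodes the message
    c_star = f(c)               # 3. adversary's tampering function is applied
    return decode(crs, c_star)  # 4. experiment outputs the tampered message

# Security asks that, for every f in the class F, this output is
# indistinguishable between any two messages m0 and m1.
flip_first_bit = lambda c: bytes([c[0] ^ 1]) + c[1:]
out = tamper_experiment(b"hello", flip_first_bit)
```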
We start with a CPA-secure public-key encryption scheme. We generate two pairs of public and secret keys, and we also generate a CRS and a trapdoor for a NIZK proof system. That is the key generation: we output the two public keys and the CRS as the public key of the system, and one of the secret keys as the secret key of the system. To encrypt a message, we encrypt it under both public keys to get two ciphertexts, and we generate a NIZK proof saying that the plaintext underlying the two ciphertexts is the same; we output the two ciphertexts together with the proof as the ciphertext. Finally, to decrypt, we verify the proof, and if the proof verifies, we decrypt one of the ciphertexts using the secret key we kept. Just to be clear, this is a review of the encryption scheme — I'm still not talking about our construction, because in our setting we do not have a secret. So how does our construction go? Again, in double encryption we have two ciphertexts, decryption uses the secret key corresponding to the first ciphertext, and we have a NIZK proof claiming consistency of the two ciphertexts. In our scheme, we replace this first ciphertext with a string. How do we select the string? Remember that our tampering functions come from a class F with certain complexity restrictions. We choose two distributions D0 and D1 that are hard to distinguish for this complexity class: the distributions can be distinguished by a polynomial-time algorithm, but for all functions in the class F it is hard to distinguish them. Then, to encode a bit b, we simply draw a string x from the distribution Db. This is how we get around the fact that we do not have any secrets — and we do not need any secrets for decoding.
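For reference, here is a structural sketch of the plain Naor-Yung double-encryption paradigm reviewed above. The encryption scheme and the NIZK are stubs (they provide no actual security), so only the shape — two ciphertexts plus an equality proof, verified before decrypting — is meant.

```python
# Structural sketch of Naor-Yung double encryption. All primitives are
# stubs; this illustrates the data flow only, not a secure scheme.

def keygen():
    pk1, sk1 = "pk1", "sk1"        # first key pair
    pk2, sk2 = "pk2", "sk2"        # second key pair
    crs = "crs"                    # CRS for the NIZK system
    return (pk1, pk2, crs), sk1    # publish both pks + CRS, keep sk1

def enc(pk, m):
    return (pk, m)                 # stub ciphertext

def prove_same_plaintext(crs, ct1, ct2):
    return ("pi", ct1, ct2)        # stub NIZK of plaintext equality

def verify(crs, pi, ct1, ct2):
    return pi == ("pi", ct1, ct2)

def encrypt(pub, m):
    pk1, pk2, crs = pub
    ct1, ct2 = enc(pk1, m), enc(pk2, m)    # encrypt under both keys
    pi = prove_same_plaintext(crs, ct1, ct2)
    return (ct1, ct2, pi)

def decrypt(pub, sk1, c):
    ct1, ct2, pi = c
    if not verify(pub[2], pi, ct1, ct2):   # reject if the proof fails
        return None
    return ct1[1]                          # stub "decryption" under sk1

pub, sk1 = keygen()
assert decrypt(pub, sk1, encrypt(pub, b"m")) == b"m"
```

Our construction, as described above, keeps this shape but swaps the first ciphertext for a string drawn from a hard distribution, since there is no secret key.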
We keep the second ciphertext, and we have a NIZK proof claiming that the ciphertext and the string are consistent with each other. So, as I said, from this point onwards you can think of the tampering function family F as the AC0 functions. We are in the CRS model, and the distributions D0 and D1 can be thought of as uniformly random strings with parity zero or one, respectively. And because parity is hard to compute in AC0, these distributions are hard to distinguish for AC0. We also need a public-key encryption scheme with certain properties, and we need a NIZK. One thing I want to note is that these encryption and proof schemes are not required to be secure against all polynomial-time adversaries, only against the tampering class — and this is the reason why in some cases we can get rid of the computational assumptions and actually get information-theoretic results. So let's look at the proof. In the Naor-Yung proof for double encryption, the first step replaces the proof with a simulated proof; we do exactly the same thing. In the next step, Naor-Yung replaces the ciphertext which is never decrypted in the real world with a random value; we likewise replace our second ciphertext with a random value. In the next step, the Naor-Yung proof switches which secret key the decryption uses. Remember, we do not have any secrets, so this is the crucial step in our proof: we need to introduce an alternate decoding algorithm, which allows us to switch from the real decoding to the alternate decoding. I'm not going to give the details here — you can check the paper — but this is the technically challenging step. In the final step, we switch the hard distribution, that is, we switch the string from D0 to D1. And why does this proof work? To show that it works, we need to show that if an adversary could break the scheme, we could compute parity using an AC0 circuit. So how do we go about this?
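The hard distributions for the AC0 instantiation are simple enough to sample directly. Here is a small Python sketch: D_b is uniform over n-bit strings of parity b, which a polynomial-time algorithm distinguishes perfectly by computing the parity, while constant-depth circuits provably cannot.

```python
import secrets

def sample_parity_string(b: int, n: int) -> list[int]:
    # Uniform n-bit string conditioned on having parity b: draw n-1 bits
    # uniformly, then fix the last bit so the total parity is b.
    x = [secrets.randbelow(2) for _ in range(n - 1)]
    x.append((b - sum(x)) % 2)
    return x

def poly_time_distinguisher(x: list[int]) -> int:
    # XOR of all bits: trivial in polynomial time, but hard for AC0.
    p = 0
    for bit in x:
        p ^= bit
    return p

for b in (0, 1):
    assert poly_time_distinguisher(sample_parity_string(b, 64)) == b
```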
So, basically, the proof strategy I just showed you, when instantiated with the appropriate ingredients, ensures that the entire tampering experiment from the definition can be simulated within the tampering class — AC0, in this case. What this means is that if the output of the experiment changes when we start from an encoding of zero instead of an encoding of one, then we have an AC0 adversary who can distinguish between an encoding of zero and an encoding of one. In particular, this means we can distinguish between D0 and D1 using an AC0 circuit, and that means we can compute parity, because these distributions are exactly the strings of parity zero and one — a contradiction. So now I'm going to talk briefly about how this can be seen as a generic framework for constructing non-malleable codes. Remember the properties we needed for the hybrids: simulatability of proofs, semantic security, and so on. The properties we need are the following. Suppose we have two distributions that are hard for the tampering class, a public-key encryption scheme with decryption in the tampering class, and a NIZK scheme with verification in F. Suppose further that we have simulatability of proofs — I just want to note that this is a slightly weaker notion of simulation, where we only need the guarantee against the tampering class instead of against polynomial-time attackers, and we can actually give out the randomness of the verifier — simulatability of encryption, that is, semantic security against the tampering class, simulation soundness, and a property we call hardness of alternate decoding, which means that when we switch from the real decoding to the alternate decoding, the distributions are still hard to distinguish for the tampering class. Then we get non-malleable codes against the tampering class F. And with a slight strengthening, this framework can also be used to encode multiple bits.
The first three conditions remain exactly the same; we only need to strengthen, or boost, the last condition. Now we want to ensure that the alternate decoding algorithm decode', when composed with any Boolean function taking k inputs, still cannot distinguish between the hard distributions D0 and D1, where k is the number of bits we want to encode. So, to summarize, the whole idea behind the framework is this: given a tampering class F and some Boolean function g, if we know that every function in the class F has low correlation with g, then we can potentially construct non-malleable codes against the class F. The way we do it is to construct the distribution Db as the distribution over inputs x such that g(x) = b, for b equal to zero and one. If D0 and D1 are easy to sample, and we have the other computational assumptions I mentioned, then we get efficient non-malleable codes. So the summary is: we give a framework for constructing non-malleable codes from certain average-case hardness results, and we instantiate it against these three classes of tampering. Some interesting open questions: Can we find more instantiations of our framework? Can we get rid of the computational assumptions, as we could for the space-bounded streaming tampering functions? Can we get rid of the CRS model? And can we explore more interesting connections with complexity theory, so that we can benefit from advancements in that field? Thank you very much. Questions? [Audience question, partially inaudible, about the assumptions needed for the instantiations — in particular, how NIZK verification is achieved within the tampering class for AC0.]
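The generic recipe just summarized — define D_b as uniform over the preimage g^{-1}(b) — can be sketched with rejection sampling. The choice of g below (majority) is purely illustrative and is not one of the instantiations from the talk; any roughly balanced, easy-to-evaluate g with low correlation against the class F would fit this shape.

```python
import secrets

def majority(x: list[int]) -> int:
    # Illustrative hard function g; in the talk's AC0 instantiation g is
    # parity, majority is used here only as an example.
    return 1 if sum(x) > len(x) // 2 else 0

def sample_D(b: int, g, n: int) -> list[int]:
    # D_b: uniform over {x in {0,1}^n : g(x) = b}, via rejection sampling.
    # Efficient whenever g is roughly balanced over uniform inputs.
    while True:
        x = [secrets.randbelow(2) for _ in range(n)]
        if g(x) == b:
            return x

for b in (0, 1):
    assert majority(sample_D(b, majority, 9)) == b
```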
For AC0, it's pretty simple, because we just compute the tableau of the verifier's computation, and since that is a constant-depth circuit, you get verification in AC0. [Follow-up question, partially inaudible, about how the NIZK used in the construction is obtained.] We use a known result [name inaudible]. There is nothing special there — we just use the standard construction and compute the tableau so that the verification can be done in AC0. [Question, partially inaudible, about whether average-case hardness of parity is assumed, and how it is applied.] We do not assume that it is hard to compute parity; we just need to show how to obtain these hard distributions from the hard problem we have. And as I said, we simply select random strings whose parity is the bit we want to encode, zero or one. Okay. Any other questions? Last chance at this conference. Okay, thank you. Thank you.