So I'm Avijit Dutta, and I'm going to present the paper "Encrypt or Decrypt? To Make a Single-Key Beyond Birthday Secure Nonce-Based MAC". It's a joint work with Nilanjan Datta, Mridul Nandi, and Kan Yasuda. So informally, a message authentication code, or a MAC, is a symmetric-key algorithm that ensures the integrity of a message. Specifically, if Alice wants to send a message to Bob, and they do not care about the confidentiality of the message but about its integrity, then Alice and Bob first share a secret key K through some secure key-exchange protocol. Then Alice applies the MAC algorithm on the key K and the message M that she wants to transmit, generates the tag T, and sends the message-tag pair to Bob. Upon receiving it, Bob checks whether the tag is valid or not by applying a verification algorithm on the key K, the message M, and the tag T. The security notion of a MAC says that after seeing sufficiently many valid message-tag pairs, no computationally bounded adversary can forge a fresh valid message-tag pair with significant probability. Now there are two types of MAC: one is stateful MAC, the other is stateless MAC. A nonce-based MAC is an example of a stateful MAC. A nonce-based MAC is a MAC where the MAC algorithm, apart from the key K and the message M, takes an additional input called the nonce N, and the verification algorithm likewise takes the nonce N as an additional input along with the key K, the message M, and the tag T. Now there are two security notions for a nonce-based MAC. One is nonce-respecting security, in which the adversary is not allowed to repeat any nonce across its MAC queries, and the other is nonce-misuse security, where the adversary is not bound by this rule. In either case, the adversary can repeat nonces while making its verification queries. The first nonce-based MAC was proposed by Wegman and Carter in 1981, and their construction is known as the Wegman-Carter MAC.
So in the Wegman-Carter MAC, the hash value of the message is masked with a random stream. But the limitation of this scheme is that each time you authenticate a message, it needs to generate the random stream fresh. So one possibility is to introduce a pseudorandom function F_K and apply it to a nonce, so that the pad output by F_K is XORed with the hash value of the message to generate the tag T. Well, this gives beyond-birthday-bound, in fact close to optimal, security in the nonce-respecting setting: it gives security of the order of epsilon times q_v, where epsilon is the differential probability of the underlying hash function and q_v is the number of verification attempts of the adversary. But it has no nonce-misuse security: if you repeat a nonce even once, it has no security at all. In practice, however, it is sometimes difficult to maintain the uniqueness of nonces, so we want a scheme that retains some security when a nonce repeats. In CRYPTO 2016, Cogliati and Seurin came up with a construction in which they encrypt the output of the Wegman-Carter MAC, and hence the construction is known as Encrypted Wegman-Carter. This construction gives the same security in the nonce-respecting setting as the Wegman-Carter MAC, but additionally it gives birthday-bound, i.e., n/2-bit, security in the nonce-misuse setting. But there are few practical candidates for pseudorandom functions, so one may want to replace this F_K by a pseudorandom permutation, i.e., a block cipher E_K. Once you replace the pseudorandom function F_K with the pseudorandom permutation E_K, though, the nonce-respecting security of the resulting construction drops to the birthday bound. So we are looking to construct a MAC scheme which gives beyond-birthday-bound security in the nonce-respecting setting. So what can we do? How do we instantiate this F_K? One popular approach is to instantiate F_K by the well-known sum-of-permutations construction.
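To make the structure concrete, here is a toy Python sketch of this nonce-based construction (my own illustration, not code from the paper): a polynomial hash over GF(2^64) stands in for the AXU hash, and HMAC-SHA256 truncated to 64 bits stands in for the pseudorandom function F_K; the block size, reduction polynomial, and key formats are all assumptions for the sketch.

```python
import hmac
import hashlib

N_BITS = 64
POLY = 0x1B  # x^64 = x^4 + x^3 + x + 1, an assumed toy reduction polynomial

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^64) with polynomial reduction."""
    r = 0
    for _ in range(N_BITS):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N_BITS:
            a ^= (1 << N_BITS) | POLY
    return r

def poly_hash(kh, blocks):
    """Polynomial hash of the message blocks in Horner form (AXU-style toy)."""
    h = 0
    for m in blocks:
        h = gf_mul(h ^ m, kh)
    return h

def prf(key, nonce):
    """Stand-in for F_K(N): HMAC-SHA256 truncated to 64 bits."""
    d = hmac.new(key, nonce.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d[:8], "big")

def wc_mac(key, kh, nonce, blocks):
    # tag = H_kh(M) xor F_K(N): the hash value masked by a nonce-derived pad
    return poly_hash(kh, blocks) ^ prf(key, nonce)

def wc_verify(key, kh, nonce, blocks, tag):
    return wc_mac(key, kh, nonce, blocks) == tag
```

The fragility under nonce misuse is visible directly: two tags computed under the same nonce XOR to H_kh(M) xor H_kh(M'), with the pad cancelled out.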
So this F_K is replaced by the sum-of-permutations construction. We know that the sum of permutations is an optimally secure PRF, and hence the resulting construction gives essentially optimal security. But this scheme requires three block cipher calls. So the question is: can we reduce the number of block cipher calls? The answer is yes. In CRYPTO 2016, Cogliati and Seurin proposed EWCDM, or Encrypted Wegman-Carter with Davies-Meyer. It's a nonce-based MAC where the pseudorandom function F_K is instantiated by a keyed Davies-Meyer function. It gives 2n/3-bit MAC security in the nonce-respecting setting and n/2-bit security in the nonce-misuse setting, where n is the block size of the underlying block cipher. In the same paper, the authors conjectured that EWCDM is secure up to n bits in the nonce-respecting setting, and that the single-keyed EWCDM, where the two block cipher keys are equal, is also beyond-birthday-bound secure against nonce-respecting adversaries. In CRYPTO 2017, Mennink and Neves proved the optimal PRF security of the construction, and their proof essentially relied on Patarin's mirror theory technique. But the n-bit security proof of Patarin's mirror theory is extremely hard to verify. In fact, in DCC 2018, Cogliati and Seurin proved the beyond-birthday-bound PRF security of the single-keyed Encrypted Davies-Meyer construction, and there they acknowledged the difficulty of proving the beyond-birthday-bound security of single-keyed EWCDM. Here comes the motivation of our construction: the Decrypted Wegman-Carter with Davies-Meyer construction, or in short DWCDM. So the rest of the talk is organized as follows. First we talk about the specification of our construction, followed by the necessity of our nonce-space reduction. Then we talk about mirror theory and extended mirror theory, which is a useful tool to prove the security of our construction.
Then we give an overview of the security proof of our construction, and we finally conclude by giving a glimpse of a purely single-keyed variant of DWCDM, known as one-key DWCDM. So let's begin. The DWCDM construction is pretty much similar to the EWCDM construction. The only part we have changed is that the second block cipher call is replaced by a decryption call. So as you can see, it's a single-keyed nonce-based MAC, but the nonce space is 2n/3 bits; that means the remaining n/3 bits of the nonce are set to zero. Why do we need this restriction? We will come to that. We obtain 2n/3-bit MAC security in the nonce-respecting setting and n/2-bit security in the nonce-misuse setting. Well, there are a couple of assumptions on the underlying hash function H. First of all, it has to be regular: for any x and any y, the probability that H(x) = y is negligible. It has to be almost XOR universal, and it has to be 3-way regular: for any distinct x1, x2, x3 and any non-zero y, the probability that H(x1) xor H(x2) xor H(x3) = y is negligible. So why do we reduce the nonce space? We show that if we take the full n-bit nonce space, then we eventually land in a birthday-bound forgery attack. Suppose an adversary fixes a message m, and the nonce is, say, x1. It queries the MAC oracle with the nonce x1 and the message m and obtains the tag x2. In the second query, it sets the nonce to the previous response: that is, it queries with nonce x2 and the message m and obtains x3. It continues in this way. Now, if it observes that x3 xor x4 xor x5 xor x6 = 0, then it can make a valid forgery attempt by setting the nonce to x6, the message to m, and the tag to x3. In general, if xi xor x(i+1) xor ... xor xj = 0, then one can come up with a valid forging tuple (xj, m, xi).
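To make the construction and the role of the nonce restriction concrete, here is a toy Python sketch (my own illustration: the 16-bit block size, the seeded random permutation standing in for E_K, and the SHA-256-based stand-in hash are all assumptions, and the toy deliberately allows the full nonce space so the chaining relation behind the attack is visible).

```python
import hashlib
import random

N_BITS = 16              # toy block size; the real construction uses n = 128
SIZE = 1 << N_BITS

# a seeded random permutation stands in for the keyed block cipher E_K
_rng = random.Random(2018)
_perm = list(range(SIZE))
_rng.shuffle(_perm)
_inv = [0] * SIZE
for _x, _y in enumerate(_perm):
    _inv[_y] = _x

def E(x):     return _perm[x]   # E_K
def E_inv(y): return _inv[y]    # E_K^{-1}

def H(kh, m):
    # toy stand-in for the AXU hash (not the paper's hash function)
    return int.from_bytes(hashlib.sha256(kh + m).digest()[:2], "big")

def dwcdm_mac(kh, nonce, m):
    # DWCDM: T = E_K^{-1}( E_K(N) xor N xor H_kh(M) )
    return E_inv(E(nonce) ^ nonce ^ H(kh, m))

def dwcdm_verify(kh, nonce, m, tag):
    return dwcdm_mac(kh, nonce, m) == tag

# Chaining a fixed message m, i.e. x_{k+1} = MAC(x_k, m), gives the relation
#   E(x_{k+1}) = E(x_k) xor x_k xor H_kh(m),
# so XORing an even number of consecutive relations cancels the hash value.
```

Summing the relation over k = i, ..., j-1 shows that x_i xor ... xor x_j = 0 (with an even number of terms, as in the x3..x6 example) forces MAC(x_j, m) = x_i, yielding the forging tuple (x_j, m, x_i) without ever querying nonce x_j.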
So now the second phase of my talk is about Patarin's mirror theory. Let us consider a system of q equations: P_{n1} xor P_{t1} = lambda_1, P_{n2} xor P_{t2} = lambda_2, and so on and so forth, up to P_{nq} xor P_{tq} = lambda_q, where we assume that the lambda values are nonzero. Phi is a surjective index mapping function which maps the set of 2q indices to the set {1, ..., r}. When we apply this phi function, we get a reduced system of equations: P_{phi(n1)} xor P_{phi(t1)} = lambda_1, P_{phi(n2)} xor P_{phi(t2)} = lambda_2, and so on up to P_{phi(nq)} xor P_{phi(tq)} = lambda_q. This system of q equations is over r variables. The goal of mirror theory is to lower bound the number of solutions P such that P_a is different from P_b for any distinct a and b in the set {1, ..., r}. So the general setup of mirror theory is that we have r distinct unknowns, a system of q equations, and an index mapping function. One can view this system of equations as a graph, where the nodes of the graph are the variables of the reduced system of equations and each equation contributes an edge. We are interested specifically in two types of graph. One is the circle, meaning the equations form a cycle, and the other is a degenerate graph. So you see, when we have a circle graph, the corresponding system of equations is not of full rank. Whereas in the case of degeneracy, we have, say, P_{phi(n1)} xor P_{phi(t1)} = lambda_1, P_{phi(t1)} xor P_{phi(t3)} = lambda_2, and P_{phi(t3)} xor P_{phi(n2)} = lambda_1 xor lambda_2. If you just combine them up, if you mix them linearly, then you eventually end up with P_{phi(n1)} = P_{phi(n2)}, which contradicts the distinctness of the unknowns. This kind of graph is not desired for mirror theory.
And the main theorem of mirror theory says that if my graph contains no circle and is non-degenerate, that is, all the equations are good, then for a fixed phi and a fixed tuple of lambdas, the number of distinct solutions is at least the falling factorial (2^n)_r divided by 2^{nq}, provided the maximum component size, denoted xi_max, satisfies the constraint (xi_max - 1)^2 * r <= 2^n / 67. The proof of mirror theory is an inductive proof, where the induction is on the number of components. The proof is verifiable up to 3n/4-bit security; beyond that it is extremely hard to verify. By definition, mirror theory deals with a general system of equations and non-equations, but until now no treatment of the non-equations had appeared. So the goal of our extended mirror theory is to incorporate affine non-equations along with the affine equations, and to lower bound the number of distinct solutions of a system of bivariate affine equations together with bivariate affine non-equations. The general setup of extended mirror theory is pretty much the same as for mirror theory, but we now have the non-equations part. We have r variables, a system of q equations and v non-equations, and phi is a surjective index mapping function which maps the 2(q + v) indices to the set {1, ..., r}. Here again we can view the system of equations and non-equations as a graph. We again have the two bad types of graph, circles and degeneracy, but along with that we have another type, called degeneracy of type 2. So say the system here is P_{phi(n1)} xor P_{phi(t1)} = lambda_1, P_{phi(n2)} xor P_{phi(t2)} = lambda_2, and P_{phi(n3)} xor P_{phi(t3)} not equal to lambda_1 xor lambda_2.
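For intuition, the mirror-theory lower bound can be checked exhaustively on a tiny instance. This is my own sanity check, not from the paper: a two-equation path over n = 4 bits, which is circle-free and non-degenerate since lambda_1, lambda_2, and lambda_1 xor lambda_2 are all nonzero.

```python
from itertools import product

n = 4                      # toy block size
N = 1 << n
lam1, lam2 = 0x3, 0x5      # nonzero and distinct, so the path is non-degenerate

# count solutions of P1 ^ P2 = lam1, P2 ^ P3 = lam2 with P1, P2, P3 distinct
count = 0
for p1, p2, p3 in product(range(N), repeat=3):
    if p1 ^ p2 == lam1 and p2 ^ p3 == lam2 and len({p1, p2, p3}) == 3:
        count += 1

r, q = 3, 2                # three unknowns, two equations
bound = 1.0
for i in range(r):
    bound *= (N - i)       # falling factorial (2^n)_r ...
bound /= float(N) ** q     # ... divided by 2^{nq}

print(count, bound)        # -> 16 13.125
```

Here every choice of p2 determines p1 and p3, and distinctness holds automatically, so there are exactly 16 solutions, comfortably above the mirror-theory guarantee of (2^n)_r / 2^{nq} = 13.125.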
Now when we view these equations as a graph: we have P_{phi(n1)} xor P_{phi(t1)} = lambda_1 and P_{phi(n2)} xor P_{phi(t2)} = lambda_2 with phi(t1) = phi(n2), so the equations already force P_{phi(n1)} xor P_{phi(t2)} = lambda_1 xor lambda_2, while at the same time phi(n3) = phi(n1) and phi(t3) = phi(t2), so the non-equation says exactly the opposite. That means the resulting system of equations and non-equations becomes inconsistent, right? There is no solution to this system of equations. So we do not want this type of graph either when we incorporate affine non-equations into our system. And the main result of extended mirror theory that we have proven in our paper is: if the graph is circle-free and non-degenerate of type 1 and type 2, then for a fixed surjective index mapping function phi and a tuple lambda', the number of distinct solutions, with the maximum component size restricted to 3, is at least (2^n)_{3q/2} / 2^{nq} times (1 - 5q^2/2^{2n} - v/2^n), where v is the number of affine non-equations. So now, in the third phase of the talk, I'll give a brief overview of the security proof of our construction. We have used the H-coefficient technique to prove the security of our construction. So the adversary is interacting either with the real world or the ideal world. The real world comprises two oracles: one is the MAC oracle and the other is the verification oracle. The ideal world also consists of two oracles: one is a random oracle and the other is a reject oracle. When the adversary queries the random oracle with a nonce and a message, it just returns a random value as the tag, and when it queries the reject oracle with a nonce, a message, and a tag, it always rejects. The advantage of the adversary A is defined as the difference between the probability that A outputs 1 when interacting with the real world and the probability that A outputs 1 when interacting with the ideal world.
After the interaction is over, the adversary obtains a transcript tau, which is the union of the MAC transcript tau_M and the verification transcript tau_V. We denote by X_re the probability distribution of transcripts in the real world and by X_id the probability distribution of transcripts in the ideal world, and the set of all attainable transcripts is partitioned into two sets: the good transcripts GoodT and the bad transcripts BadT. The main theorem of the H-coefficient technique says that if there exist two positive parameters, say epsilon_ratio and epsilon_bad, such that for every good transcript tau the ratio of the probability of obtaining tau in the real world to the probability of obtaining tau in the ideal world is at least 1 - epsilon_ratio, and the probability that an ideal-world transcript is bad is upper bounded by epsilon_bad, then the advantage is bounded by epsilon_ratio + epsilon_bad. So, when interacting with the real oracle, the eventual system of equations looks like this: on the left-hand side are the MAC equations and on the right-hand side the verification non-equations, where lambda_i = N_i xor H(M_i) and lambda'_i = N'_i xor H(M'_i). From this system of equations we want to characterize certain bad events, and we characterize them keeping in mind the bad events of the corresponding extended mirror theory system. So here is my first bad event: lambda_i = 0 for some i; if you can recall, in our mirror theory system we required the lambda_i values to be nonzero. The next is lambda_i = lambda_j and T_i = T_j, which is basically a degeneracy of type 1.
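The H-coefficient theorem invoked above can be stated compactly as follows (a standard formulation of the technique, written out here for reference in the notation of the talk):

```latex
\textbf{Theorem (H-coefficient).}
Partition the attainable transcripts as
$\mathcal{T} = \mathrm{GoodT} \sqcup \mathrm{BadT}$, and suppose there exist
$\varepsilon_{\mathrm{ratio}}, \varepsilon_{\mathrm{bad}} \geq 0$ such that
\[
  \frac{\Pr[X_{\mathrm{re}} = \tau]}{\Pr[X_{\mathrm{id}} = \tau]}
    \;\geq\; 1 - \varepsilon_{\mathrm{ratio}}
  \quad \text{for all } \tau \in \mathrm{GoodT},
  \qquad
  \Pr[X_{\mathrm{id}} \in \mathrm{BadT}] \;\leq\; \varepsilon_{\mathrm{bad}}.
\]
Then the distinguishing advantage of any adversary is at most
$\varepsilon_{\mathrm{ratio}} + \varepsilon_{\mathrm{bad}}$.
```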
Another bad event is N_i = T_j together with lambda_i = lambda_j, which is again a degeneracy of type 1, and another is T_i = 0. On the right-hand side we have the bounds for the corresponding events. The first event is bounded by q_m epsilon_reg, where epsilon_reg is the regular advantage of the underlying hash function. The second is bounded by q_m^2 epsilon_AXU / 2^n; that's pretty clear, because lambda_i = lambda_j comes from the differential probability, and T_i = T_j contributes the 1/2^n factor. The bound for the third event is likewise q_m^2 epsilon_AXU / 2^n, and for T_i = 0 it is q_m / 2^n. So, are we done? No, not yet. We also have to bound the component size of the MAC graph; we have three types of components to control in the MAC graph, and we have to bound the circles in the MAC graph as well. Here we need to deal with two lengths of circle, because any cycle of size larger than three would already force a MAC-graph component of size greater than three. The bound for both of these events is q_m / 2^{2n/3}. We are still not done: we also have to bound the circles of the verification graph. So here we need to bound the cycles of length two and length three of the verification graph, and we do not need to go beyond that, because anything longer would again lead to a MAC-graph component of size greater than three. The probability of this event is at most the maximum of 2 q_v epsilon_3reg, where epsilon_3reg is the 3-way regular advantage of the underlying hash function, 2 q_v epsilon_AXU, q_v epsilon_reg, and q_m / 2^{2n/3}. So, in summary, we have a bad probability of the order of q_m / 2^{2n/3}, and if these bad events do not happen, then we have a nice system of equations.
We have a nice graph that contains no circle and has no degeneracy of type 1 or type 2, and from the extended mirror theory we get the bound 5q^2 / 2^{2n} + q_v / 2^n, where q_v is the number of affine non-equations. Hence, by applying the H-coefficient technique, the advantage against the MAC is essentially bounded by q_m / 2^{2n/3} + q_v / 2^n. So finally, we have also shown a purely single-keyed variant of DWCDM, where we derive the hash key as the block cipher output on a fixed string. The fixed string is, say, 0^{n-1}1, and the hash key is derived as the block cipher output on this fixed string. The security proof of one-key DWCDM is pretty much similar to that of DWCDM, and it provides the same level of security as DWCDM. As for future work, we are hopeful that DWCDM can be proven secure up to 3n/4 bits with a nonce space of n - 1 bits; we are currently working on this, and we hope to finish it as soon as possible. Thank you.