So now we are ready for the next talk, which is Improved Security for OCB3 by Ritam Bhaumik and Mridul Nandi. Unfortunately, both could not be here due to, as I understand, visa problems or something like that, which is why Avradip Mandal will give the talk for them. And since no one asked questions in the first two talks, I'm guessing everyone is saving their questions for him.

Okay, hello everybody. So the story goes like this. Last Saturday I was preparing to come over here, and suddenly Mridul gave me a call asking whether I could present this talk. So I said, why not? I had no idea of the paper, so it would be really fun. So here we are. Hopefully the contents of the slides are simple enough that all of us can follow what's going on.

So this talk is about Improved Security of OCB3. OCB3 is an authenticated encryption scheme, and any authenticated encryption scheme is a length-expanding encryption scheme: it combines privacy and integrity. The CAESAR competition aims at building a portfolio of authenticated encryption schemes, and right now we are in the third round. Among all the candidates that are still in the competition, OCB3 is one of them. OCB3 has also been standardized, as RFC 7253, in 2014.

So OCB3 is a nonce-based authenticated encryption scheme. In a nonce-based authenticated encryption scheme, in addition to the message and the header, or associated data, there is also a nonce. The guarantee is that the nonce is a non-repeating additional input. The advantage is that this non-repeating feature helps us design efficient authenticated encryption schemes; the disadvantage is that we have to do some nonce management.

So this is the formal definition of a nonce-based authenticated encryption scheme. Basically it takes four inputs: key, nonce, associated data and the message. And it outputs a ciphertext and a tag.
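The interface just described, (key, nonce, associated data, message) in, (ciphertext, tag) out, can be sketched concretely. This is a toy construction built from SHA-256 and HMAC purely to illustrate the syntax and the correctness condition; it is not OCB3, and all names here are mine, not the paper's.

```python
# Toy nonce-based AE with the (key, nonce, AD, message) -> (ciphertext, tag)
# interface from the talk. Built from SHA-256 and HMAC purely to illustrate
# the syntax and the correctness condition -- this is NOT OCB3.
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive an n-byte keystream from key and nonce (counter mode over SHA-256)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes):
    """Return (ciphertext, tag); the nonce must never repeat under one key."""
    c = bytes(m ^ k for m, k in zip(msg, _keystream(key, nonce, len(msg))))
    # The tag binds nonce, associated data and ciphertext together.
    t = hmac.new(key, nonce + ad + c, hashlib.sha256).digest()[:16]
    return c, t

def decrypt(key: bytes, nonce: bytes, ad: bytes, c: bytes, t: bytes):
    """Return the message, or None (the 'bottom' symbol) if verification fails."""
    expected = hmac.new(key, nonce + ad + c, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(expected, t):
        return None
    return bytes(x ^ k for x, k in zip(c, _keystream(key, nonce, len(c))))
```

Decrypting an honestly produced (ciphertext, tag) pair under the same key, nonce and associated data returns the original message; tampering with any of them makes verification fail, which is exactly the correctness condition stated in the talk.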
So for any nonce-based authenticated encryption scheme, first of all we need the correctness condition, which basically says that if you encrypt truthfully, then you will get back the same message when doing the verified decryption. And as we have said, in the nonce-respecting scenario, two encryption queries cannot be made with the same nonce.

Now, as this is an authenticated encryption scheme, we have a privacy notion, which says that for a randomly chosen key and any nonce, associated data and message, the adversary should not be able to distinguish the ciphertext from the output of a random injective function. So basically the ciphertext should look random. And we also have the unforgeability notion, which basically says it's hard to forge this authenticated encryption scheme: unless you have made the corresponding query, you won't be able to produce a ciphertext and a tag that will pass the verification stage. These two security notions can be combined and stated as a single combined notion, which says that the ciphertext and the tag, as a pair, should look random.

So now let's see what OCB3 is. The original OCB was proposed by Rogaway, Bellare, Black and Krovetz in 2001. Then we have a simpler variant called OCB2. And the recent version, OCB3, was submitted to CAESAR and was also published at FSE 2011.

So in OCB3, the key space is 128 bits. The nonce space is 128 bits, with the constraint that the first 122 bits are not all 0. This constraint is kind of important, and it will get used in the proof. Then we have the associated data, which is a string of arbitrary length; the message, which is of arbitrary length; the ciphertext, whose minimum length is 128 bits; and finally the tag. For the tag, the maximum length is 128 bits, but if you want you can have a smaller tag. So this is how the nonce gets handled in OCB3.
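The nonce processing described next revolves around a "stretch-then-shift" hash. As a concrete companion, here is my reading of it, with bit positions taken from RFC 7253's offset derivation: Stretch = Ktop || (Ktop[1..64] XOR Ktop[9..72]), and the output is the 128-bit window of Stretch starting at bit (1 + bottom). Treat this as a sketch of the spec, not the paper's exact notation.

```python
# Sketch of the stretch-then-shift hash used in OCB3's nonce processing,
# following my reading of RFC 7253. `ktop` is the 128-bit hash key obtained
# by encrypting the nonce with its last 6 bits zeroed; `bottom` is the
# 6-bit input (the bottom 6 bits of the nonce).

MASK64 = (1 << 64) - 1
MASK128 = (1 << 128) - 1

def stretch_then_shift(ktop: int, bottom: int) -> int:
    """Hash the 6-bit input `bottom` under the 128-bit hash key `ktop`."""
    assert 0 <= ktop < 2 ** 128 and 0 <= bottom < 64
    top64 = ktop >> 64                        # Ktop[1..64]  (MSB-first numbering)
    mid64 = (ktop >> 56) & MASK64             # Ktop[9..72]
    stretch = (ktop << 64) | (top64 ^ mid64)  # 192-bit Stretch
    # Shift: take the 128-bit window starting `bottom` positions from the top.
    return (stretch >> (64 - bottom)) & MASK128
```

One way to sanity-check the indexing: with bottom = 0, the window is just Ktop itself. The point of the construction is that all 64 possible offsets come from a single block cipher call.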
So we have a stretch-then-shift hash function. For a 128-bit key kappa, it takes a 6-bit input x, and it behaves as an XOR-universal hash function when x is 6 bits long and kappa is sampled uniformly from 128 bits. So these are the two properties guaranteed.

Once we have this stretch-then-shift hash function, this is how we process the nonce. TN is the top 122 bits and BN is the bottom 6 bits. At first we get a hash key KN from TN: basically we apply the block cipher to the nonce with the last 6 bits set to 0. And then we use this KN as the key for the stretch-then-shift hash function, with BN, the bottom 6 bits, as the input. This KN is kind of random, because it's a block cipher output.

This property was not present in the previous versions, OCB and OCB2. It actually makes things efficient across 64 consecutive nonces: we can reuse KN, saving one block cipher call, because KN stays fixed for those 64 nonces.

And we also need to process the associated data, which is done as follows. We break the associated data into blocks. These lambda_i L are various constants times L, where L is the block cipher applied to 0, and this L is used for the masking. And finally, if we process it like this, we get the value Auth. This feeds into the authentication tag that we use in OCB3.

So this is how OCB3 looks. We have the message, and we have these deltas, which are Q plus lambda_i L. L, if you remember, was the block cipher output on 0; lambda_i is some constant; and Q, this part, comes from the nonce. So this looks kind of simple. For each message block, we get a ciphertext block, and we go on like this; here we have C1 to Cl. Finally, we have something called C*, where we are padding. This one is kind of special: it goes right over here.
It doesn't go through this input. And finally, if we XOR all of them, we get the MTag. From the MTag we get the tag: this Auth, which is coming from the associated data, gets mixed in, and finally we get the tag. If we want a smaller tag, we can chop this. And in C*, not all the bits are important, because we know the final few bits: they are the fixed 10* padding. So if we want, we can save transmitted data by chopping these.

The decryption is just the opposite of the previous procedure; we just run everything in reverse. For the verification, we check whether the recomputed tag is the same as the received tag. This verification can also be done, instead of over here, over here: we verify that M'Tag, which is E_K inverse of T' XOR Auth', XORed with this delta, equals the checksum. So this is all good.

So now we come to the main slides of this talk, the results. What do we know about the security bounds? This was the original security bound: when the encryption queries consist of sigma blocks and only one verification query is allowed, the forging advantage is of the order of sigma squared over 2 to the n, plus 1 over 2 to the tau, where tau is the length of the tag. And if you have more verification queries, say q' verification queries, the straightforward extension would be to just multiply by q'. So we get these results; here the bound becomes q' sigma squared.

However, we can actually improve this by a slightly different analysis. If we let sigma' be the total number of blocks in the verification queries, we get a bound in sigma sigma', which is better than the previous one, which was q' sigma squared. Now notice that this is actually a birthday bound in sigma and sigma'. The known privacy attacks are birthday-bound in sigma, so that part of the bound can be matched.
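Collecting the bounds from this part of the talk, as I transcribe them from the slides (sigma = encryption-query blocks, sigma' = verification-query blocks, q' = number of verification queries, tau = tag length, l_max = maximum verification-query length):

```latex
% Original bound (one verification query):
\mathbf{Adv}^{\mathrm{forge}} = O\!\left(\frac{\sigma^2}{2^n}\right) + \frac{1}{2^\tau}
% Naive extension to q' verification queries:
\mathbf{Adv}^{\mathrm{forge}} \le q'\left(O\!\left(\frac{\sigma^2}{2^n}\right) + \frac{1}{2^\tau}\right)
% Slightly different analysis (birthday-bound in \sigma and \sigma'):
\mathbf{Adv}^{\mathrm{forge}} = O\!\left(\frac{\sigma\,\sigma'}{2^n}\right) + \frac{q'}{2^\tau}
% This paper: the \sigma' dependence is reduced to q'\,\ell_{\max}
\mathbf{Adv}^{\mathrm{forge}} = O\!\left(\frac{\sigma^2 + q'\,\ell_{\max}}{2^n}\right) + \frac{q'}{2^\tau}
```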
So this part of the bound is tight, but there are no known attacks matching the birthday bound in sigma'. And now we have the main contribution of this paper, which shows that the sigma' part can actually be reduced to q' l_max. So this is no longer birthday-bound in sigma'. This shows that OCB3 can withstand beyond-birthday-bound many verification queries, as long as sigma stays birthday-bounded and l_max is reasonable. Here l_max is the maximum length of a single verification query. This is useful in scenarios where the amount of encrypted data is limited but many forging attempts can be made.

So it looks like we now have enough time to go over the proof idea. Let's see the proof. The overall proof idea is kind of simple: it's an application of Patarin's coefficient-H technique. We have the real internal blocks in the real world and simulated internal blocks in the ideal world. We mark out the bad cases in the ideal-world sampling and carefully bound the bad probabilities. This is kind of how almost all the proofs in this area work.

The first step is to bound the bad events in the encryption queries, namely any accidental collisions in the block cipher calls. This bounding is straightforward, because here we are not hoping to get something new; we are only aiming for a birthday bound. But the next step gets tricky, because that is where we are going to get the improved bound: the bounding for the verification queries is delicate, since we are allowing internal collisions.

So now let's take a closer look at what's really happening. This is the last step, the final step in the verification. So this is the tag. We have this Auth; this gets passed through the block cipher, and it gets added to this Q plus some constant. The Q depends on the nonce. This L is the masking key, remember, and the masking key is just the block cipher applied to 0.
And this outputs the MTag, and then we can compare whether this MTag is the same as the checksum, the XOR of the M_i's. So this is the check, and if these compare equal, that means verification is going to succeed, which is bad, because then a forgery is being made. So our main goal would be to bound this bad probability by something like q' l_max over 2 to the n, or maybe something else; we will see.

So now we will show that this bad event almost always results from at least two simultaneous collisions. Those bad events give the beyond-birthday part of the bound for the verification queries, namely the quantity sigma squared q' over 2 to the 2n. If we have two simultaneous collisions, the bottom part of the bound is 2 to the 2n, because that's two block cipher calls. And the top part is of this form because we have altogether q' verification queries, which is why we have the q', and the sigma squared comes from the fact that for each bad event we have two collisions, and for each of them we have sigma choices; so that's sigma squared. And if we assume sigma is less than 2 to the n over 2, then this can be stated as q' over 2 to the n.

So now, since we are going into a little bit more detail, let me set up some notation. For an encryption query we have M_i, and the output is C_i; the input to the block cipher call is X_i and the output is Y_i. The other way around, for a verification query we have C_i', which goes to M_i'; the input to the inverse block cipher call is Y_i' and the output is X_i'. Using this, we will now try to see what a bad event really looks like. So this X_i' is trivially determined when C_i' equals C_i for an encryption query with the same nonce.
So in that case M_i' is M_i, because that means this block cipher call has already been determined by the previous queries. When it is not trivially determined, one possibility is that it is freshly sampled; in that case this block cipher output is completely free, and the bad event can happen only with probability 1 over 2 to the n, because we need to match that checksum. Another possibility is that it is determined through an accidental collision, where Y_i' equals Y_j for distinct i and j, maybe with different nonces or different positions.

So we actually have four cases. Case 0: at least one output is fresh; in that case, as we have seen, the probability is 1 over 2 to the n. Case 1: all outputs are trivially determined; in that case it is not valid for forging, because that means the attacker is repeating a query, and that is not a valid attack scenario. Case 2: exactly one output is non-trivially determined, through an accidental collision. Case 3: two or more outputs are.

So this is what Case 2 looks like. Here we have only one collision equation, C_1' plus delta_1' equals C_p plus delta_p, which holds if Y_1' is colliding with Y_p. And we always have a second equation, because the checksum needs to match. So this shows we have two equations, and they involve K_N and L, and these are the two block cipher outputs we have: K_N is generated during nonce processing, and L is the masking, the block cipher output with input 0. And they are actually independent, because, if you remember, we said that in the nonce the first bits cannot be all 0; that is why these are independent block cipher calls. So there are at least two distinct calls to the block cipher, and this probability can be at most 1 over 2 to the 2n. In the second bad case, Case 3, we have two equations straight away, and most often they have rank 2, but there are certain degenerate cases where they collapse into a single equation, and those
cases can be bounded by certain multi-collision events on the block cipher outputs, and they can be shown to be small. There can be some other batches of bad cases as well: collisions with the E_K outputs from nonce processing, collisions with the E_K outputs from associated data processing, and collisions with the E_K outputs from tag generation. But if you analyze all those cases exhaustively, finally you get the nice, clean bound. Thank you. If you have any questions, just contact me.