Hello, I am Akshima, and this is a pre-talk for the paper Time-Space Bounds for Finding Collisions in Merkle-Damgård Hash Functions. This is joint work with Siyao Guo and Qipeng Liu. As the long title of our paper suggests, we study collision resistance with pre-computation in Merkle-Damgård hash functions. More specifically, we study resistance against bounded-length collisions.

A natural question to ask is: why study the pre-computation model? Simply because this model captures stronger and more practical adversaries that can pre-learn something about the hash function and use that knowledge to launch better attacks.

To explain what I mean by bounded-length collisions, let me first present the Merkle-Damgård construction. The construction uses a compression function H that takes fixed-size inputs and outputs strings of fixed length. It takes a salt and a message as input. The message is broken into blocks of fixed size; in the diagram, the message X is broken into B blocks, X1 through XB, and H is applied iteratively on these blocks, starting from the salt as the initial chaining value.

The key parameters in our model are S, the length of the pre-computed information; T, the number of queries made to the compression function H; and B, the maximum length, in blocks, of the collisions to be found.

Some prior works have studied this problem for MD-based hash functions, and we summarize their results in this table. CDGS was the first paper to study this problem; they showed that the advantage can be bounded by ST²/N, and they also gave a matching attack. However, their attack produced collisions that are on the order of T blocks long. So the ACDW paper proposed that it is more meaningful to study bounded-length collision finding. The best attack they could find for B-block collision finding achieves an advantage of STB/N, and they conjectured that this attack is optimal.
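The iteration just described can be sketched in a few lines of Python. This is only an illustrative toy: the block size, the zero-padding, and the SHA-256-based stand-in for the compression function H are assumptions for the sketch, not parameters of any concrete scheme (in the analysis, H is modeled as a random oracle, and real MD constructions use length-strengthening padding rather than plain zero-padding).

```python
import hashlib

BLOCK_BYTES = 32  # illustrative fixed block size


def compress(chaining: bytes, block: bytes) -> bytes:
    # Toy stand-in for the compression function H with fixed-size
    # inputs and fixed-length output; the paper models H as a
    # random oracle.
    return hashlib.sha256(chaining + block).digest()


def merkle_damgard(salt: bytes, message: bytes) -> bytes:
    # Zero-pad so the message splits into fixed-size blocks
    # x_1 .. x_B (a real construction would append the length).
    if len(message) % BLOCK_BYTES:
        message += b"\x00" * (BLOCK_BYTES - len(message) % BLOCK_BYTES)
    state = salt  # the salt is the initial chaining value
    for i in range(0, len(message), BLOCK_BYTES):
        # Apply H iteratively, feeding each block into the chain.
        state = compress(state, message[i:i + BLOCK_BYTES])
    return state
```

A B-block message thus costs exactly B calls to H, which is why collision length is naturally measured in blocks.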
However, they could prove the conjecture only for a restricted class of adversaries; for arbitrary adversaries in the pre-computation model, they could prove it only for two-block collision finding.

The follow-up work of GK, a paper that has also been accepted at this conference, managed to prove the ACDW conjecture for any constant B. More precisely, they gave a bound carrying a factor that is polylogarithmic in S for constant B but grows exponentially with B, so it reduces to a polylog factor when B is a constant. However, this bound quickly becomes meaningless as B grows larger. The GK paper presented another bound, S⁴TB²/N, that holds for any B and any adversary.

Comparing their bound to our bound of STB/N times max(1, ST²/N), it is clear that their bound is looser than ours. In fact, the GK bound can be worse than the ST²/N bound of the CDGS paper when S³B² is greater than T. In comparison, our bound is always at least as good as the ST²/N bound of the CDGS paper.

Let's look at our bound more closely. What does it mean? Whenever ST²/N is at most 1, our bound is STB/N, and this proves the conjecture of ACDW. In the parameter range where ST²/N is greater than 1, our bound is ST²/N times STB/N, which is still better than ST²/N. This shows that finding bounded-length collisions is harder than finding collisions with no restriction on the length.

To learn more about the new techniques we used to achieve these better bounds, please attend our talk at CRYPTO. This is when and where our talk will be. Hope to see you there. Thank you.