All right, welcome. This presentation will be about merging in quantum k-XOR and k-SUM algorithms, and this is joint work with María Naya-Plasencia. So we're going to be interested in merging algorithms, which have a very broad range of applications. First I'm going to introduce the merging technique in the case where the problem has many solutions, then we're going to see that in the quantum setting, and then we're going to extend it to the single-solution case. So the main topic is quantum algorithms, but we're going to remain at a very high level, so you don't need to be a quantum expert. So originally the main idea was to solve the generalized birthday problem. We're going to consider oracle versions of these problems, where we are given oracle access to an n-bit to n-bit function h. In the first one, there are many solutions: we're just looking for k elements such that their images XOR to 0. And the second problem for us is the one where there is a unique solution: you restrict the inputs of h to n/k bits, and you're going to find a single solution tuple. The XOR can be replaced by modular addition everywhere; it's just a bit more technical. So originally the problem was defined by querying only lists, and we're going to consider an oracle here. There are many applications for this problem. Sometimes there is a direct reduction to the k-SUM problem. Even if this doesn't necessarily give the best algorithm, because there are dedicated ones, sometimes we get good time-memory trade-offs, for example. So with a direct reduction we have the subset-sum problem, the parity-check problem, LPN as well. And the multiple-encryption problem is not directly reducible to k-SUM, but similar algorithms apply to it.
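To make the first problem concrete, here is a minimal Python sketch of the many-solutions k-XOR problem, with a toy table standing in for the oracle h and a planted solution (the function name and setup are ours, purely for illustration); it solves the problem by naive enumeration of k-tuples, which is exactly what the merging algorithms below improve on.

```python
import itertools
import random

def k_xor_bruteforce(h, k, domain):
    """Many-solutions k-XOR: find k distinct inputs whose images
    under h XOR to zero, by naive enumeration of k-tuples."""
    for tup in itertools.combinations(domain, k):
        acc = 0
        for x in tup:
            acc ^= h(x)
        if acc == 0:
            return tup
    return None

# toy oracle: a table of random 8-bit values with one planted solution
random.seed(0)
table = [random.randrange(256) for _ in range(15)]
table.append(table[0] ^ table[1] ^ table[2])  # h(0)^h(1)^h(2)^h(15) = 0
solution = k_xor_bruteforce(table.__getitem__, 4, range(16))
```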
So there have been many works on this topic, but we're going to focus on the time complexity, and more precisely on the exponents; we're going to forget about all the logarithmic improvements. We mainly need two results. The first one is that the optimal query complexity is 2^(n/k), because if you make 2^(n/k) queries to h, then you can build 2^n k-tuples, and one of them XORs to 0 with high probability. The second is the time complexity for general k in this many-solutions case, which is given by Wagner's algorithm on the next slide, and we're going to focus on that. So Wagner's algorithm works using this idea of merging. In Wagner's paper, this is computing the join operator. If you have two lists that contain elements, let's say outputs of the function h, then it's really easy and really efficient to compute the join of these lists: the set of pairs from L1 × L2 that partially collide on a prefix of bits. It's really efficient because you can just assume that both lists are sorted and go through both at the same time, and the time needed is basically the time to go through the input lists plus the time to produce the output list. Wagner's algorithm applies this recursively, so it's a tree of merges. And it depends on ⌊log2 k⌋ because 2^⌊log2 k⌋ is the biggest number of lists that we can merge in such a binary tree structure. I'm going to give an example with four lists, and we're going to keep four lists until the end of this presentation. Okay, so we start by building these four lists by querying 2^(n/3) elements each. When I say elements, it's simply x and h(x). And then we're going to compute two joins, L1 ⋈ L2 and L3 ⋈ L4, so we merge into these two intermediate lists, and we merge on n/3 bits. So we obtain lists which are also of size 2^(n/3) in expectation.
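The join step just described can be sketched as follows: a minimal Python version, assuming lists of integer values and matching on the low u bits (the function name is ours). Both lists are sorted by those u bits and traversed with two pointers in a single pass.

```python
def join(l1, l2, u):
    """Join of two lists of values: all pairs (a, b) from l1 x l2
    that collide on the low u bits, i.e. (a ^ b) % 2**u == 0."""
    mask = (1 << u) - 1
    l1 = sorted(l1, key=lambda x: x & mask)
    l2 = sorted(l2, key=lambda x: x & mask)
    out, i, j = [], 0, 0
    while i < len(l1) and j < len(l2):
        a, b = l1[i] & mask, l2[j] & mask
        if a < b:
            i += 1
        elif a > b:
            j += 1
        else:
            # gather the runs of equal u-bit prefixes on both sides
            i2 = i
            while i2 < len(l1) and (l1[i2] & mask) == a:
                i2 += 1
            j2 = j
            while j2 < len(l2) and (l2[j2] & mask) == a:
                j2 += 1
            out.extend((x, y) for x in l1[i:i2] for y in l2[j:j2])
            i, j = i2, j2
    return out
```

The cost is the time to scan the inputs plus the time to write the output, as stated in the talk.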
And then we merge a second time. But in the end, we only want a single solution, so we're going to merge on the remaining 2n/3 bits. And that gives a single solution in expectation, because we have two lists of size 2^(n/3) and 2n/3 bits that remain to be put to 0. So the overall time complexity is 2^(n/3). If we had eight lists, it would be 2^(n/4); with 16 lists, 2^(n/5); and so on. In the quantum setting, there is a similar query complexity story. In that case, we're given quantum oracle access to the function h that produces the elements. The optimal query complexity is 2^(n/(k+1)) instead of 2^(n/k), which was shown in previous work. And for any k, there are previous algorithms that give time complexities such as 2^(n/3) for collisions (k = 2) instead of 2^(n/2), and an exponent n/(2 + ⌊log2 k⌋) instead of n/(1 + ⌊log2 k⌋) for any k. All these complexities are for algorithms that use quantum-accessible memory. I'm going to focus on that; there are also results without it, but these are simpler to analyze. Okay. So in this previous work, we had a very similar curve as classically, and actually the algorithms that were used were very similar: the curve of exponents decreases step by step. In the new framework that I'm going to present, we can do better than that, something that doesn't happen classically: the time complexity decreases strictly at each new value of k, giving a much smoother curve of complexity exponents. So now I'm going to present the framework that enables this and give a few examples. Okay. So in order to perform these quantum merging steps, we're going to use quantum search, and in order to present quantum search, we can start by presenting classical search. So classical exhaustive search is where we have a search space.
And we have good elements in this search space, we know how to recognize the good elements, and we know how to sample an element from the search space at random. If we're able to do that, then we can turn this sampler over the search space into a sampler over the good subspace only: we just sample repeatedly from the search space and test each time whether we have a good element or not. Depending on the number t of good elements among N, we need to repeat this N/t times. And we're really going to see this as a function, an operator that transforms a sampler for the search space into a sampler for the solution space. Because in the quantum setting, things are really similar, except that everything is quantum. If we have a quantum algorithm that samples from the search space and a quantum algorithm that tests whether an element is good, and I don't even need to go into the details, then we can obtain a quantum algorithm that samples from the good subspace. And it only needs to repeat this sampling-and-testing operation sqrt(N/t) times instead of N/t. This is where the famous square-root speedup of quantum search comes from. We need to implement these functions as quantum algorithms, but that's all. And once we have done that, we have a quantum algorithm such that, if we measure the result, we obtain a good element. But we can also use this sampler inside another quantum search: we can pile up quantum searches the same way we would pile up classical searches. This is an interesting feature. Okay, classical merging can be seen as a sampling procedure; here is how. We want to create the list L from the two lists L1 and L2, but what we're actually going to do is sample from the list L. In order to sample from L, we suppose that L2 has been created and is stored somewhere, and we're going to sample elements from L1.
When we sample from L1, we try to match the element against L2 to find one with a matching prefix, so that we can put a new element in L. So there is a time to sample from L that depends on the time to sample from L1 and on the number of new zero bits we want to obtain. And if we wanted to compute the full join, the full list L, then we could simply sample from it repeatedly. If we put this idea in Wagner's algorithm, this means that we don't want to build the root list L0, we want to sample from it. Okay, this doesn't change a lot because L0 has only one element anyway, so we want to sample from L0 once, hooray. But then what we're going to do is suppose that L3 ⋈ L4 has been built and is stored somewhere; it is given. And we're going to sample from L0 by sampling from L1 ⋈ L2, and we need to sample from L1 ⋈ L2 about 2^(n/3) times before we find an element of L0. And if we suppose again that L2 has been stored in memory, then we can do that by sampling from L1. So there is a pile of searches: in the algorithm, we sample from L1, this enables us to sample from L1 ⋈ L2, and this in turn enables us to sample from the final list. This doesn't change anything in the classical time complexity of Wagner's algorithm; it merely changes the amount of memory, because now we're doing everything while only storing two lists, L2 and L3 ⋈ L4, the two blue ones. But it's really important in the quantum setting, because there this whole right branch is an exhaustive search that we can do quantumly. So this is the idea of quantum merging. The lists in blue remain exactly the same: they are built, stored in memory, and there to help us. And then we're going to sample from the list L1 and use that in order to sample from the list L, and we just quantum search everywhere.
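The sampling view of merging can be sketched in Python as follows (a classical toy version, with names of our own choosing): one list is stored and indexed, the other is only sampled, and the result is itself a sampler, so such samplers can be piled up exactly as the talk describes.

```python
import random

def make_join_sampler(sample_a, stored_b, u):
    """Sampling view of the join: instead of building A join B, keep B
    stored (indexed on its low u bits) and draw elements of the join
    by sampling from A until the drawn value matches something in B.
    The returned function is itself a sampler, usable as the input of
    the next level of the merging tree."""
    mask = (1 << u) - 1
    index = {}
    for v in stored_b:
        index.setdefault(v & mask, []).append(v)
    def sample():
        while True:
            a = sample_a()
            bucket = index.get(a & mask)
            if bucket:
                return (a, random.choice(bucket))
    return sample

# toy use: A sampled on the fly, B of size 2**8 stored, match on 8 bits
stored_b = [random.randrange(2 ** 16) for _ in range(2 ** 8)]
sample_a = lambda: random.randrange(2 ** 16)
sample_ab = make_join_sampler(sample_a, stored_b, 8)
a, b = sample_ab()
```

Quantum merging replaces the inner rejection loop by a quantum search, reducing the number of draws from A by a square-root factor.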
So the time to sample from L has been reduced by a square-root factor compared to the previous formula. On this tree, this means that the time to compute the right branch, the time to sample the element of L0, is reduced by a square-root factor: instead of needing to sample from L1 2^(n/3) times classically, we need to sample it quantumly 2^(n/6) times. So this is great, but it doesn't change the total time complexity yet, because we still have these two blue intermediate lists, which are of size 2^(n/3) and cost the same time to create. What we need to do now is adapt to these quantum capabilities: we need to rebalance the tree. In order to reoptimize, we take smaller lists L2 and L3 ⋈ L4, and a much bigger list L1, a much bigger space to span using Grover's algorithm, with the quantum search matching against much smaller intermediate lists. So we take L1 of size 2^(n/2) and the others of size 2^(n/4), and then everything is balanced at complexity 2^(n/4). In general, a k-XOR problem can be decomposed in many ways, because you can consider it as another problem which involves smaller k'-XOR subproblems, and so on. And we try to optimize the strategy over all the possible decompositions, using the fact that each new list is sampled using quantum searches that rely on intermediate lists, the blue ones in the example. If you write everything down, there is this space of possible merging strategies to span, and optimizing the exponents in the time complexities over it is a linear problem. So you can find the best strategies using a mixed integer linear programming framework, which is what we did. This enabled us to find the merging strategies that led to the complexities we saw in the graph, and then we moved on to actual proofs.
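A back-of-the-envelope version of this rebalancing for the four-list tree, under a deliberately simplified cost model of our own (this is not the paper's full MILP): building the two stored lists of size 2^(m·n) costs 2^(m·n) classically, and the quantum search over L1 then costs about the square root of the 2^((1-2m)·n) candidates needed for a full collision; balancing the two recovers the 2^(n/4) exponent from the talk.

```python
from fractions import Fraction

def rebalanced_4xor_exponent():
    """Grid-search the list-size exponent m (as a fraction of n) that
    balances classical list building against the quantum search, in a
    simplified cost model for the four-list tree."""
    best = None
    for k in range(0, 101):
        m = Fraction(k, 100)        # size exponent of the stored lists
        build = m                   # classical construction cost
        search = (1 - 2 * m) / 2    # sqrt of the 2^((1-2m)n) candidates
        cost = max(build, search)
        if best is None or cost < best[0]:
            best = (cost, m)
    return best

cost, m = rebalanced_4xor_exponent()
```

The optimum lands at m = 1/4 with total exponent 1/4, i.e. time 2^(n/4) with lists of size 2^(n/4); the real framework optimizes over all decompositions at once.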
So we got this closed formula for the complexity in the case where there are many solutions. But among all the examples presented at the beginning, these were actually all for the single-solution case, and sometimes we also have intermediate cases where there are only a few solutions. If there is a single solution to find, then we can try to merge as well, and do something similar, but we're going to run into trouble. For example, in the four-list example that I'm going to keep, all merges become trivial: if we put a non-trivial prefix in the join operators here, then we are going to miss the solution. So in fact we can't really merge; we're forced to do something trivial. However, we're going to use another idea. We still want to merge, because merging is a very efficient operation, why not, and we still want to use a non-empty prefix. But since we're going to miss the solution, we have to repeat the computation for all values of this prefix. This is Schroeppel and Shamir's four-list algorithm, and in general this is the dissection technique of Dinur, Dunkelman, Keller and Shamir from CRYPTO 2012. Classically, this is really interesting to save memory, but we're going to see that quantumly it also reduces the time complexity, which is a nice feature. So how does it work in Schroeppel and Shamir's algorithm? We take a prefix s of size n/4 so that when we merge, we obtain intermediate lists of size 2^(n/4) as well. And then there is a solution only with probability 2^(-n/4), which means we have to repeat this for every value of s. The whole tree here costs only 2^(n/4) time, so the total time complexity, when we repeat for every s, is 2^(n/2). But the memory complexity has decreased: it's now 2^(n/4). In the quantum setting, we're going to do the same thing.
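Before the quantum version, the classical repeat-over-the-prefix idea can be sketched on a toy instance (our own minimal version, with naive quadratic joins for brevity; the real algorithm merges sorted lists, and the point is that only one intermediate list per prefix guess is ever stored).

```python
import random

def four_xor_single_solution(l1, l2, l3, l4, n, u):
    """Schroeppel-Shamir-style single-solution 4-XOR: guess the value
    s of the low u bits of v1 ^ v2 (hence also of v3 ^ v4), build the
    left join under that guess, and match right-hand pairs against it
    on the full n bits. Memory stays at the intermediate-list size."""
    mask = (1 << u) - 1
    for s in range(1 << u):
        left = {}
        for a in l1:
            for b in l2:
                if (a ^ b) & mask == s:
                    left.setdefault(a ^ b, (a, b))
        for c in l3:
            for d in l4:
                if (c ^ d) & mask == s and (c ^ d) in left:
                    a, b = left[c ^ d]
                    return (a, b, c, d)
    return None

# toy instance: 16-bit values, lists of size 16, one planted solution
random.seed(1)
n, u = 16, 4
l1 = [random.randrange(1 << n) for _ in range(16)]
l2 = [random.randrange(1 << n) for _ in range(16)]
l3 = [random.randrange(1 << n) for _ in range(16)]
l4 = [random.randrange(1 << n) for _ in range(15)]
l4.append(l1[0] ^ l2[0] ^ l3[0])  # plant a single 4-XOR solution
sol = four_xor_single_solution(l1, l2, l3, l4, n, u)
```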
It's just going to be a little more complicated, because the intermediate list, which was of size 2^(n/4), is a bit too large for us. We still keep the prefix s of size n/4, and we simply add a new parameter i, which defines a choice of sublist of L3. So once we have chosen some s and some i, we have an arbitrary sublist of L3 of size 2^(n/8), and we compute a join which is of size 2^(n/8) as well. And here we have L2 of size 2^(n/4). Now we do a Grover search over L1 to find an element of the final list, and this only succeeds with probability 2^(-3n/8), so we have to repeat all of this in a loop. If we look at the complexities this gives: we compute the join operator in time 2^(n/8), we do the Grover search in time 2^(n/8), and we loop over all choices of the sublist of L3 and of the prefix s. This gives a complexity of 2^(5n/16), a little below 2^(n/3). Why below 2^(n/3)? Because 2^(n/3) is what we would have obtained quantumly if we only tried to merge two lists of size 2^(n/2), basically. And this is really interesting: it means there is a real improvement in the time complexity from doing this. Actually, if you have a problem that can be cut into any number of lists, so a k-list problem with a single solution but for any k, which is the case for the subset-sum problem for example, the best time complexity we could obtain with these methods is 2^(0.3n), which is also smaller than 2^(n/3), and it happens for k = 5 or multiples of 5. It turns out that in this case the complexity is really well balanced between computing the intermediate join, doing the Grover search, and looping over the value of s. This advantage in time comes from the fact that, in general, the quantum advantage in merging is not quadratic.
It's less than quadratic. But the advantage in loops is quadratic, because we are using Grover's algorithm for the loops. Which means that as we put more work into the loops, we do better in time than if we were only merging; and this is something that really doesn't happen classically. If you want to compare with other algorithms that solve the same problem, there is the algorithm by Ambainis that solves the problem for two lists, basically the element distinctness problem, or the algorithm of Bernstein, Jeffery, Lange and Meurer solving the four-list problem. That algorithm also obtains a time of 2^(0.3n), but of course only for four lists or a multiple of four. For every k which is not a multiple of four, we get a better exponent if we optimize with our method; only when k is a multiple of four is ours not strictly better. So to go back to the problems that I briefly mentioned at the beginning of the talk: we could obtain improved algorithms for basically all of them. For the parity-check problem, because this is a k-list problem as well; for the multiple-encryption problem, because this is similar to a k-list problem; and for subset-sum, if we simply use a k-list algorithm to solve it, we obtain the best time-memory product, not the best time, because there are better dedicated algorithms, but those use much more memory than we do. And we could also plug this into the dissection-BKW algorithm from CRYPTO 2018, which is an algorithm that uses less memory than the BKW algorithm for LPN and LWE, and this improved all the quantum time-memory trade-offs. For more details about the framework and the applications, there is a full version of the paper, and also some code available that computes the best merging strategies. And with that, thank you for your attention.