Hi, welcome. This talk is about improved classical and quantum algorithms for the subset-sum problem. I am Yixin Shen from the University of Paris, and this is joint work with Xavier Bonnetain, Rémi Bricout and André Chailloux. The subset-sum problem is the following: given n integers a_1, ..., a_n and a target S, we want to find a subset of the integers that sums to the given target. The subset we are looking for can be represented by an n-bit vector e of coefficients in {0,1}, where each e_i indicates whether the corresponding a_i belongs to the subset or not. More generally, we are interested in a modular version of the subset-sum problem, where the sum only has to equal S modulo some modulus 2^l. The decision version of this problem is well known to be NP-complete, and there are three very different regimes depending on the relationship between l and n. In this talk we are interested in the case where l is roughly equal to n, because this is the hardest case; the other regimes can be approached by different techniques and can be solved efficiently.

The subset-sum problem is relevant for cryptography because it is used as a hard problem for post-quantum cryptography, and it appears as a subroutine in the quantum hidden-shift algorithms used for isogeny-based cryptanalysis. Furthermore, the techniques that we will see also apply to other problems, such as generic decoding algorithms. In cryptography we are interested in random instances, not just worst-case instances. Typically, we sample the weights and the target uniformly at random as n-bit integers. Note that the modulus is then also 2^n, so we expect this to be a hard instance. All known classical and quantum algorithms take exponential time, and we are interested in improving the exponent in the time complexity. We further make the extra assumption that the Hamming weight of the solution is n/2; in other words, we are looking for subsets containing half of the weights. This is without loss of generality, because it is known to be the hardest case.

There have been various improvements in the exponent over the years. We were able to improve the best classical exponent to 0.283, although it is a very small improvement. But our main contribution is in the quantum setting. Before I can present it to you, you need to know more about quantum memory models. In order to obtain a quantum speedup, we always need quantum random access to the data, that is, the ability to read all the data in superposition. However, we may or may not assume access to quantum writes. All previous subset-sum algorithms assumed the stronger memory model called quantum memory with quantum random access, where quantum writes are allowed. But in reality, this model is not yet considered realistic. What is more realistic is the classical memory with quantum random access model, where only classical writes are possible.

Our first result is an algorithm in this weaker model that is competitive with the state of the art, even though the previous algorithms use the stronger memory model and a conjecture on quantum walk updates that I will not detail here. So it is reassuring to see that we do not actually need quantum writes. Our second result, which I will not talk about in this talk, is a faster quantum algorithm in the stronger memory model. We have two variants of that algorithm, one with the same conjecture and one without. In fact, we were also able to remove the use of the conjecture in all previous papers, and this also applies to the quantum information set decoding paper of Kachigar and Tillich. Unfortunately, we were not able to remove the use of the conjecture in our best algorithm. If you are interested in this part of the results, I invite you to read the full version of our paper on ePrint.
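Before going into the algorithms, here is a minimal brute-force sketch of the problem as just stated, to fix notation. The function names and the toy instance are only illustrative, not code from the paper; it simply enumerates the candidate vectors e of Hamming weight n/2 and checks the modular constraint, which is the trivial exponential baseline that everything below improves on.

    import random
    from itertools import combinations

    def subset_sum_bruteforce(a, s, modulus):
        # Look for e in {0,1}^n of Hamming weight n/2 with sum(e_i * a_i) = s (mod modulus).
        # Illustrative only: this runs in time about binomial(n, n/2), i.e. roughly 2^n.
        n = len(a)
        for support in combinations(range(n), n // 2):
            if sum(a[i] for i in support) % modulus == s % modulus:
                return [1 if i in support else 0 for i in range(n)]
        return None

    # Toy random instance with a planted solution (n = 8, weights and modulus on 8 bits).
    n = 8
    a = [random.randrange(2 ** n) for _ in range(n)]
    planted = random.sample(range(n), n // 2)
    s = sum(a[i] for i in planted) % (2 ** n)
    print(subset_sum_bruteforce(a, s, 2 ** n))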
I will now briefly explain the classical approach. We would like to find the solution e of weight n/2. What we can do is to choose a random intermediate target t and solve the two easier subset-sum problems shown here (one with target t, the other with target S - t). They are easier because the Hamming weight of the vectors we are looking for is now n/4 instead of n/2, and the modulus is smaller: it is now 2^(cn), where c is a constant smaller than one, whereas for the original problem it was 2^n. When we have two solutions e_1 and e_2, we add them together and hope that the sum is a solution to the original problem. For this to work, two non-trivial conditions must hold. We know that the sum is a solution modulo 2^(cn), but not necessarily modulo 2^n. Furthermore, the weight of the sum is not always n/2, because e_1 and e_2 may both have a one at the same position, and when we add them together it becomes a two. The idea is that if we have many different solutions e_1 and e_2, we can hope that one of the pairs gives a solution to the original subset-sum problem. Therefore, the strategy is not to find just one pair of solutions e_1 and e_2, but two lists of solutions to the easier subset-sum problems.

We call this process of putting solutions together merging and filtering: we merge solutions with respect to a bigger modulus, and then we filter the results to keep those of the expected Hamming weight. If the two lists of solutions are uniformly distributed, then the result of the merge is also uniformly distributed, provided the previous modulus is a divisor of the new one, and we can compute the expected number of merged solutions. However, it is not clear that the filtering step also produces a uniformly distributed list. Here we make use of a standard heuristic in the field, which says that the filtered list is indeed uniformly distributed.

The algorithm introduced by Howgrave-Graham and Joux (HGJ) implements this idea. It can be represented by a tree, where each level can be seen as one step of merging and filtering. Here we introduce the notation D^m[alpha], the set of vectors of length m and Hamming weight alpha*m. The top level of the tree is a little bit different: we start with vectors where either the first half or the second half of the coordinates are all zero and merge them together. The merged vectors then directly have the required weight, and we only need to verify the modular constraint. This is an easy way to construct a list of vectors which satisfy both a weight and a modular constraint.

After the HGJ algorithm, Becker, Coron and Joux (BCJ) improved it by using vectors with coefficients not only in {0,1} but in {-1,0,1}. In the last step of the tree we still need to find the solution with half zeros and half ones, but now the solution can be decomposed into sums of vectors with coefficients in {-1,0,1}. There are many more ways to do that, and this improves the complexity to 2^(0.291 n).
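To make the merge-and-filter step just described more concrete, here is a minimal classical sketch. The function names, the way lists are represented, and the parameters are illustrative choices of mine, not code from the paper: given two lists of coefficient vectors, it merges the pairs that satisfy the modular constraint with respect to the bigger modulus and keeps only the sums of the target Hamming weight.

    from collections import defaultdict

    def merge_and_filter(list1, list2, a, target, new_modulus, target_weight):
        # list1, list2: lists of integer coefficient vectors (e.g. entries in {0,1}).
        # Keeps e1 + e2 such that <e1 + e2, a> = target (mod new_modulus)
        # and the Hamming weight of e1 + e2 equals target_weight.
        def value(e):
            return sum(ei * ai for ei, ai in zip(e, a)) % new_modulus

        # Index list2 by the value needed to complete the modular constraint.
        by_value = defaultdict(list)
        for e2 in list2:
            by_value[value(e2)].append(e2)

        merged = []
        for e1 in list1:
            need = (target - value(e1)) % new_modulus
            for e2 in by_value[need]:                                # merge: modular constraint
                e = [x + y for x, y in zip(e1, e2)]
                if sum(1 for c in e if c != 0) == target_weight:     # filter: weight constraint
                    merged.append(e)
        return merged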
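The gain from allowing coefficients in {-1,0,1}, as in BCJ, comes from the number of ways a given solution can be decomposed. The toy count below is my own illustration, not from the paper: for a small weight-n/2 vector it enumerates how many pairs e1 + e2 = e exist when the coefficients of e1 and e2 are restricted to {0,1} versus {-1,0,1}.

    from itertools import product

    def count_decompositions(e, coeffs):
        # Count pairs (e1, e2) with entries in `coeffs` such that e1 + e2 == e coordinate-wise.
        count = 0
        for e1 in product(coeffs, repeat=len(e)):
            e2 = tuple(ei - x for ei, x in zip(e, e1))
            if all(c in coeffs for c in e2):
                count += 1
        return count

    e = (1, 1, 0, 0)  # toy solution of weight n/2 with n = 4
    print(count_decompositions(e, (0, 1)))       # 4 decompositions with coefficients in {0,1}
    print(count_decompositions(e, (-1, 0, 1)))   # 36 decompositions with coefficients in {-1,0,1}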
In our results, we have two new ideas. The first one is that in the previous algorithms, the starting lists at the top of the tree are the sets of all possible vectors of a certain Hamming weight, and if the cardinality of such a set is large, then the time complexity can also be large. Our idea is instead to sample these starting lists uniformly at random from those sets and to choose the list sizes that optimize the running time. With this idea alone, we can already improve the BCJ exponent to 0.289. The second idea is a natural extension of the BCJ algorithm: why should we stop at coefficients in {-1,0,1}? We can also allow the coefficient 2, which further increases the number of decompositions of the solution vector. By combining these two ideas, we improve the classical exponent to 0.283.

So now I will talk about our quantum algorithm using classical memory with quantum random access. Suppose we have N elements, only t of them are good and the rest are bad, and we want to quickly find a good one. We assume that we can sample an element of X at random and test whether it is good or not. If we repeatedly sample until we find a good element, then we have built a sampling procedure for G, and we expect that it will need about N/t tries to succeed. Consider the following example with four good elements: we sample one element, it is bad; we sample another one, still bad; yet another bad one; and finally we succeed at the fourth trial. In the classical case, this is the best we can do if we can only sample and test. But in the quantum case we can do much better: we only need roughly sqrt(N/t) queries. Obviously, this requires the sampling and test procedures to be quantum algorithms. The way it works is that we start with a uniform superposition over the search space; if we measured now, we would get a random element. What we do instead is to use the test algorithm to amplify the amplitude of the good elements and reduce that of the others. If we amplify only once, we can still get bad elements, but if we amplify roughly sqrt(N/t) times, then with high probability we obtain a superposition of good elements only.

Now I will describe our quantum subset-sum algorithm using classical memory with quantum random access. In the classical case, we have two lists, we merge them with some modular constraint, and we keep only the sums of the required Hamming weight. A different point of view is to assume that one of the lists is given: in this picture the blue list is given, and we assume that we have a sampling procedure for the red list. We then build a sampler for the result of the merge. The way it works is that we sample from the list L1 and we search for a match in L2. This is an instance of the search problem that we saw before, and it gives the complexity on the right. After that, we filter the list L, which is another search instance. Now, if we have a quantum sampler for the list L1, we obtain a quadratic speedup for the two search steps, and we obtain the formula on the right for the time complexity of these steps.

The HGJ algorithm can then be seen as follows: we first construct a sampler for the lists at level 3, then we obtain a sampler for the lists at level 2, then at level 1, and in the end we obtain a sampler for the solution. Quantum search improves the sampling time of the lists, but there is one problem: when we merge, we sample from one of the lists and assume quantum random access to the other, which means that we need to build that other list classically first. In particular, if these stored lists are as big as the lists in the classical HGJ algorithm, then we get no speedup. The solution is to make the tree unbalanced, so that the classically stored lists are much smaller than the quantumly sampled ones. By optimizing the sizes of the lists, we obtain the exponent 0.2356.
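As a quick illustration of the classical-versus-quantum gap just described, here is a toy comparison with names of my own choosing: classically, repeat-until-success needs about N/t samples on average, while amplitude amplification needs about sqrt(N/t) rounds of the sample-and-test pair.

    import math
    import random

    def classical_sample_until_good(sample, is_good):
        # Classical repeat-until-success: expected number of tries is about N/t.
        tries = 0
        while True:
            tries += 1
            x = sample()
            if is_good(x):
                return x, tries

    def amplification_rounds(num_elements, num_good):
        # Rough number of amplitude-amplification rounds: about (pi/4) * sqrt(N/t).
        return math.ceil((math.pi / 4) * math.sqrt(num_elements / num_good))

    # Toy numbers: N = 2**20 elements, t = 4 good ones.
    N, t = 2 ** 20, 4
    good = set(random.sample(range(N), t))
    _, tries = classical_sample_until_good(lambda: random.randrange(N), lambda x: x in good)
    print(tries, "classical tries vs about", amplification_rounds(N, t), "amplification rounds")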
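And here is a classical sketch of the sampler point of view used above; again this is an illustration under my own naming conventions, not the paper's code. The blue list L2 is stored and indexed, the red list L1 is only accessed through a sampler, and we produce one element of the merged-and-filtered list by repeated sampling. In the quantum algorithm, the two repeat-until-success loops are replaced by amplitude amplification, which gives the quadratic speedup, and only the stored list requires classical memory with quantum random access.

    import random
    from collections import defaultdict

    def make_merge_sampler(sample_L1, stored_L2, a, target, modulus, target_weight):
        # sample_L1:  function returning one random vector of the (red) list L1
        # stored_L2:  the (blue) list L2, which is stored and indexed below
        def value(e):
            return sum(ei * ai for ei, ai in zip(e, a)) % modulus

        index = defaultdict(list)            # L2 indexed by its value modulo `modulus`
        for e2 in stored_L2:
            index[value(e2)].append(e2)

        def sample_merged():
            # Repeat-until-success; quantumly, both the search for a match and the
            # weight filtering are done with amplitude amplification instead.
            while True:
                e1 = sample_L1()
                matches = index[(target - value(e1)) % modulus]
                if not matches:
                    continue
                e = [x + y for x, y in zip(e1, random.choice(matches))]
                if sum(1 for c in e if c != 0) == target_weight:
                    return e

        return sample_merged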
An important feature of our algorithm is that it only uses a polynomial number of qubits. To conclude, let me show you again the results that we obtained in this paper in the classical and quantum settings; they are all the fastest in their respective category.

As you have surely noticed during the presentation, our algorithm in the weaker quantum memory model only uses {0,1} coefficients for the intermediate vectors in the tree. We actually tried to add -1, but the asymmetry of the tree makes the formulas more difficult to handle and our optimization program does not seem to converge. As a general observation, using more symbols for the coefficients improves the algorithms, but the improvement becomes smaller and smaller as we continue to do so. One open question is thus how far we can go with this method. Another open question concerns the conjecture on quantum walk updates that we were not able to remove completely; we probably need to extend the MNRS quantum walk framework to solve this problem. A third open problem is that, since the subset-sum problem and the generic decoding problem share some similarities, one can hope to obtain better quantum algorithms for generic decoding as well, especially in the weaker quantum memory model. That concludes my talk. Thank you for listening.