Hello. Welcome to the Post-Quantum Cryptography 2 session. We have three talks in this session, and the first talk is by Wessel van Woerden from CWI Amsterdam. Yeah, okay. Thank you for the introduction. So I'm very happy that our paper made it here, even though the title is a bit long and we had this negative correlation with acceptance rate. But I'm very excited to tell you about all these keywords that appear in the title. So let's start with some motivation. If we look at the current landscape of lattice-based crypto, it's almost always based on either the LWE assumption, or SIS, or NTRU, or related assumptions. And the lattices that you get, while they are very versatile — you can do a lot of crypto with them — have rather poor decoding properties. At the same time, many wonderful lattices exist with great geometric properties, and of course the question is then: can we use these in cryptography? Our contributions are a general identification, encryption, and signature scheme based on the lattice isomorphism problem. And by general, I mean that you can put any lattice with certain properties in there and you obtain such a scheme. The better the properties of the lattice, the better the efficiency and security of the eventual scheme will be. Okay, so let's get to the definitions. What's a lattice? Well, given any basis, you can view a lattice as all the integer combinations of these basis vectors. And I was talking about geometric properties, so let's look at a few of these. Probably the most famous one is the first minimum of a lattice: the minimum length of any non-zero lattice vector. Alternatively, you can view this as the minimum distance between any two distinct lattice points. Secondly, we have the determinant of a lattice: if you look at any fundamental domain of the lattice — up to translation by the lattice — then this domain has some volume, which is always the same, and we call it the determinant of the lattice.
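To make the two quantities just defined concrete, here is a small sketch (not from the talk; the 2D basis is made up for illustration) that computes the first minimum by brute force over a small coefficient box, and the determinant as the volume of a fundamental domain:

```python
# Illustration: first minimum (lambda_1) and determinant of a tiny example lattice.
import itertools
import math

# Basis vectors as rows; this particular 2D basis is just an illustration.
B = [[2, 1],
     [1, 3]]

def vec(coeffs, basis):
    # integer combination sum_i c_i * b_i
    n = len(basis[0])
    return [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]

# First minimum: shortest non-zero vector among small integer combinations.
# (Brute force over a coefficient box is only feasible in tiny dimension.)
lam1 = min(
    math.sqrt(sum(x * x for x in vec(c, B)))
    for c in itertools.product(range(-5, 6), repeat=2)
    if any(c)
)

# Determinant: volume of a fundamental domain = |det B| for a square basis.
det = abs(B[0][0] * B[1][1] - B[0][1] * B[1][0])

print(lam1, det)
```

Here the shortest vector is (2, 1) of length √5, while the determinant is 5.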
So these are two geometric properties of a lattice, and they are related in the following way by Minkowski's theorem, which says that the first minimum is upper bounded by some quantity that depends on the normalized determinant of the lattice. You can think of it as about the square root of n times this normalized determinant, up to constants. And to do cryptography with a lattice, we need some hard problems. Again, the most famous one is probably the shortest vector problem: given any basis of a lattice, you are asked to find such a non-zero shortest vector of the lattice. And secondly, kind of the inhomogeneous version of this problem is the bounded distance decoding problem. Here you're given a target that lies at distance at most rho from the lattice, and the problem asks you to recover the closest lattice point. These two problems are in general very hard to solve, and the best algorithms that we have are exponential in the dimension. So in dimension two it all seems easy, but trust me, it becomes a lot harder in much higher dimensions. But if we look at the concrete hardness of these problems, then what it actually also depends on is the gap between what this Minkowski bound says and the actual value. For example, for the shortest vector problem, if you have a first minimum that's much shorter than what the Minkowski bound says, then it's also much easier to recover this vector — that's kind of what state-of-the-art cryptanalysis currently says. And similarly for the bounded distance decoding problem: if this rho is far below the Minkowski bound, so if rho is very small, then your target lies much closer to the lattice and it's also easier to recover the closest lattice point. So the concrete hardness really depends on these gaps. Okay, so how would you do encryption in general with lattices? For the sake of this talk, let's call this the legacy approach — how it's currently been done.
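Minkowski's bound in the form λ₁ ≤ √n · det(L)^(1/n) can be sanity-checked numerically; here is a quick sketch on a made-up 2D integer lattice (a toy check, not part of the talk):

```python
# Illustration: verify lambda_1 <= sqrt(n) * det(L)^(1/n) on a small lattice.
import itertools, math

B = [[3, 1],
     [1, 4]]          # rows are basis vectors; just an example
n = 2

def length(c):
    v = [sum(ci * bi[j] for ci, bi in zip(c, B)) for j in range(n)]
    return math.sqrt(sum(x * x for x in v))

# brute-force lambda_1 over a small coefficient box (only viable for tiny n)
lam1 = min(length(c) for c in itertools.product(range(-6, 7), repeat=n) if any(c))

det = abs(B[0][0] * B[1][1] - B[0][1] * B[1][0])      # = 11
minkowski = math.sqrt(n) * det ** (1.0 / n)

assert lam1 <= minkowski
print(lam1, minkowski)
```

The gap between `lam1` and `minkowski` is exactly the quantity the talk says concrete hardness depends on.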
So what you have is: one party has some good basis of a lattice, which is secret, and publicly you give everyone some bad basis of the same lattice. By good, I mean that the vectors are somewhat short and orthogonal; by bad, I mean that the vectors are somewhat long and not so orthogonal. And note that these two bases differ from each other by a unimodular transformation. Because the shortest vector problem is hard, it's hard to go from the bad basis to the good basis. What all these assumptions like NTRU give you is a way to generate such a random lattice along with such a good basis. Okay, so what can we do with a basis? Well, we have Babai's nearest plane algorithm. This is a decoding algorithm: it solves the decoding problem up to some radius — depending on this gap, it can solve it or not. What it does is: the basis gives some kind of fundamental domain, and any target that lies in one of these boxes gets decoded to the center of that box. And you see that these fundamental domains differ very much depending on whether you have a good basis or a bad basis. So how do you use this to do encryption? Well, we have a public basis of our lattice, so we can take any message and encode it as a lattice point. To then encrypt it, it's very easy to just add a small error, and this gives you a target that you can just send over the wire. Because bounded distance decoding is hard, an attacker can't just decode it. But using the good basis, if you run Babai's decoding algorithm, you do recover the message, as long as this error is small enough. If you look at the same time at what happens with the public basis, then we see that we decode to a different message — and in higher dimensions, the probability that it decodes to the correct message is negligible. Okay, so this is basically the idea of how crypto is currently done. But there is a problem.
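The encrypt/decrypt idea just described can be sketched in a few lines. This is an illustration with made-up numbers, not a real scheme: encode the message as a lattice point, add a small error, and decode with Babai's nearest-plane algorithm using a good versus a bad basis of the same lattice:

```python
# Illustration: Babai's nearest-plane decoding with a good vs. a bad basis.
import numpy as np

def gram_schmidt(B):
    # rows of B -> orthogonal (not normalized) rows
    Bs = B.astype(float).copy()
    for i in range(B.shape[0]):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    # lattice point that Babai's algorithm decodes the target t to
    Bs = gram_schmidt(B)
    t = np.asarray(t, dtype=float).copy()
    v = np.zeros_like(t)
    for i in reversed(range(B.shape[0])):
        c = round((t @ Bs[i]) / (Bs[i] @ Bs[i]))
        t -= c * B[i]
        v += c * B[i]
    return v

good = np.array([[5., 0.], [0., 5.]])          # short, orthogonal secret basis
U = np.array([[4., 7.], [7., 12.]])            # unimodular (det = -1)
bad = U @ good                                 # long, skewed basis of the SAME lattice

msg_point = np.array([15., -10.])              # lattice point encoding the message
cipher = msg_point + np.array([0.4, -0.3])     # ciphertext = point + small error

print(babai_nearest_plane(good, cipher))       # recovers the message point
print(babai_nearest_plane(bad, cipher))        # decodes to a different lattice point
```

With the good basis the fundamental boxes are large and square, so the small error is absorbed; with the bad basis the boxes are thin and skewed, and the same target falls into the wrong box.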
So this Babai decoding that's being used — or even simpler methods — has a decoding gap that's at least a factor of square root of n away from what you might optimally be able to do. And as a result, we can break these schemes by running lattice reduction algorithms like BKZ, and you only have to solve the shortest vector problem, in the average case, in dimension about n over 2. So if your lattice has dimension n, to break the scheme you only need to solve SVP in dimension n over 2. For a concrete example, for some of these NIST schemes, if n is for example 1024, then to break it you only need to solve SVP in dimension about 450. I mean, it's still very hard, but what this implies is that we have to use lattices that are much higher dimensional, and as a result also much less efficient, both in terms of time and in space. So can we do better than this square root of n? Well, yes, we can. For example, there is the prime lattice that already appeared in the Chor–Rivest scheme in 1988. If you look at this lattice closely enough, you can see that you can do efficient decoding in it just by some simple trial division. And if you pick all the parameters right, then this gap is not just square root of n — you can get it down all the way to log of n. What this means is that, against direct attacks, this lattice might enable us to get a scheme where n is, let's say, a thousand, and to break it we need the SVP dimension beta to also be close to a thousand. You get a lot more security using the same dimension. But now we have a problem. If we want to run the same encryption strategy, we have our good lattice on one side, but it's not secret anymore: we picked some very special lattice and everyone knows it. Even if you only give some bad basis to the public, everyone knows you're using that lattice. So what have people tried before to hide this good lattice?
Well, you could think of maybe introducing a little bit of randomness, or maybe permuting the coefficients a bit. But all of these methods have also been broken, by abusing the ad hoc nature of the construction. So what can we do if we want to keep the geometry of the lattice but hide it? Well, there's only one thing you can actually do if you want to keep the geometry intact, and that is to apply an isometry to the lattice. And for lattices, the full group of isometries is not just permutations of coefficients — you can apply any orthonormal transformation. So what we propose is: you pick your good lattice and you apply a uniform orthonormal transformation to it, which basically means that you somehow rotate your lattice. And now, to do encryption, you get some message on the rotated side; you use this orthonormal transformation — which is secret, it's your secret key — to move to the good side; there you know how to decode; and then you move back. So what is the key recovery problem here, recovering this orthonormal transformation? Well, that's exactly the lattice isomorphism problem. And this problem has been studied, I think, since people began studying lattices. Given any two lattices that are isomorphic, it asks you to find the orthonormal transformation mapping one to the other. In terms of bases, what this means is that you need to find this orthonormal transformation O, but also a unimodular transformation U, such that B' = O · B · U. And kind of the presence of these two transformations, both O and U, is what seems to make the problem hard. So let's zoom in on this problem. In one way, you can view it as the lattice analogue of the very original McEliece scheme, where you have some generator matrix G of a code in which you can decode very well.
And to hide it, you apply a basis transformation S over the finite field, and you apply some arbitrary random permutation to it. And note that in the world of codes with the Hamming metric, permutations are exactly all the isometries. But if you move to the world of lattices, you get way more: all these orthonormal transformations. It's also similar to the Oil and Vinegar scheme, where you hide some map in which you can easily find a pre-image by also applying some affine invertible map. And what's important about this problem is that it has been studied quite a lot — not by cryptographers, but mostly just by mathematicians, I suppose. And the best known attack to date always requires you to solve SVP first. So you really need to find the shortest vectors first to recover this isomorphism. And that sounds like something that we want — that's why we want to use this as a foundation for lattice-based crypto. Okay, but two challenges remain. First, we have this orthonormal transformation, and this uses real values, and in practice real values are not really possible. On the other side, we have this unimodular transformation, and we need some solid way to sample it, while making sure that we don't in some sense create weak instances. There are examples where you sample this U in kind of a naive way, and then just running LLL recovers U and brings you back from B' to B. So that's not good; we have to get a solid foundation for this. And the best thing we can do is to sample this U in such a way that B' is completely independent of B.
Except, I mean, it's still isomorphic to B, but this U should be sampled in such a way that B' cannot leak any information about B — and, as a result, also not leak any information about U. Okay, so these are the two problems. Let's look at the first one, this orthonormal transformation. If we look at the literature on the lattice isomorphism problem, then this is actually already solved. Namely, what we do is: we don't work with these bases, but we move to the Gram matrix of a basis. The Gram matrix of a basis is just the matrix of all pairwise inner products between the basis vectors. And if we look at the Gram matrix of this basis B', then we see that the orthonormal transformation cancels out in the middle, and we get an equation that only depends on the Gram matrices of the two bases. So if we define Q as this Gram matrix, what we actually do is kind of move from the world of lattices to the world of positive definite quadratic forms. And how you can view this, on a high level, is that you keep all the geometry — we keep all these inner products — but we forget about the particular embedding of the lattice. So a quadratic form really represents a lattice basis up to all these rotations, which means that we can just forget about the rotations. And this allows us to fully restate the lattice isomorphism problem in the following way: given these Gram matrices Q and Q', you want to find some unimodular transformation U such that Q' = Uᵀ · Q · U. And if we now also restrict to integer quadratic forms, then this whole problem only uses integers, and we got rid of all floating-point or real numbers. Okay, now we have one problem left: how do we sample this unimodular transformation? And in fact, we kind of solve this in another way.
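The cancellation just described is easy to check numerically. Here is a small sketch (toy matrices, chosen only for illustration) showing that for B' = O·B·U the rotation O disappears from the Gram matrix, leaving the integer relation Q' = Uᵀ Q U:

```python
# Illustration: the orthonormal transformation cancels in the Gram matrix.
import numpy as np

rng = np.random.default_rng(1)

B = np.array([[2., 0., 1.],
              [1., 3., 0.],
              [0., 1., 2.]])            # columns are basis vectors (example)
U = np.array([[1, 0, 2],
              [0, 1, 3],
              [0, 0, 1]])               # unimodular: det = 1, integer inverse
O, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthonormal matrix

Bp = O @ B @ U                          # the hidden, rotated basis
Q  = B.T @ B                            # Gram matrix of B
Qp = Bp.T @ Bp                          # Gram matrix of B'

# O cancels: B'^T B' = U^T B^T O^T O B U = U^T Q U
assert np.allclose(Qp, U.T @ Q @ U)
# with integer B and U, both forms are integer matrices despite the real rotation
print(np.round(Qp).astype(int))
```

This is exactly why moving to quadratic forms removes the floating-point side of the problem: only the integer matrices Q, Q', and U remain.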
We don't focus on this U; what we focus on is how to sample this Q' — how to sample our bad public basis. So we have to sample from the equivalence class, of course, which we denote by this bracket notation. And what we did in our paper is define a distribution over this class. And of course, just defining this distribution is not enough: we also give an efficient sampler that, given any representative of this class, gives you a sample Q' from the class, together with the corresponding unimodular transformation. We have this parameter sigma, by the way — the sampler is only efficient if sigma is large enough, and sigma kind of drives how bad your public basis is. But what's important is that this Q' only depends on the class, and not on Q itself. So if you are given this Q', it cannot leak information in any way about the unimodular transformation. And automatically what this gives us is an average-case definition of the LIP problem: instead of 'given any two quadratic forms, find the unimodular transformation', we can say: sample two of these forms from the distribution, and then try to find the unimodular transformation. And given this sampler, we can actually give a zero-knowledge proof of knowledge of a unimodular transformation. I won't go into the details, but this immediately implies that you have an identification scheme. But maybe more importantly, we obtain a worst-case to average-case reduction over this class [Q]. So if we pick this sigma large enough, then we show that this average-case LIP problem — where you get your instance sampled from the distribution — is as hard as the worst case, which means that if we sample our public key from this distribution, we know there's nothing fishy going on.
Like, if people are able to break that, they are able to break any worst-case instance over this class. Okay, so these two things kind of give us a solid foundation. What is the concrete cryptanalysis that we have at the moment? Any decodable lattice gives us such an encryption scheme in the way I explained earlier, and the best known attack that we currently know is just generic lattice reduction. So what's the cost of this? Well, if we try to find the shortest vector in the lattice itself, then we know that this depends on the gap between the first minimum and the Minkowski bound that I introduced earlier. But if two lattices are isomorphic, then their duals are also isomorphic, so we can also run this attack on the dual. So you also have to make sure that this dual gap is small. And lastly, of course, you can always just try to decode directly, and that depends on the decoding gap. So let's look at a few of these lattices in which you can decode efficiently. For the prime lattice that I introduced before, we have this logarithmic gap, both for decoding and for the first minimum. But unfortunately, the dual gap is still of size square root of n. So we could use this lattice for this crypto, but it would still achieve kind of the same security as these NTRU-type schemes, by attacking the dual. But maybe more surprising is that if you just look at the simplest lattice of all — the orthogonal lattice Z^n — then this orthogonal lattice achieves exactly the same geometric gaps as these NTRU and LWE constructions. So in some sense, we can reach the same performance that we get with NTRU and LWE just using the most basic lattice that exists. And if we want to improve on this, we have, for example, the Barnes–Wall lattice, for which all these gaps are of size the fourth root of n.
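The remark that the orthogonal lattice Z^n is trivially decodable can be made concrete: the nearest point of Z^n is obtained by rounding each coordinate, which succeeds whenever every error coordinate is below 1/2 in absolute value. A minimal sketch with made-up numbers:

```python
# Illustration: decoding in the orthogonal lattice Z^n is coordinate-wise rounding.
def decode_Zn(target):
    # nearest point of Z^n: round each coordinate independently;
    # succeeds whenever every error coordinate satisfies |e_i| < 1/2
    return [round(t) for t in target]

point = [3, -1, 7, 0]                       # a lattice point of Z^4
error = [0.3, -0.4, 0.2, 0.45]              # small error, all |e_i| < 1/2
target = [p + e for p, e in zip(point, error)]

assert decode_Zn(target) == point
print(decode_Zn(target))
```

This is the "extremely simple and efficient" decoder the talk alludes to; the entire cryptographic difficulty is pushed into hiding which rotation of Z^n is being used.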
So what does that give us in terms of possible future work, if any of you want to work on this? If you build a scheme on Z^n, it has security similar to these LWE and NTRU schemes. So for example, if you take Z^1024, then to break LIP you need to solve SVP in dimension around 440 — similar to the NTRU-type schemes. But at the same time, it's extremely simple and efficient to decode in this lattice, which makes your implementation much better and simpler. And if we look at this Barnes–Wall lattice that I mentioned before: in the same dimension, to break it, you might even need to solve SVP in dimension around 780. So while this lattice has the same dimension, to break it we require way more capacity — and note that all these algorithms are exponential in this parameter beta, so almost doubling it improves the security level by a lot. But of course, the holy grail that we haven't found yet, unfortunately, is a lattice where all these gaps are polylogarithmic in the dimension. And what this would imply is that we have an encryption scheme in some dimension n where, to break it, you also need to solve SVP in that dimension n, or maybe slightly less. So to conclude: any lattice can be put in our framework and you get an identification scheme; any decodable lattice gives you an encryption scheme; and — I haven't talked about this — any Gaussian-sampleable lattice gives you a signature scheme. And this is in such a way that the security of these schemes only seems to depend on their geometric quality. Just the orthogonal lattice, the simplest of all, is already enough to match the security of LWE and NTRU. But of course, the end goal is to do even better, for example with this Barnes–Wall lattice. Thank you for listening. We have time for questions. Any questions for Wessel?
Thanks for the nice talk. Let me start with, I think, a very naive question: if your basis is given as a Gram matrix, how do you do encryption then? Okay, so the thing is that this Gram matrix still allows you to do basically everything that you normally can do with a lattice. If you move from lattices to quadratic forms, what you basically do is restrict yourself to the lattice Z^n, but instead of using the standard Euclidean inner product, you now use the inner product defined with respect to this quadratic form. So everything works exactly the same; you just use a slightly different inner product. You can also just run LLL on Gram matrices, run BKZ on Gram matrices — it all makes sense. Okay. And unless I'm taking the time of other people, maybe I can ask one more question. Yes, please. So you say that you need a Gaussian-sampleable lattice to get a signature scheme, but any lattice gives you an identification scheme. So I assume the signature scheme you're talking about doesn't use Fiat–Shamir, because using Fiat–Shamir, it seems like you can get it from the identification scheme. Oh, sure, yeah. So that's the general way, but that's of course not so efficient. And if you really have a Gaussian-sampleable lattice, then you can do a competitive signature scheme. Okay, thanks. Thank you. Are there any more questions for Wessel? Yes. Hello, thanks for the talk. So you have this average-case to worst-case reduction for LIP. This looks hard in general, right? But once you fix one type of lattice, couldn't this problem be easier for that specific class of lattices that you are considering? Yeah, so you might expect that. But in all the research so far — even just trying to solve this problem for Z^n, so you have Z^n and you get some rotation of Z^n — the best attack we know is lattice reduction.
And there has been more research on this, but so far, if you can find any lattice for which you can do anything better than lattice reduction, I would be very interested — we haven't seen one so far. Okay, thank you. I think we don't have more time for questions, so you should take it offline. The next talk is by Qipeng Liu, and this is an online talk. Hello, can you hear me? Yes, we can hear you, but we cannot see you. Can you see my screen share? Yes, we see your screen. Okay, yes. So let me test again — let's go to the next slide. Sorry, I didn't get any feedback from you. Could you see the slides? Yes, we do see your slides. Okay. So should we start? Yes, please. Okay, thanks. So today I will be talking about our work on quantum algorithms for variants of average-case lattice problems via a new technique we call filtering. This is joint work with my co-author Yilei Chen and my PhD advisor Mark Zhandry. In this talk, I will start by recalling the two lattice problems that are widely used in cryptography, namely the short integer solution and learning with errors problems. I will then introduce the variants that we consider in this work and show how our algorithm works. Although our algorithm is novel and solves various lattice problems in an interesting parameter regime, none of our results give applications to solving the shortest vector problem, which is the most fundamental lattice problem. So I would like to stress that our work does not solve SVP, but I believe it still provides interesting directions toward this ultimate goal. So let's start by recalling the precise definitions of the following two problems, and we will later compare them with the variants considered in our work. The first one is the short integer solution problem.
The problem is: given a random and wide matrix A, find a non-zero solution x such that A times x equals zero. In other words, we want to find a non-zero vector in the kernel of A, and the additional constraint is that the solution should be short — this defeats trivial Gaussian elimination, as a random solution will basically be large. Ajtai formally proposed the following problem in 1996: given a random matrix A of dimension n by m, where m is usually set to about n log n, the goal is to find a vector in the kernel of A with its L2 norm bounded by some B. And next is the famous LWE. The learning with errors problem asks you to find a secret vector s, and you are given an oracle. You can query the oracle polynomially many times, and every query returns a random linear combination of the secret plus a small amount of noise. More formally, each query gives you a_i and y_i, where y_i is the inner product of the random vector a_i with the secret s, plus some noise. And the goal is to recover the secret vector by just querying this oracle polynomially many times. There are different choices for the noise distribution, and one common choice is a discrete Gaussian. So why are these two problems important to cryptography? I probably don't need to explain too much here, because many experts in this audience know this much, much better than me. But still, I want to say a little bit, and this also motivates why we care about these variants of SIS and LWE. First of all, they give basic crypto applications like public-key encryption and digital signatures. Moreover, with the rich structure of LWE, there are many advanced applications, including fully homomorphic encryption, attribute-based encryption, and even applications to quantum protocols, like proofs of quantumness. And secondly, these problems are conjectured to be hard for quantum computers.
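To make the LWE definition concrete, here is a toy sample generator (illustrative parameters; uniform ±1 noise stands in for the discrete Gaussian, purely for simplicity):

```python
# Illustration: generating classical LWE samples y = <a, s> + e (mod q).
import random

random.seed(0)
n, m, q = 4, 8, 97
s = [random.randrange(q) for _ in range(n)]          # secret vector

samples = []
for _ in range(m):
    a = [random.randrange(q) for _ in range(n)]      # random coefficient vector
    e = random.choice([-1, 0, 1])                    # small noise (toy stand-in)
    y = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    samples.append((a, y))

# each sample satisfies y - <a, s> = e (mod q) with |e| <= 1
for a, y in samples:
    r = (y - sum(ai * si for ai, si in zip(a, s))) % q
    assert r in (0, 1, q - 1)
print(len(samples))
```

Recovering `s` from only the `samples` list, without the secret, is exactly the LWE problem.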
One example: among the candidates of the NIST post-quantum cryptography standardization, almost half of the schemes are based on these problems or their variants. And the third reason is that, for some parameter regimes, we know that their average-case hardness is implied by solving approximate SVP for all lattices with a certain approximation factor. And since many people believe that approximate SVP is hard, this gives more justification for the security. So in this work, we focus on variants of the SIS and LWE problems and try to solve them by quantum algorithms, using a novel technique called filtering. And we're not going to focus on applications of these variants, but more on their quantum solvability and the impact on SVP. After all these discussions, let's look at the definitions of these variants. Recall that in the standard SIS problem we are asked to find a small-L2-norm solution of the linear system. In this work, we consider a variant where the matrix A can be very, very wide, and the solution is short in the L-infinity norm: the L-infinity norm of x is at most (q − c)/2 for some constant c. Namely, each coordinate of the solution x is only allowed to take any value except the c largest values in Z_q. As an example, imagine q is 5; then each coordinate of x might only be allowed to be 0, 1, or 4 — and 4 is, let's say, minus 1. Our result shows that when the matrix A is very wide — with the width growing like n^c, up to factors in q and log q, which for constant c is still a polynomial — there is a quantum algorithm that can find an x with non-trivial infinity norm satisfying A times x equals zero, with the infinity norm of x bounded by (q − c)/2. The running time and the number of samples depend on n^c, and this somewhat limits the applications of the algorithm.
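To see what such a small-infinity-norm kernel solution looks like, here is a classical brute-force illustration — emphatically NOT the quantum algorithm from the talk — for a tiny instance with q = 3 and entries restricted to {−1, 0, 1}. A pigeonhole argument over binary vectors guarantees such a ternary solution exists once 2^m > q^n:

```python
# Illustration only: brute-force search for a ternary kernel vector mod 3.
import itertools, random

random.seed(1)
n, m, q = 2, 6, 3
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

solution = None
for x in itertools.product((-1, 0, 1), repeat=m):
    if any(x) and all(sum(aij * xj for aij, xj in zip(row, x)) % q == 0 for row in A):
        solution = x               # non-zero x with A x = 0 (mod 3), ||x||_inf <= 1
        break

assert solution is not None
assert max(abs(v) for v in solution) <= 1
print(solution)
```

The exhaustive search takes 3^m time; the point of the talk's quantum algorithm is to find such solutions efficiently when A is polynomially wide.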
And when c becomes large — let's say just log n — then we don't know how to design an efficient quantum algorithm for that. As a corollary, when the modulus is 3, we can find a binary solution in roughly quadratic time, and even this is interesting and was not known before. And although in some cases a solution bounded in L-infinity norm is also a solution bounded in L2 norm, our algorithm doesn't give anything meaningful for the L2 norm. The first reason is that even a single coordinate can be as large as (q − c)/2, so a single coordinate can already be very large. And the other reason is that the matrix A is super wide — proportional to n^c — and this easily blows up the L2 norm. So this is the short integer solution problem and our result for it. Next, let's look at our variant of the LWE problem. In the standard LWE problem, you are given an oracle that outputs classical samples. The problem we consider in this work is something we call LWE with quantum samples, or S|LWE⟩ for short. In our paper, we have another related notion, but due to the time constraint I will just ignore it here. So let's talk about this S|LWE⟩ problem. Instead of classical samples, you are now given quantum samples. Every sample contains a random coefficient vector a_i — exactly the same as in the standard LWE problem. But the inner product, instead of being given as a classical string, is now given as a quantum state, where the error is in superposition. And the problem is: given polynomially many such quantum samples, find the secret vector s. Here we can consider any error amplitude f, not only the discrete Gaussian — and indeed, we don't know how to solve S|LWE⟩ with a discrete Gaussian amplitude. We will talk about that later in the talk.
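A quantum sample can be simulated classically as a vector of amplitudes over Z_q. Here is a sketch (parameters and the Gaussian-shaped amplitude are made up for the demo) of the state |y⟩ = Σ_e f(e)|⟨s, a⟩ + e⟩:

```python
# Illustration: one S|LWE> sample as a classical amplitude vector over Z_q.
import math, random

random.seed(2)
n, q, sigma = 3, 17, 1.5
s = [random.randrange(q) for _ in range(n)]
a = [random.randrange(q) for _ in range(n)]
center = sum(ai * si for ai, si in zip(a, s)) % q    # <s, a> mod q

# amplitude f(e) ~ exp(-e^2 / 2 sigma^2) on the balanced error range
def f(e):
    e = min(e % q, q - e % q)          # magnitude of the balanced representative
    return math.exp(-e * e / (2 * sigma * sigma))

y = [f(v - center) for v in range(q)]  # amplitude at position v = center + e
norm = math.sqrt(sum(c * c for c in y))
y = [c / norm for c in y]              # normalize to a unit vector

assert abs(sum(c * c for c in y) - 1) < 1e-9
assert y.index(max(y)) == center       # the peak sits at <s, a> mod q
print(center)
```

If you could read off the position of the peak with certainty, Gaussian elimination over enough samples would recover `s` — which is exactly the discussion that follows in the talk.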
Note that for an S|LWE⟩ sample, if we measure the quantum state — this ket |y_i⟩ — we get an error distributed according to f squared, due to the rules of quantum mechanics. So S|LWE⟩ with amplitude f can be reduced to regular LWE with error distribution f². In other words, if we can solve LWE with error f², we can solve S|LWE⟩ with amplitude f. Our result shows that there is a polynomial-time quantum algorithm that finds the secret vector s if the DFT of f is inverse-polynomial — which I will explain shortly — and m is a sufficiently large polynomial. Here, 'the DFT of f is inverse-polynomial' basically means the following: apply the Fourier transform to the error amplitude function f, and call the result f-hat; then the minimum value of f-hat is at least inverse-polynomial. So this is our result for S|LWE⟩. Before going into the details of our algorithm, let me also remark that the quantum samples we define are different from the problem of so-called 'LWE with quantum samples' defined in the previous work by Grilo, Kerenidis, and Zijlstra. In their definition of quantum LWE samples, the error e is classical, but a is in a quantum state. In our case, a is a classical, fixed random vector, and the error is in superposition. Their variant of quantum LWE is actually easy to solve, but the ideas in their algorithm do not carry over to the quantum LWE variant we are interested in here. So they are two different problems, which require different solutions. Okay, so in the rest of the talk, I will explain how to solve S|LWE⟩; since our variant of SIS can be solved following the existing reduction — basically by applying a QFT — we will only focus on S|LWE⟩. Okay, so let's zoom in on a single quantum sample, which is this pair (a, |y⟩).
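The measurement-collapse claim — measuring the amplitude f yields the error distribution f² — can be checked with a classical simulation. A toy sketch (parameters chosen only for the demo), sampling from the Born-rule probabilities and comparing against f²:

```python
# Illustration: measuring an amplitude f gives an error distributed as f^2.
import math, random

random.seed(3)
q, sigma = 9, 1.2
f = [math.exp(-min(e, q - e) ** 2 / (2 * sigma ** 2)) for e in range(q)]
norm = math.sqrt(sum(v * v for v in f))
f = [v / norm for v in f]                   # unit-norm amplitude
p = [v * v for v in f]                      # Born rule: outcome probabilities

counts = [0] * q
N = 20000
for _ in range(N):
    counts[random.choices(range(q), weights=p)[0]] += 1

empirical = [c / N for c in counts]
# empirical measurement frequencies should be close to f^2
assert all(abs(emp - prob) < 0.02 for emp, prob in zip(empirical, p))
print(max(p))
```

This is precisely why S|LWE⟩ with amplitude f is no harder than LWE with error distribution f²: measuring each sample turns it into a classical one.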
So let's first look at what this |y⟩ is. You can view |y⟩ as a vector centered at ⟨s, a⟩: if you look at the error function f, and you flatten |y⟩ into a vector, you will see it's actually a vector centered at ⟨s, a⟩. With this a and |y⟩, if you could learn ⟨s, a⟩ from the quantum state with certainty, then by collecting enough samples you could easily recover the secret vector s by just doing Gaussian elimination: you learn a, and you learn ⟨s, a⟩ from |y⟩, so by collecting enough linearly independent samples, you just do Gaussian elimination and recover s. But of course, it won't be that easy. So for every value t, let's start by defining the notation |h_t⟩ to denote the quantum state centered at t — in other words, it's basically the state in the middle of the screen, shifted by t. The quantum sample |y⟩ that we get equals one of these |h_t⟩, and if we knew which |h_t⟩ it is, we would know that ⟨s, a⟩ must equal that t. So let's start with a very easy example, assuming all these |h_t⟩ are orthogonal. Then, given |y⟩, we can learn ⟨s, a⟩ very easily, right? This is simply because the error amplitude is known to the algorithm, and one can simply construct an efficient distinguisher for the set of states |h_t⟩, and therefore you can simply tell which |h_t⟩ this |y⟩ equals. However, this example is very idealized. The problem is that the states are very much non-orthogonal. The only case in which they are orthogonal is when the error is a fixed shift, in which case it's actually equivalent to having no errors at all. So this case will never happen.
Indeed, if we consider just |H_t⟩ and |H_{t+1}⟩, two adjacent states, then when the error function f is fairly flat, the overlap of these two quantum states is very large, and there is no way for them to be orthogonal. This is the problem we need to deal with. We therefore propose the following way to distinguish the set of non-orthogonal quantum states |H_t⟩. First we write the set of vectors down as the columns of a matrix; we know these vectors are not going to be orthogonal anyway. The next step is simply to apply Gram–Schmidt to make them orthonormal. So this is what we do: we apply normalized Gram–Schmidt, here denoted NGS, to obtain an orthonormal basis. Then we measure the state |y⟩ in this basis. Note that since we know the function f, which describes |H_t⟩ through |H_{t+q−1}⟩, and the measurement acts on only about log q qubits, it can be implemented efficiently. So we apply this measurement to the quantum state |y⟩ and get some outcome z. If z = 0, we have actually learned nothing: |y⟩ could be any of the vectors H_t, H_{t+1}, up to H_{t+q−1}, and all these vectors overlap with each other, so regardless of what |y⟩ is, it always overlaps with the first basis vector, and the outcome 0 is always possible. So z = 0 gives no information and rules nothing out. But when the outcome is 1, it is a very interesting case: we do learn something useful, namely that s·a cannot be t. Why is that? Simply because if |y⟩ equalled |H_t⟩, the outcome would always be 0. So if we see a nonzero outcome, here 1, then s·a cannot be t.
So in this case we rule out one possibility: s·a cannot be t. Similarly, if the outcome is 2, we know that s·a can be neither t nor t+1. This is simply because if |y⟩ is H_t or H_{t+1}, then it lies in the subspace spanned by the first two basis vectors, so the outcome can only be 0 or 1; seeing outcome 2 would be impossible. Finally, suppose the outcome is q−1. By the same reasoning, s·a cannot be t, cannot be t+1, cannot be t+2, and so on; the only possible value is s·a = t+q−1, that is, t−1 mod q. In that case we actually get an equality: we rule out q−1 possibilities and learn s·a with certainty. This is what we call filtering, and you can view it as a generalized unambiguous state discrimination for a set of quantum states: when the outcome is q−1 it gives you an exact answer without any error, and otherwise it just says "don't know", in which case we discard the sample. That is the general framework of filtering. So let's proceed. For a fixed t — here t is picked independently of |y⟩ — if t+q−1 ≠ s·a, the outcome will never be q−1, since |y⟩ lies in the span of the first q−1 basis vectors, so the outcome is always one of the first q−1 outcomes. If t+q−1 = s·a, we have some chance of seeing outcome q−1, but not always, since all the states overlap with each other. So the question is: what is the probability that the measurement outcome is q−1 when t−1 = s·a? We show that this probability is polynomially related to the minimum of the DFT of f, the quantity we mentioned in the theorem statement.
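The filtering measurement just described can be simulated classically in a few lines of numpy. The sketch below uses my own toy choices (small q, an arbitrary Laplace-like amplitude f; none of these come from the paper): the QR decomposition plays the role of normalized Gram–Schmidt, outcome q−1 is impossible for any state except the last one, and the last state triggers it with positive probability.

```python
import numpy as np

q = 8
# A toy Laplace-like error amplitude (any f with nonvanishing DFT works).
f = np.exp(-0.5 * np.minimum(np.arange(q), q - np.arange(q)))
f /= np.linalg.norm(f)

# Columns H_t, H_{t+1}, ..., H_{t+q-1}: cyclic shifts of f (take t = 0).
H = np.stack([np.roll(f, j) for j in range(q)], axis=1)

# Normalized Gram-Schmidt: QR gives an orthonormal basis b_0, ..., b_{q-1}
# where H_{t+j} lies in the span of b_0, ..., b_j.
Q, R = np.linalg.qr(H)

# Filtering: measuring |y> = H_{t+j} can never produce outcome q-1
# unless j = q-1 (i.e. s*a = t-1 mod q).
last = Q[:, -1]
for j in range(q - 1):
    assert abs(last @ H[:, j]) < 1e-10   # outcome q-1 has probability 0
p_success = (last @ H[:, -1]) ** 2       # outcome q-1 prob. for the good state
print(p_success > 0)                     # positive: a correct guess can be confirmed
```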
This quantity, the DFT of f, also depends on the shortest vector arising in the above Gram–Schmidt process without normalization: in the measurement we normalize, but the success probability depends on the vectors before normalization. I'm not going into too many details, but the connection to the Fourier transform comes from the fact that the matrix we apply Gram–Schmidt to is a circulant matrix. Here's an example we can look at more closely: H_0, written out in the computational basis e_0, e_1, ..., e_{q−1}, is just the vector of amplitudes of f, and by definition H_1 is simply this vector cyclically shifted by one index, and similarly H_2. So it is indeed a circulant matrix, and we take advantage of that to analyze the shortest vector after doing Gram–Schmidt on this matrix. Here are some example distributions and their DFTs. I probably don't have enough time to explain all the pictures — let me switch to the laser pointer. Basically, for the Gaussian, the algorithm depends on the DFT, which in this case is negligibly small, and on the Gram–Schmidt length, which is also negligibly small; so we do not know how to solve S|LWE⟩ with a Gaussian amplitude. But for Laplace and the bounded-uniform amplitude we get a polynomial-time algorithm: the quantities look small in the pictures, but they are indeed non-negligible, and the shortest Gram–Schmidt vector is also non-negligible; the same holds for the bounded-uniform distribution here. The last example is for the SIS problem: there f is the DFT of the SIS distribution, which is the bounded-uniform distribution, and I'm not going to expand on that here. All right, let's quickly recap the algorithm. Given a quantum sample, what do we do? We first pick a random t, hoping that s·a equals t−1. That is the first guess.
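The circulant structure mentioned above can be checked numerically. This sketch (toy numbers of my own choosing) verifies the classical fact that the eigenvalues of a circulant matrix are the DFT values of its generating vector, which is how the DFT of f enters the analysis of the Gram–Schmidt lengths.

```python
import numpy as np

q = 8
f = np.array([4.0, 2.0, 1.0, 0.5, 0.25, 0.5, 1.0, 2.0])  # a toy amplitude

# The matrix of shifted states [H_0 | H_1 | ... | H_{q-1}] is circulant.
C = np.stack([np.roll(f, j) for j in range(q)], axis=1)

# Eigenvalues of a circulant matrix = DFT of f, so |det C| = prod |f_hat(k)|.
f_hat = np.fft.fft(f)
assert np.isclose(abs(np.linalg.det(C)), np.prod(np.abs(f_hat)))

# The product of the unnormalized Gram-Schmidt lengths (|diag(R)| from QR)
# also equals |det C|, tying the GS lengths to the DFT of f.
_, R = np.linalg.qr(C)
assert np.isclose(np.prod(np.abs(np.diag(R))), abs(np.linalg.det(C)))
print("circulant/DFT identity checks pass")
```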
Then we define the measurement given by normalized Gram–Schmidt, apply it to |y⟩, and get a result z. There are two possibilities. If z is not q−1 — not the last possible outcome — we simply skip the sample, because we don't know anything with certainty. If z = q−1, we know that s·a = t−1, and we have one equality. So whenever z = q−1, we guessed correctly, and with roughly n correct guesses we can recover s by simply doing Gaussian elimination. Our work shows that a correct, confirmed guess happens with inverse-polynomial chance. We therefore conclude: there is a polynomial-time quantum algorithm that finds the secret vector if the DFT of f is non-negligible and m is a sufficiently large polynomial. All right. Let me say one more thing: a frequently asked question is why this does not solve SVP. There are basically two possible routes, one from SVP to SIS and one from SVP to LWE. The first route doesn't work because our algorithm does not give a short bound in the ℓ₂ norm and the matrix is super wide. For the second, our algorithm for S|LWE⟩ does not work for the discrete Gaussian, which blocks that route as well. But there might be other ways, which I leave as an open question and an interesting direction for future research. With that, I conclude my talk. Thank you for your time.

Thank you very much. We have time for one short question. Okay, let me finish — stop sharing. Okay, if there are no questions, then let's thank the speaker again. Thank you. And the last speaker of the session is Benjamin Wesolowski, who will talk about endomorphisms.

Thank you. So yes, I'm going to talk about orientations and the supersingular endomorphism ring problem.
And to make sure that everyone in the room has something pretty to look at while I ramble about isogenies, you're going to find a few pictures of Lyon, where, as a reminder, Eurocrypt is happening next year. First, let me introduce the objects I'm going to use in this talk: supersingular elliptic curves, isogenies, endomorphisms — what are these? First, elliptic curves. I'm sure many of you are already familiar with the notion of an elliptic curve. The technical details of what an elliptic curve is are not going to be very relevant in this talk, but just quickly: it's a geometric object, in cryptography typically defined over a finite field. It is the set of solutions of an equation of this form — do I have a pointer? oh, it's very weak — y² = x³ + ax + b. So that is an algebraic curve. And it's not just a curve, it's also a group, written typically additively. Well, these are the basic objects. Now, what is an isogeny? An isogeny is a map between two curves: it sends points from one curve, here E, to the other curve, here F. It's not just any map; it's a map that preserves certain structures — in particular, you want it to be a group homomorphism. So: elliptic curves, isogenies — and now, what are endomorphisms? Endomorphisms are a particular kind of isogeny: isogenies from a curve to itself; the only restriction is that they have the same domain and codomain. Now, if you fix a curve and look at all of its endomorphisms together, they form a structure which we call the endomorphism ring. It is a ring in the algebraic sense: you have two operations on it, an addition and a multiplication. The addition is given pointwise by the simple rule (φ+ψ)(P) = φ(P)+ψ(P), and the multiplication corresponds to composition. From this definition you can guess that the endomorphism ring may not be a commutative ring, because composition of maps is not typically a commutative operation. Thank you.
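The group structure mentioned above is concrete enough to code. Here is a small sketch of the chord-and-tangent group law over a toy finite field (the prime, curve coefficients, and point are my own arbitrary choices for illustration):

```python
# Toy sketch of the elliptic-curve group law on y^2 = x^3 + a*x + b over F_p.
p, a, b = 97, 2, 3
O = None  # the point at infinity: the identity element of the group

def add(P, Q):
    """Chord-and-tangent addition of two points."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def on_curve(P):
    if P is None: return True
    x, y = P
    return (y * y - x * x * x - a * x - b) % p == 0

P = (0, 10)                                # 10^2 = 3 mod 97, so P is on the curve
assert on_curve(P) and on_curve(add(P, P)) # closure
assert add(P, O) == P                      # identity
assert add(P, (P[0], -P[1] % p)) is None   # inverses
print("group law checks pass")
```

Scalar multiplication — repeatedly applying `add` — is exactly the copy of the integers inside the endomorphism ring discussed a moment later.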
Yeah, not much. Well, okay, I'm going to try not to use the pointer too much — maybe I can use the mouse. Do you see it? Yeah, okay, I'm going to use that. So you may wonder what the structure of the endomorphism ring is. A first, very simple observation: it is a ring that contains the ring of integers as a subring, and that copy of the integers inside the endomorphism ring is what we typically know as scalar multiplication. If you give me an integer m, there is an endomorphism that corresponds to multiplying points by m. But we know much more than that. We know that if we focus on the additive structure of the endomorphism ring, what we get is always a lattice — a lattice of dimension either two or four. It can be either, but the case that is interesting for us, and in general for isogeny-based cryptography, is the dimension-four case, which is what we call the supersingular case. We say that an elliptic curve E is supersingular if its endomorphism ring is a lattice of dimension four. From that definition naturally follows a computational question known as the endomorphism ring problem, which I'm going to write EndRing: given a supersingular elliptic curve E, can you compute its endomorphism ring? What do I mean by computing the endomorphism ring? If I give you a supersingular elliptic curve E, you already know that its endomorphism ring is a lattice of dimension four; by "compute it" I mean, can you find a basis — four endomorphisms that generate all the others? That is the endomorphism ring problem, and it plays a central role in isogeny-based cryptography.
It has recently been proved equivalent to another problem known as the isogeny path problem: given two elliptic curves, can you find an isogeny between them? This is the problem that gives isogeny-based cryptography its name: isogeny-based cryptography is based on the idea that finding these isogenies is difficult. So what does it mean to base cryptosystems on these problems? It means you have computational reductions between them. You can often show that cryptosystems called isogeny-based have a security that reduces to these problems, and reciprocally — but less often — you have reductions in the other direction, which would be security proofs proving that if these problems, EndRing and isogeny path, are indeed hard, then the cryptosystems are secure. So: reductions often in one direction, sometimes in the other. It doesn't feel like a very sharp picture; it doesn't feel like we understand the situation very well. And in fact, in practice, what happens is that we design cryptosystems and relate their security to other computational problems that feel a bit more convenient to work with: problems known as the uber isogeny problem, vectorization, the oriented Diffie–Hellman problem. So we have a lot of problems, and the picture doesn't feel very sharp. The goal of this project, and of this talk, is to get a sharper picture of the situation: what are these problems, which are equivalent, which are harder, which are easier, and which relate to which cryptosystems? To talk about that, I need to introduce the notion of an orientation, which plays a central role in many of the problems I just mentioned. As I said, the endomorphism ring is a lattice of dimension four.
Now, pick any endomorphism α of your elliptic curve that is not too simple — you don't want α to be just a scalar multiplication — and look at the ring it generates: the smallest subring of the endomorphism ring that contains α. It is written Z[α], the ring containing all the integers and α, and it is always a subring of dimension two. A subring of dimension two is, in other words, what we call a quadratic ring: a quotient of the ring of integer polynomials by a quadratic polynomial. So the endomorphism ring has dimension four, and these are dimension-two subrings; you have many of them. What you can do is fix a quadratic ring O a priori and ask: how many copies of O do you find in your endomorphism ring? Can you embed this ring inside the endomorphism ring? That is an orientation: an orientation is an injection of your quadratic ring O into the endomorphism ring. You may have many ways to do that, and each of them provides a different orientation of your elliptic curve. Now, an oriented curve is a pair of a curve together with an orientation of the curve. So it's more structured than just an elliptic curve — an elliptic curve with extra information — and it's what we call an oriented elliptic curve. And it happens that if you fix the quadratic ring O, there are only finitely many O-oriented elliptic curves. We write this finite set Ell_O(p): the set of O-oriented elliptic curves over a fixed field, of course. Okay, so why do we look at these structures, these orientations? It seems like elliptic curves were complicated enough already, and now we add more structure. Well, it's interesting because each quadratic ring comes with a finite abelian group, the ideal class group of the ring, which we write CL(O).
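The ideal class group just mentioned can be made concrete. A classical fact (Gauss) is that for an imaginary quadratic ring of discriminant D < 0, the order of CL(O) equals the number of reduced binary quadratic forms of discriminant D. Here is a toy sketch of that count — my own illustration, not from the talk; it assumes fundamental discriminants, where primitivity of the forms is automatic:

```python
def class_number(D):
    """Count reduced forms a*x^2 + b*x*y + c*y^2 with b^2 - 4ac = D:
    -a < b <= a <= c, and b >= 0 whenever a == c."""
    assert D < 0 and D % 4 in (0, 1), "quadratic discriminants are 0 or 1 mod 4"
    count = 0
    a = 1
    while 3 * a * a <= -D:          # reduced forms satisfy a <= sqrt(-D/3)
        for b in range(-a + 1, a + 1):
            if (b - D) % 2:
                continue            # b must have the same parity as D
            if (b * b - D) % (4 * a):
                continue            # c = (b^2 - D)/(4a) must be an integer
            c = (b * b - D) // (4 * a)
            if c < a or (b < 0 and a == c):
                continue            # not reduced
            count += 1
        a += 1
    return count

print(class_number(-23))   # 3: the classes of (1,1,6), (2,1,3), (2,-1,3)
print(class_number(-163))  # 1: the famous class-number-one discriminant
```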
And this finite abelian group acts on the set of O-oriented elliptic curves. What do I mean by that? I mean we have a group action: an operation, here written ⋆, that takes as input a group element a and an oriented elliptic curve E and constructs a new one, written a ⋆ E. For it to be a group action, we require two properties. Compatibility with the group operation: letting a first element a act and then another element b act is the same as taking the product and letting the product act. And identity: the trivial element of the group acts trivially. These actions are interesting because you can build cryptosystems out of them, and one example is the CSIDH cryptosystem. CSIDH is not the first cryptosystem based on this idea, but it is the first to instantiate it in the context of supersingular elliptic curves. Here's quickly how it works. It is a key exchange very similar to the Diffie–Hellman key exchange — in fact, the D and H in CSIDH stand for Diffie–Hellman. It's a key exchange between Alice and Bob. The first thing they do is establish a public setup that everyone knows about: they choose a reference oriented elliptic curve E₀. In the specific context of CSIDH they don't use an arbitrary quadratic ring but the particular ring Z[√−p]; but you can think of the protocol in more generality, with an arbitrary orientation. So what Alice does: she samples a secret group element a, lets it act on E₀, and sends the result to Bob. Bob does the same: he samples a secret element b, lets it act on E₀, and sends the result to Alice. Now Alice takes what Bob sent her and lets her own secret act on it, and Bob does the same. So this is extremely similar to Diffie–Hellman.
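The protocol skeleton above can be sketched with a deliberately worthless stand-in action — the group Z_n acting on the set Z_n by translation (my own toy, with no relation to the real isogeny action). The point is purely structural: the compatibility property is exactly what makes the two shared keys coincide.

```python
import secrets

n = 2**61 - 1   # toy modulus
E0 = 42         # the public reference "oriented curve" (just an integer here)

def act(g, E):
    """Toy group action of (Z_n, +) on Z_n by translation.
    Compatible: act(g, act(h, E)) == act(g + h, E); identity 0 acts trivially."""
    return (E + g) % n

alice_secret = secrets.randbelow(n)
bob_secret = secrets.randbelow(n)

alice_pub = act(alice_secret, E0)        # Alice sends a * E0
bob_pub = act(bob_secret, E0)            # Bob sends b * E0

shared_alice = act(alice_secret, bob_pub)  # a * (b * E0)
shared_bob = act(bob_secret, alice_pub)    # b * (a * E0)
assert shared_alice == shared_bob          # both equal (a + b) * E0
print("shared keys match")
```

Unlike this toy, the real CSIDH action is believed to be one-way: seeing E₀ and a ⋆ E₀ should not reveal a, which is exactly the vectorization problem discussed next.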
You've just replaced exponentiation by letting a group act on the elements. The main difference is that the transmitted elements are not group elements: they live in a space, the set of oriented elliptic curves, which does not have a group structure, and that makes it resistant against known quantum attacks. So is it actually secure? Okay, I should have said this earlier: the two curves Alice and Bob compute at the end actually coincide, by the properties of the group action. If someone is spying on the network, they see E₀, which is public knowledge, and the two curves that were exchanged, a ⋆ E₀ and b ⋆ E₀, and the question is: can they recover the secret? That is known as the CSIDH problem, very similar to the Diffie–Hellman problem. Okay, so how does that relate to the endomorphism ring problem? To answer this question, we need to go through another problem, known as vectorization. Vectorization is to this Diffie–Hellman-style problem what the discrete logarithm problem is to classical Diffie–Hellman. So how does it look? Here's a reminder of our version of the Diffie–Hellman problem, and the vectorization problem is the following: you are given two O-oriented elliptic curves, E and a ⋆ E, and the question is, can you find a? So are these problems equivalent? We seem to have an obvious reduction, at least from Diffie–Hellman to vectorization, just as classical Diffie–Hellman reduces to the discrete logarithm problem: if you can solve the discrete logarithm problem, you can break Diffie–Hellman. Here it is also the case, although it's a little more subtle and not so evident, because actually applying the action of an element is not always easy, which introduces technicalities. But we do have a polynomial-time reduction from the Diffie–Hellman problem to the vectorization problem.
Now, the other direction is the first result I'm going to present. We do have a reduction if we allow quantum computation: there is a quantum polynomial-time reduction from the vectorization problem to the Diffie–Hellman problem, and we can prove it assuming the generalized Riemann hypothesis. This relates to previous work: it was previously known under sub-exponential reductions — sub-exponential instead of polynomial — and assuming some heuristic assumptions. We have now replaced sub-exponential by polynomial, and replaced the heuristics by the generalized Riemann hypothesis, which is probably easier to believe. So I'm going to draw a big diagram with the problems and the equivalences I'll present. Here is the beginning of the diagram: I've introduced these two problems, the vectorization problem and the Diffie–Hellman problem. We care about this Diffie–Hellman problem because it is essentially equivalent to breaking CSIDH, at least when instantiated with the particular quadratic order used in CSIDH. And there is a proof that you have a classical reduction from Diffie–Hellman to vectorization, and a quantum reduction in the other direction — so they are essentially quantumly equivalent. Now, how does all this relate to the endomorphism ring problem? I've spent the first half of the talk explaining what the endomorphism ring problem is and trying to convince you that it is the fundamental problem we care about in isogeny-based cryptography: we want to build cryptography based on the assumption that EndRing is hard. So what is the connection? It doesn't look like these vectorization problems are very similar to finding endomorphisms. To show the connection between vectorization and the endomorphism ring problem, I need to introduce a slight variant of the endomorphism ring problem, called the O-EndRing problem.
It is the same problem, but as input I don't just give you an elliptic curve, I give you an oriented elliptic curve. A priori that makes the problem a little simpler: I ask for the same thing but give you more information — the orientation is additional information. As a reminder, here is the vectorization problem, and it's hard to tell whether there is a link between the two: one is about finding a basis of a lattice, the other about inverting a group action. And we can prove — that's the second main result — that they are actually equivalent, again assuming the generalized Riemann hypothesis, but most importantly assuming that the discriminant of the quadratic order has a known factorization. Two consequences follow. First, without this restriction on the factorization, we still get a quantum polynomial-time reduction. Second, classically it's not a problem either, because typically this problem is instantiated in cryptosystems where the discriminant of the order used is essentially a prime number; it has a trivial factorization, so we don't need to factor. Again, that relates to some previous work done in the setting of CSIDH — the particular case where O is Z[√−p] — and again that was only known in sub-exponential time; the sub-exponential again becomes polynomial. So we can extend our diagram a little. I've introduced this variant of the endomorphism ring problem and proved that the two problems are equivalent. Now, it relates to the classic endomorphism ring problem through an obvious reduction: as I said, the oriented O-EndRing problem is a little simpler because I ask for the same thing but give you more as input, so you have a trivial reduction. And do we actually have an equivalence? Well, in general, probably not.
Actually, in general we have better algorithms for vectorization — and hence for the oriented EndRing problem — than for the classic EndRing problem, so in general we cannot go back. But in some settings we can; in particular, in the setting of CSIDH, we can go back. So concatenating all these arrows, we see that breaking CSIDH is equivalent to the classic EndRing problem, with one of the steps being quantum. Is it really the classical EndRing problem? Because here I wrote EndRing for O-orientable curves — is that much of a restriction? It is in general a restriction, and it is a little bit of one here: it means we restrict ourselves to curves defined over F_p, the finite field with a prime number of elements. So this shows that breaking CSIDH is equivalent to the endomorphism ring problem for curves defined over F_p, as opposed to, in general, an extension of F_p. This problem is equivalent to the isogeny path problem, and it relates to a few isogeny-based cryptosystems — clearly it relates to breaking CSIDH, and to more of them, which I'm not going to list — but not all of the schemes we know relate to this classic EndRing problem. Is there a way to show that solving the endomorphism ring problem breaks all of isogeny-based cryptography — a problem that we know all schemes reduce to? Well, there is essentially such a problem: the uber isogeny problem, which was introduced precisely to show that if you can break it, then you can break essentially all of isogeny-based cryptography. (This is Paul Bocuse, by the way.) This problem is very similar to the vectorization problem, but at the same time much harder. We ask for the same thing, essentially: I give you two curves, and I ask, can you find a group element that sends the first curve to the second one?
But in vectorization, I give you two oriented curves — the curves together with their orientations. Here I give you only one oriented curve, and the second one is merely orientable: you know an orientation exists, but I don't give it to you. That should make the problem harder. This problem was introduced in that paper with many authors, where they show that SIDH, CSIDH, OSIDH, Séta — all these cryptosystems — reduce to the uber isogeny problem. So is it equivalent to the endomorphism ring problem? Well, let's see. We can introduce another variant of the EndRing problem, called the O-EndRing* problem. Again, very similar: this time I give you an orientable curve as input — a curve without its orientation — and I ask you to compute its endomorphism ring, and in addition, to find the missing orientation, the one I promised exists but didn't want to give you. And that's the third result I want to present today: these problems are also equivalent, with the very same restrictions we had for the previous two — we assume the generalized Riemann hypothesis, and we assume we are given the factorization of the discriminant. Okay, so we've introduced this problem, which is interesting because it relates to many cryptosystems, and this variant of the endomorphism ring problem is proved equivalent to it under classical polynomial-time reductions. And we have an obvious reduction from the EndRing problem to O-EndRing*, because O-EndRing* is really the same problem except I also ask you to find an orientation — it is at least as hard, since I ask for more. So, rearranging these things and cleaning up — oh yes, one more remark: sometimes you can go in the other direction. Sometimes O-EndRing* is not actually harder: when the discriminant of the order is not too big, you have an equivalence between these problems.
What I wanted to say: rearranging things and cleaning up a little, we can summarize the results as follows. You have three groups of problems: problems about computing endomorphism rings, problems about inverting a group action, and problems that are actually about breaking cryptosystems. And we prove a bunch of equivalences between them. All the arrows in this diagram are new from this work, except for this arrow here. I have one last picture of Lyon to show, but maybe it's better to just leave the summary up, in case you have any questions about it. Thank you very much.

Do we have time for questions? Yes.

So, in the Delfs–Galbraith algorithm, they solve the isogeny path problem for curves over F_{p²} by reducing to curves over F_p and solving it there. I was trying to fit this into this complexity graph, but I couldn't really grasp where it would fit.

Okay. So Delfs–Galbraith, in a sense, solves the isogeny path problem. What it does: you are given two supersingular elliptic curves, you find paths from each of them to curves defined over F_p, and then you find a path between these two, knowing that you work in a smaller graph because there are fewer curves defined over F_p. In the end, what you get is a path from your first curve to your second curve. So that's a solution of isogeny path, and we know that this is equivalent to the endomorphism ring problem — so that essentially gives you the complexity of computing endomorphism rings. Or was that your question?

Yeah, I think so, I guess. Thank you very much.

Hi, thanks for the great talk. I was wondering if you could comment on the tightness of your reductions — do you give some sort of security guarantee for CSIDH with presently chosen parameters?

No: all the reductions are polynomial time, with unspecified big-O-of-one constants in the exponents. I'm sure you can look at the proofs and get tighter results.
Any more questions for Benjamin? If not, then let's thank Benjamin again. And that was the last session today; we meet at the conference banquet at 7pm.