All right, thanks. Let me start by comparing isogeny-based public-key encryption to Diffie-Hellman. Both of these give us non-interactive key exchange. The setup here is that Alice generates some public key a times G, where G is a standard element that everybody knows in the space of public keys. Alice has her own secret a and multiplies that by G to get aG, and Bob does the same thing to make his public key bG from his own independently generated secret b. That lets them both agree on a shared secret, which you can write in several ways: Alice takes Bob's public key bG and multiplies it by her secret a to get a times (b times G), which is the same as (a times b) times G, which is the same as (b times a) times G, which is the same as what Bob gets by multiplying his secret b by Alice's public key aG. So they've agreed on the same element of the public-key space, a shared secret, which they can then hash and use as, say, an AES-GCM key to authenticate and encrypt all their communications. There are lots of names for the different mathematical equations being used here, like a times b equals b times a: that's commutativity of the multiplication in the semigroup of secret keys, which is maybe a group. There's also associativity at the beginning. Well, sometimes mathematicians have so much fun coming up with names that they have multiple names for the same thing: when you're multiplying secret keys by public keys to get public keys, that's not called multiplication with associativity anymore, it's called an action, which just means that multiplying two secret keys together and then applying the result to a public key is the same as applying one secret key to the public key first and then applying the other secret key.

All right, this gives us a shared secret between Alice and Bob, which traditionally we do, maybe not with the original Diffie-Hellman protocol, but with elliptic-curve Diffie-Hellman, going back to Miller and Koblitz in the mid-80s. Both of these, the original Diffie-Hellman with multiplicative groups and elliptic-curve Diffie-Hellman, have some polynomial cost in lambda to achieve a security level of 2 to the lambda. Of course, to figure out what exactly that polynomial is, you have to start by asking: what are the attacks? How can an attacker figure out something about the shared secret starting from the public information aG and bG? Once you know how fast the attacks are, which differs between the two systems, this is why we like elliptic curves, the attacks are a lot harder there, then you have to figure out the constructive side: how fast are the algorithms to compute the action, to multiply a times G, to multiply a times bG, et cetera. If the attacks get better than we know, then the amount of key material you need, the key size, might go up dramatically, or the system could even be totally broken, which is why the word pre-quantum is important here. The pre-quantum security level 2 to the lambda gives us some polynomial cost in lambda, but once you switch to post-quantum cryptography, Diffie-Hellman and elliptic-curve Diffie-Hellman are broken.
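To make that pattern concrete, here is a minimal sketch of the non-interactive key exchange, instantiated purely for illustration with classic multiplicative-group Diffie-Hellman, where "multiplying a secret by G" becomes exponentiation of a generator; the modulus and generator are toy values of my own choosing, not parameters from the talk.

```python
import hashlib
import secrets

p = 2**127 - 1          # toy modulus, NOT a safe cryptographic choice
g = 3                   # toy generator, standing in for the public element G

a = secrets.randbelow(p - 2) + 1      # Alice's secret
b = secrets.randbelow(p - 2) + 1      # Bob's secret
A = pow(g, a, p)                      # Alice's public key, "a times G"
B = pow(g, b, p)                      # Bob's public key, "b times G"

# Each side applies its own secret to the other's public key; commutativity
# of the secrets makes the two results agree.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# Hash the shared group element down to a symmetric key, e.g. for AES-GCM.
key = hashlib.sha256(str(shared_alice).encode()).digest()
```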
If you look at what Shor's algorithm needs to break Diffie-Hellman and elliptic-curve Diffie-Hellman, it's basically using exactly the features that were used up at the top, the commutativity of multiplying the secrets and the associativity, but it's actually using a little bit more algebraically: it's using that you can add public keys. What you do in Shor's algorithm is set up some lattice of pairs (x, y) such that x times G plus y times (a times G) is zero; that's a two-dimensional lattice, and Shor's algorithm finds you elements of it. In what I just said there was a plus, adding elements of the public-key space, and taking that addition away is exactly the wedge that isogeny-based crypto uses to make a post-quantum version of the same thing.

At first glance, the story is just like the elliptic-curve Diffie-Hellman story: there are some constructions due to some people, with details where you analyze the attack costs. The CRS system goes back to Couveignes and to Rostovtsev and Stolbunov, and then, much more recently, there's the CSIDH system from Asiacrypt 2018, which is from some of the authors of this paper plus more people, Castryck and Renes. Maybe I should say that everything else I'm going to say is the same between CRS and CSIDH except for the details of the cost, where CSIDH is, I don't know, a thousand or a million times faster than CRS, so you definitely want to use CSIDH instead of CRS; but aside from that, the basic picture of what I'm talking about is the same. Both of these, just like Diffie-Hellman and elliptic-curve Diffie-Hellman, have some polynomial cost to resist all known pre-quantum attacks, and the advantage is that post-quantum they also have some polynomial cost to resist all known attacks that take time less than 2 to the lambda. All right, so this is one reason people are interested in isogeny-based cryptography: it gives us non-interactive key exchange just like the original Diffie-Hellman, but apparently surviving quantum computers. There's another reason people are interested in isogeny-based crypto, even if they don't want non-interactive key exchange, which is kind of hard to achieve post-quantum; we don't have very many proposals for that, but we have a lot more proposals for just doing public-key encryption, and within that space of public-key encryption systems there's a lot of interest in how small the public keys can be and, in general, how low the cost can be. Let me emphasize: you don't need to do the work of coming up with non-interactive key exchange in order to have a public-key encryption system.
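To spell out the relation Shor's algorithm exploits, here is a short sketch in my own notation, assuming the public-key space is a cyclic group of known order N written additively; this is only an illustration of the structure, not the algorithm itself.

```latex
% Pairs (x, y) with x*G + y*(a*G) = 0 form a two-dimensional lattice:
\[
  L \;=\; \{\, (x, y) \in \mathbb{Z}^2 : x\,G + y\,(a\,G) = 0 \,\}
    \;=\; \{\, (x, y) \in \mathbb{Z}^2 : x + a\,y \equiv 0 \pmod{N} \,\},
\]
% generated, for example, by (N, 0) and (-a, 1). Shor-style period finding
% returns elements of L, and from a basis of L one reads off the secret a mod N.
```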
For instance, take the best-known example of isogeny-based crypto, SIDH, Supersingular Isogeny Diffie-Hellman. The name is kind of misleading because it doesn't do what Diffie-Hellman does; it's just a public-key encryption system, you need communication to agree on a key. It's now a NIST submission for the post-quantum competition called SIKE, Supersingular Isogeny Key Encapsulation. This is a perfectly reasonable public-key encryption system, well known for having small public keys, and let me compare it to various other options for the public-key sizes you might get, getting a little more quantitative than just polynomial in lambda. Maybe I'll start down at the bottom left here. Pre-quantum, suppose you want 2-to-the-lambda security, where security here means you want every attack to take 2 to the lambda operations; in this talk I'm not going to be precise about what exactly an operation is, and I'm also going to ignore things like communication costs, so let's just say 2 to the lambda simple operations, and you have to look at the attack papers for the details. Elliptic-curve Diffie-Hellman: if you want security level 2 to the lambda, you need something like 2 to the 2-lambda possible keys, because there are square-root attacks; the attacks against elliptic-curve Diffie-Hellman exploit the well-known worst-case-to-average-case reduction for elliptic-curve Diffie-Hellman, and they do that in a way that reduces 2 to the 2-lambda keys down to only 2-to-the-lambda security. And then to transmit one of 2 to the 2-lambda keys, you need about 2-lambda bits of communication.

Unfortunately, post-quantum that's broken by Shor's algorithm, but we have lots more options, and that's what gives us SIKE and CSIDH; again, CRS is very much like CSIDH except for being quite a few orders of magnitude slower, so let me just focus on CSIDH. CSIDH keys are twice as big as traditional elliptic-curve Diffie-Hellman keys for the same security level: instead of, say, a 256-bit, 32-byte key, you need, say, a 64-byte key. SIKE is quite a bit bigger. Now, here there are two variations. You can do compressed SIKE, which is only, what is that, seven times bigger than elliptic-curve Diffie-Hellman, or seven halves, 3.5 times bigger than CSIDH; or the uncompressed version, which I don't think is going to last long-term because the compressed version is getting faster and faster. If you look post-quantum, then, if you just count the attack costs naively, SIKE and compressed SIKE get somewhat bigger. Elliptic-curve Diffie-Hellman, again, is completely broken. And then, what about CSIDH? That's somewhere in the middle, because there's a sub-exponential-time attack against CRS and CSIDH. This goes back to 2010: Childs, Jao, and Soukharev used Kuperberg's sub-exponential-time hidden-shift algorithm, or Regev's follow-up algorithm, or there's another follow-up algorithm by Kuperberg, same Kuperberg, but better than the first paper, actually also better than the second paper. I don't know how many people were at the workshop over the weekend, but there you heard that Kuperberg 2.0 is better than Kuperberg 1.0 and better than Regev. Anyway, that's a sub-exponential-time attack against CRS and CSIDH, which means you need super-linear key sizes in order to resist it. And now you can ask natural questions about what exactly that super-linear growth is.
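As a quick tally of those size comparisons, here is a small illustrative script of my own; the byte counts are just the ballpark ratios quoted in the talk at a pre-quantum 2-to-the-128 target, not normative parameter sets.

```python
# Rough public-key sizes at a pre-quantum ~2^128 security target,
# using the ballpark ratios quoted in the talk.
key_bytes = {
    "elliptic-curve Diffie-Hellman": 32,   # 2*lambda bits for lambda = 128
    "CSIDH (e.g. CSIDH-512)": 64,          # about twice ECDH
    "SIKE, compressed": 7 * 32,            # roughly 7x ECDH, 3.5x CSIDH
}
for name, size in key_bytes.items():
    print(f"{name:32s} ~{size} bytes")
```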
When does it cross beyond 2-lambda? I mean, it's super-linear, so it has to be bigger than 2-lambda at some point. How big does lambda have to be for that? Is that of cryptographic interest? And so this starts my list of questions. If you have a particular value of lambda, say you want 2 to the 64 quantum operations to break your system, or even more, 2 to the 96 or 2 to the 128 quantum operations to break your system, what key size do you need for CSIDH to achieve that security level? This breaks up into lots of separate questions, because there are actually a bunch of different attacks, and these sub-exponential attacks in particular involve quite a few layers. They involve, first of all, some number of queries: if you look at how Kuperberg's algorithm finds a hidden shift, it's not as powerful as Shor's algorithm, it's not as fast; it has some rapidly increasing number of queries, but that number doesn't increase exponentially, it's just sub-exponential. And what exactly is that number of queries? Well, even knowing that Kuperberg's 2011 paper is better than the previous ones doesn't help you figure out a concrete answer, because he just does asymptotics, and it takes a lot of work to figure out what exactly the costs are. Another question: when you start digging down into the lower levels of the algorithm, you see that it's not actually convenient, you don't get the fastest attack under reasonable assumptions, by generating a uniform random element of the group of secret keys. It's much faster to generate something which is not exactly uniform, but what does that do to Kuperberg's algorithm? It's not clear. Similarly, the computations get significantly faster if you allow some errors. You can do everything error-free, but allowing some errors makes everything faster; I'll get a little more into that in a moment. And what effect does that have on Kuperberg's algorithm? Again, not clear.

Neither of those questions is what this paper is about. This paper is about the third item on my list, which is: how expensive is each of the queries? These attack algorithms are doing a bunch of queries, where a query means evaluating the isogeny action on a superposition of all the possible secret keys. How expensive are those queries? How long does it take to compute an isogeny on a quantum computer in superposition over all of the secret keys? And this is something where, well, we have a 56-page paper, which I'm not going to try to completely tell you about in the 20 minutes that I have, so look online at quantum.isogeny.org to see all the details. I will tell you a little bit about the results and try to get you interested in reading the paper. Another question that's not addressed in this paper is what happens when you look at communication costs, or in general at how much hardware you're using and how expensive it really is to run that hardware for the amount of time you need, which for a quantum computer means you're constantly doing error correction on all of your qubits. How expensive is that actual quantum computation? That'll be another 50 pages to figure out in detail. Maybe the biggest takeaway message that I want you to remember, before I get into the details of what kind of results we get, is that you can check these quantum algorithms.
There are a lot of mistakes in the algorithms literature generally, and maybe they pile up especially in the quantum algorithms literature, because people don't have to actually verify the results: people don't have a quantum computer, and so they say, sorry, I didn't run the algorithm, I've just proven it's correct, or maybe not even that, it just seems like it's correct, and sorry, I can't try it out. I think that's a cheap way out. There are actually a bunch of quantum algorithms that we can check very confidently on our current computers. So what I want you to remember is that you can verify the cost of a quantum computation using your laptop. For instance, for the particular quantum computation that this paper is about, we have software; go again to quantum.isogeny.org and you can just download it. What exactly is it doing? Well, it's running a quantum isogeny computation. No, forget the quantum: it's running an isogeny computation using a sequence of bit operations, ANDs and ORs and XORs and NOTs, and it also counts how many nonlinear and how many linear operations there are. People usually split these because the costs are different: in traditional hardware the linear operations are actually a little more expensive, in quantum hardware the nonlinear operations are much more expensive. But anyway, all of that is tallied automatically by the software.

And then what does this have to do with a quantum computation? Well, there are some standard translations which say: if you can do your computation with bit operations, if you have a directed acyclic graph of bit operations, and you want to do a reversible computation, that constrains your bit operations; you can no longer create bits and erase bits. In a reversible computation you can do things like take three bits x, y, z and replace them by x, y, z XOR (x AND y), so you overwrite z with z XOR the AND of x and y. That's a reversible operation, called a Toffoli operation, and if you do it again you get back to the original input. There are a few operations like this, and those are the basic operations you're allowed to use in a reversible circuit instead of AND, OR, XOR, and NOT. There's a completely generic way to take b bit operations and turn them into at most 2b reversible bit operations, at the cost of using a whole bunch of space; I'll say a bit more about that in a moment. Once you have reversible operations, you automatically get a bunch of quantum operations to do the same computation on a superposition of inputs, and the factor there is at most 7, for each Toffoli gate turning into what are called T-gates in quantum computation. You can also look at the cost of the linear operations, and you get some smaller factor for those. And you can try to optimize these conversions: you don't always need 7 times as many T-gates as Toffoli gates, you don't always need twice as many Toffolis as ANDs and ORs. When you try to optimize these, you might save, well, a couple of bits in the final runtime exponent, but we're much more concerned with the big picture of 40 bits or 80 bits or 128 bits of security. So, okay, there's some limited factor between the number of bit operations you need and the number of quantum gates you need. There's more you have to verify than just the operation counts; maybe you care whether the computation actually works, whether it computes isogenies.
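Here is a minimal sketch, in plain Python, of the Toffoli operation and of the generic cost translation just described; the factors 2 and 7 are the worst-case conversion factors quoted in the talk, and the real conversions in the paper are more refined than this.

```python
def toffoli(x: int, y: int, z: int) -> tuple[int, int, int]:
    """Reversible gate: pass x and y through, overwrite z with z XOR (x AND y)."""
    return x, y, z ^ (x & y)

# Applying the gate twice returns the original input, so it is its own inverse.
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)

def generic_quantum_cost(bit_ops: float) -> dict[str, float]:
    """Worst-case translation quoted in the talk: at most 2 reversible
    operations per bit operation, at most 7 T-gates per Toffoli."""
    toffolis = 2 * bit_ops
    return {"toffoli_gates": toffolis, "t_gates": 7 * toffolis}
```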
And that's something where we have some Sage scripts that we compare against the output of these bit operations. It's really helpful to have your computer running these computations so you can just try things and check. We also allow some errors to speed up the computation: we limit the computation time to the point where we have a noticeable number of errors. That's something where we had to do some generating-function computations to figure out what the error rates should be, and we checked that by pumping up the error rate to something we can measure, trying out the computation and seeing how often it works.

All right, a case study to finish things off in the last four minutes. Let me focus on CSIDH-512, so that's 512 bits, 64-byte public keys. How secure is this? Pre-quantum, it's reasonably clear that, with all the algorithms we know, it's similar in security to 256-bit elliptic curves. What about this sub-exponential-time attack? Well, there are some assumptions here about what the group elements are, that's this minus 5 through 5, 74 integers between minus 5 and 5, and about what error rate we're tolerating, where 2 to the minus 32 might be good enough for running Kuperberg's algorithm, probably is good enough for running Kuperberg's algorithm. Under these assumptions, if you count the number of nonlinear bit operations: there's another paper which independently had an algorithm for this, and we counted the number of bit operations for that; it's about 2 to the 50 bit operations, which we optimized a ton and got down to 2 to the 40. And then, with even a little more work, our best result in the paper is 0.7 times 2 to the 40 nonlinear bit operations to compute one CSIDH-512 group action, under these assumptions about how many errors we're tolerating and how big the inputs are. All right, if you translate that from bit operations on your laptop to what a quantum computer is going to take, then there's that factor of 2 and factor of 7 from the previous slide, using a whole bunch of qubits. I said that if you have b bit operations, you end up needing about b bits, which means if you start with 2 to the 40 bit operations you have about 2 to the 40 bits, which turn into 2 to the 40 qubits. If you think that's a crazy number of qubits, you'll probably be happier with this: at the expense of a factor of 4 in the runtime, you can get the number of qubits down to 2 to the 20, which is much more reasonable. And actually, even if you can afford 2 to the 40 qubits, it's much more cost-effective to use a bunch of separate 2-to-the-20-qubit machines running the four-times-as-many-operations computation. If you care about the total gates, which some of the papers on this look at, then in quantum computations the T-gates are about 100 times more expensive than the linear gates, but we counted the total as well, and it's a little more than the 2 to the 45.3 there, it goes up to 2 to the 46.9. All right, if you want variations on all these numbers, the paper gives lots of other examples, and there's all the software so you can try different sizes yourself. What does this mean for the complete attack against CSIDH-512? This is my last slide. Well, one issue is how big those inputs have to be; there are issues of how you can get a uniform group element, it's hard to do that inside the attack, and at least it doesn't seem that the attack works if you just use the minus 5 through 5.
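Just to show how those numbers chain together, here is the back-of-the-envelope arithmetic; this is my own reconstruction, using the generic factor-2 and factor-7 translations from the previous slide plus the factor-4 space/time trade-off, and it reproduces only the ballpark exponent, not the paper's optimized counts.

```python
from math import log2

nonlinear_bit_ops = 0.7 * 2**40      # best count reported in the talk
toffolis = 2 * nonlinear_bit_ops     # generic reversible conversion (factor 2)
t_gates = 7 * toffolis               # generic T-gate conversion (factor 7)
t_gates_small_memory = 4 * t_gates   # factor 4 to run on ~2^20 qubits instead of ~2^40

print(f"~2^{log2(t_gates_small_memory):.1f} T-gates")   # about 2^45.3
```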
The reason we chose the minus 5 through 5 as an example is that that's what the CSIDH user normally does, but the attacker doesn't seem to get a working Kuperberg algorithm with that. There's another paper, by Bonnetain and Schrottenloher, which claims that you get about a factor-of-4 slowdown from dealing with this. They also claim that you need about 2 to the 32.5 queries, using about a billion or two billion qubits, in order to do the Kuperberg layer of the computation. So 2 to the 32.5, times 2 squared, times the 2 to the 46.9 from the previous slide gives you 2 to the 81.4 total gates, assuming that those two claims are correct; maybe they're overly optimistic, or maybe there are faster algorithms than what they were talking about. This paper is just focusing on how expensive each query is, and then you also have to figure out the number of queries, et cetera. If you count communication costs, well, this attack is still using a huge number of qubits, and that's definitely going to make everything more expensive, because you have to be error-correcting all those qubits, but we did not figure that out in this paper. Finally, let me comment: in the Bonnetain–Schrottenloher paper they say that under the same assumptions it's actually only 2 to the 71 gates, which comes from their being about a thousand times too optimistic about the cost of doing the queries; we identify where the basic points of optimism are, and say that this 2 to the 71 is not achievable with any algorithm we know. Of course, if you come up with something better, then maybe you can get the 2 to the 81 down somewhat below 2 to the 80, or try to get even below 2 to the 70; maybe it's possible. And then, with your quantum computer and around 2 to the 70 qubit operations on 2 to the 31 or maybe even fewer qubits, you can break the 64-byte-public-key CSIDH-512. I hope I've gotten you interested in the paper; if so, look at quantum.isogeny.org and take a look at the details. That's it, thank you for your attention. Any questions?

So, you mentioned that you didn't do the fault-tolerant analysis and that you'd need 50 pages for that. Actually, you don't need that much. We did this for Shor and so on, so it's pretty straightforward if you have the gate counts, and I basically did it now for you, so I can tell you some numbers.

Let me be clear about what the hard part is. I agree that the costs are reasonably well understood, for instance for the surface code, if you know what your error rate is at the lowest level. The problem is all the higher levels of the computation: we have not looked at what depth we need for any of these computations. The isogeny computations have some chunks where we can parallelize and some chunks where it seems we need to do things serially, and doing things serially means that, well, you can figure out how long you have to run with a given number of qubits, but it's much more than you want. If we can find any sort of parallelization, it makes things much more effective when you look at the quantum volume, the area times the time of your computation. I agree that at the lowest level things are increasingly well understood, but at the higher levels we did not try to optimize the depth of the isogeny computation for a reversible, or equivalently quantum, computation, and that would be necessary to really understand what the cost is.
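For reference, the arithmetic behind that 2 to the 81.4 figure, assuming the Bonnetain–Schrottenloher claims as stated; this just reproduces the exponent addition quoted in the talk, without endorsing the inputs.

```python
log_queries = 32.5    # claimed queries for the Kuperberg layer
log_slowdown = 2.0    # claimed factor ~4 slowdown from non-uniform sampling
log_per_query = 46.9  # total gates per query from this paper

print(f"2^{log_queries + log_slowdown + log_per_query:.1f} total gates")  # 2^81.4
```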
If you just do something naive and base it on what's understood at the low levels then I think you'll be overestimating the security. Thanks. Any other questions? Okay, well, let's thank the speaker again. Thank you.