Hi everyone, and thanks to Philip, Alina and Michael for organizing this nice online seminar. Feel free to interrupt with any questions. I should be able to finish with time to spare, no worries. So I'd like to tell you today about efficient quantum factoring algorithms. Let me first remind you, I'm not sure if it's needed for this audience, what factoring integers is all about. If you have a number like 15, what you want to know is that it's three times five; and then you know that 21 is three times seven, 55 is five times 11. But as numbers become bigger, it gets more challenging. Already for 1843, maybe some of you can guess or can figure it out, and it would take a few seconds, but it's already a bit more challenging, right? 1843 is, I'll give you a second, 97 times 19. What about 8051? Okay, that's even trickier. And there's actually a nice story about 8051. Let me show you the story. Carl Pomerance shared this experience he had from a high school math contest. They asked him to factor 8051. Well, he was fairly good at arithmetic, and he was sure he could easily do it by trial division; you only need to try dividing up to about 90. So he thought, okay, what's the big deal? But he wanted to be clever, so he tried to figure out how to do it more cleverly, without just trying to divide up to 90. He spent a couple of minutes looking for the clever way, then started getting worried that he was wasting too much time. He did waste too much time, and he missed the problem. This fun story was probably quite traumatic for him. Later, he figured out how to do it, and the trick is this. You write 8051 as 8100 minus 49. And 8100 is the square of 90, 49 is the square of seven. So you have 90 squared minus seven squared, and that's just 90 minus seven times 90 plus seven. So it's 83 times 97.
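This difference-of-squares trick is the general idea behind Fermat's factoring method, and for a small example like 8051 it's easy to check on a machine. Here's a minimal sketch (the function name `fermat_factor` is just my label, not anything from the talk):

```python
from math import isqrt

def fermat_factor(n):
    # Pomerance's trick: look for n = x^2 - y^2 = (x - y)(x + y).
    # Start at x = ceil(sqrt(n)) and increase x until x^2 - n is a square.
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(8051))  # (83, 97), since 8051 = 90^2 - 7^2
```

For 8051 this succeeds on the very first candidate, x = 90, which is exactly why the contest problem had such a slick solution.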
And that trick is actually what later led him to develop very fast factoring algorithms like the quadratic sieve, and it's related to other algorithms too, including the one we'll talk about today and Shor's algorithm. It's a similar idea: what's basically happening here is that we're finding a non-trivial square root of the identity modulo 8051, in some sense. You'll see it coming up later. This number, 8051, we'll get back to it. This story is from a highly recommended, prize-winning article that Carl Pomerance wrote in the Notices of the AMS, so I highly recommend reading that. Okay. Moving on. Factoring is a hard problem; computers have trouble factoring numbers. The current record is 250 decimal digits, which is 829 binary digits. It took 2,700 computer years. A group did this a few years ago with lots of computation power, and that's currently the world record for factoring difficult numbers. The number here is an RSA number, a product of two big primes. So it seems to be difficult. At least it seems so; it's not known to be NP-hard. The best known algorithm, the number field sieve, is a beautiful algorithm. It runs in time exponential in the cube root of the number of digits. So it's highly nontrivial, and the fact that something like that exists is amazing. But it's still exponential in n to the one third, so at some point it still hits a wall, and that's around these numbers of 250 digits. And this is an extremely important problem, right? We use it for essentially every form of secure communication, every form of encryption and signatures. Those things critically rely on factoring, on the fact that it's hard to factor integers.
So it was a big shock and a big surprise when in 1994, Peter Shor, shown here, found that factoring is actually easy. It's easy for quantum computers. He found a quantum algorithm that can very quickly, very easily factor integers. And this was a big shock, because it's saying that all the security, everything we use to encrypt, to send emails, to connect to our bank, everything can be broken the minute we have quantum computers. So is it really the end of the world because of Shor's algorithm? Well, not so fast. Luckily, starting last summer, there's a transition happening where people and networks and browsers and websites transition to post-quantum cryptography. Post-quantum means cryptography that's secure against quantum computers. It's not based at all on things like factoring integers; it's based on things like the geometry of lattices in high dimension. So that's something that's already started. There's another reason why maybe you're not terribly worried, which is that quantum computers do not exist yet. At least big enough computers don't exist, but there's constant progress. Early on, in 2017, IBM had some machine like this, and a few years later another one from Google, and another one. Basically, now it seems like every week there's a bigger and better quantum computer. Whether they'll be big enough to actually factor numbers, I don't know. I can show you a nice figure that I took from Sam Jaques. I actually don't fully know what's going on in this figure; it looks very interesting, but don't ask me too many questions about the details. But basically what he's showing is that this is kind of where computers are.
Actually, I think there's been some progress, so maybe somewhere here, and here are a few other points. And this is kind of where you need to be to factor numbers using Shor's algorithm. And there are some other milestones along the way, things like surface codes. I don't know much about this, but basically, here you have maybe a hundred qubits, maybe a few hundred qubits now. And to break, to factor numbers like RSA numbers, you really need tens or hundreds of millions of qubits, because these are physical qubits, and it's difficult. Okay, so what is the purpose of today's talk? It's not about how to build a quantum computer. What I really want to show you is that maybe there is a way to improve Shor's algorithm, to be able to factor with worse quantum computers in some way. Maybe we can somehow optimize the algorithm and factor in an easier way. That's the goal. Let's see how we get there. Again, feel free to interrupt if there are any questions. So let's read this carefully. This is the main result. There is an algorithm, and it relies on some mild number-theoretic assumption. Even though this is a seminar about number theory, I will not go into the details of what exactly the assumption is; I can say something about it if you want, and you'll see where it comes up later. Under that assumption, there's an algorithm for factoring n-bit integers, and it uses only something like n to the three halves gates. So it's a quantum circuit with only n to the three halves gates. It needs to apply this quantum circuit multiple times: n to the one half, so square root of n, times roughly, independently. And then it takes all the outputs from those quantum circuits and performs some polynomial-time classical post-processing. That's very easy, and it gives you the factorization.
For comparison, Shor's algorithm requires a bigger circuit: a circuit with n squared gates. So if you look at this, you're kind of wondering what we actually gained. This is n squared, and this is n to the three halves times n to the one half, so it's the same thing. Well, the thing is that quantum computers are very fragile. It could be that you have a quantum computer that can run for n to the three halves gates and then basically decoheres, basically breaks. If you have something like that, you can just run the computer multiple times. Once you can run it once, you can run it multiple times; that's typically not an issue. It doesn't burn or go up in flames or self-destruct; you can just run the computer again. The important thing is that you don't have to keep the computation coherent for too long. That's really the idea here: I only need to do n to the three halves gates and not n squared gates. The fact that you have to do it multiple times is not a big deal. In theory, at least; we can talk about the practice in a minute. But in theory, that's the idea: in principle, all else being equal, fewer gates should be easier to implement. Okay, so again, this is a good time for questions, just like any other time during this talk. Let me know if anything is not clear. There are many things I'm hiding, so it's good to ask questions. Let me also say that this was extended to discrete logarithms. I'll be talking about factoring, but you can also extend it; see this recent preprint by Ekerå and Gärtner. Also, another important footnote here: those estimates assume you're using fast integer multiplication. In practice, you probably don't want to use fast integer multiplication because it carries big constants.
There's usually a big overhead for small numbers, but a similar speedup is there no matter which multiplication algorithm you use; I won't go into the details. Okay, so that's the statement. Before moving on to talk about how it's done, let me maybe talk about the practice. What does it actually mean? Does it mean we can factor numbers using existing quantum computers? Well, no, the answer is not yet. It's actually not even clear if this algorithm improves in practice on Shor's algorithm. When I say in practice, I mean for small numbers, numbers that we care about in practice, like 2000 bits. If I want to really factor numbers that are currently used in cryptography, like 2000-bit numbers: the thing about Shor's algorithm is that it's really amenable to many, many optimizations. If you look at its actual implementations, the constants are very good, because compared to what I'll show you today, it's simpler. The constants are better. And all those constants really matter once you want to factor a number with only 2000 bits. Once you go to bigger numbers, the asymptotic advantages that I'll show you today kick in, but for small numbers like 2000 bits, you might not yet see the improvement. So again, Shor is generally highly optimized and it's hard to beat, especially for small numbers like 2000 bits. And I should also say, the question of what is better in practice is a difficult question, because what is practice? There are actually many different architectures of quantum computers out there. Some people build superconducting qubits. Others use neutral atoms; there's recent progress from Harvard and a company called QuEra. There are people using optics. So there are multiple different architectures, and each one has its own limitations. Some can implement certain gates; some have difficulty implementing other gates.
Some have issues with space, with the number of qubits. So basically, there are many trade-offs, and the trade-offs are very different depending on the architecture you have in mind. So there's no easy answer to this question of whether this algorithm is useful in practice, because the practice is not there yet. We don't know what quantum computer will be there in a year. Having said that, there is one clear issue that I should point out with this algorithm, which is the amount of space, the number of qubits it needs. It's a big issue, because at least for superconducting qubits, for some architectures like the one Google uses, for instance, the number of qubits is a big bottleneck. It's very difficult for them to add more qubits. And the advantage of Shor is that even though it does many more gates, it's actually quite amazing: you can implement Shor with very few qubits. The amount of quantum memory you need is only about 3n logical qubits. And the algorithm I'll show you today needs, in principle, many more: n to the three halves qubits. You might see why; if you want to know, ask me later. But luckily, this was recently improved to about 15n. This is work by Ragavan and Vaikuntanathan. So it's not quite as optimized as Shor in terms of memory, but I think it's getting there. And I should also say that some of the architectures people use, like the one I mentioned from the group of Lukin and this company called QuEra, might not be as sensitive to this as other architectures, and it might be easier for them to just get more qubits; it's not necessarily the main bottleneck for them. So in short, this is a difficult question, and I'm not an expert in physical architectures of quantum computers. I just want to show you some of the pros and cons.
The main advantage I'll show you today is an asymptotic advantage in terms of the number of gates, but there are pros and cons. So, I'm moving very fast. Any questions? I see some things in the chat; I don't know if they're for me. I can try to look into that. Okay, that's not for me, probably. Anything else? Any questions? Should I move on? Okay. So we want to see how the theorem is proven, but for that, I first have to tell you about Shor's algorithm. If you haven't seen it, it's okay; I won't assume you know anything about it, because it's actually all here on this slide. Shor's algorithm is all about finding the periodicity of a function, and the function is this exponentiation function. Shor takes the function that maps z to four to the power z. It doesn't have to be four, but let's take four, for example, and you take it modulo 8051. This function, this modular exponentiation function, has a period. What I do here is use color to represent the value of four to the power z. So initially it's four to the power zero, which is one, then four, 16, 64, and it goes on and on. At some point you start seeing that there is periodicity, there's a cycle, and maybe you can already spot it. If you look carefully, you will see that around this region, we start seeing the same thing we have here. The point is actually 984, where the function starts repeating itself. So there's a period happening here, and I can show you that with this nice animation. Look carefully: I'm going to move this here and place it on top of the original one, and I think you can see that it's identical. So this is where the period happens in this function.
So around this region here, let me show you again, you start having a period. Now, once you have this period of 984, you're basically done. I'll show you on the next slide how exactly you do that. Let me just say one more thing: Shor's algorithm actually needs to do more than computing this function. What it does is compute this function in superposition. You have to do it inside the quantum computer, that's important, and you do it in superposition over all the possible inputs up to some bound. So simultaneously, in superposition, you compute all those values of 4 to the power z modulo 8051. And then you use the big hammer called the QFT, the quantum Fourier transform. That's the main idea in this algorithm, and in pretty much any other quantum algorithm that gives you exponential speedups. The quantum Fourier transform is very good at identifying periods, and it allows you to identify that 984 is the period of this function. Once you know that the period of this function is 984, you're done, and this is really very basic number theory. I'll go ahead and show you this, but it's probably well known to many of you. So we've figured out that the period of 4 is 984. What that means is that 4 to the 984 is like 4 to the 0; it's 1. And that means that if I take not 4 but 2, and raise 2 to the power 984, that number is a square root of 1, because if I square this number, I get 4 to the 984, and that's 1. That's why I chose 4: because 4 is a square, it was convenient to work with a square number. So once I know the period of 4, I have a power of 2, namely 2 to the 984. In this case, it's 1163, and that's a square root of 1. Okay.
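For a toy modulus like 8051 we can stand in for the quantum Fourier transform with a brute-force search and check these numbers directly. A small sketch (brute force is of course exponentially slow in general; finding the period efficiently is exactly what the QFT buys you):

```python
def mult_order(a, N):
    # Brute-force multiplicative order: the smallest r > 0 with a^r = 1 mod N.
    # This is the quantity the quantum period-finding step computes for us.
    x, r = a % N, 1
    while x != 1:
        x = x * a % N
        r += 1
    return r

r = mult_order(4, 8051)
print(r)                 # 984
print(pow(2, r, 8051))   # 1163, a square root of 1 modulo 8051
```

Squaring 1163 modulo 8051 indeed gives back 1, matching the argument above.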
So going back to Pomerance, you see that having a square root of 1 is very nice. And it's not a trivial square root of 1; it's not 1 or minus 1, it's 1163. So once I know this period, I'm basically done, because I have a square root of 1. And once I have a square root of 1, I know that 1163 minus 1 times 1163 plus 1, if you just expand it, is 1163 squared minus 1, and that is 0 modulo 8051, because 1163 squared is 1 modulo 8051. And what does that mean? It means that 8051 divides 1163 minus 1 times 1163 plus 1, which means that 8051 must have a non-trivial common factor with one of them, so I can easily extract it by running GCD. And that's really the end of Shor's algorithm. I use GCD to compute the common factor of 8051 and one of these, say this one, 1162, and that gives me 83. And I'm done, because 8051 is 83 times 97. So again, the idea is to find the period of this function, or I should say the order, the multiplicative order of 4 modulo 8051, and from that I can easily factor 8051. So that, very briefly, is Shor's algorithm. The idea is to look at a function like 4 to the power z and figure out that at some point it has a period, in this case 984. So what's the issue? We're very happy with Shor's algorithm, but as I was saying, it requires many gates. Let's try to understand how many gates it actually requires. Luckily, the number of gates is really just dominated by the classical part of computing the exponentiation. Everything else, like initializing the circuit or doing the quantum Fourier transform, is actually very fast, so it turns out this is really the only thing you have to worry about. It's a classical computation: given a number z, you want to compute a to the power z. But you have to compute it inside the quantum computer, in superposition, and that's why it's so expensive. On my laptop, on my phone, I can easily do that.
But quantum computers don't have those billions of memory cells yet, like our phones have. So how do we do that? Well, this is well known; we use a trick to compute this thing. And I should say, we have to compute this function for relatively large z, because we need to be able to see the period, and the period can actually be quite far. Remember, 8051 is a toy example. Typically the number, the modulus, is going to have maybe 2000 bits, and you probably have to go really far until you see the period. You would have to go to something like 2 to the power little n, little n being like 2000; that's the number of bits in the number you want to factor. So z is pretty big: it can go up to a 2000-bit number in order to make sure you see the period. The multiplicative order could be quite big, but once you're at this point, you'll surely have seen the period already. So that's how far you have to go with this exponent: 2 to the little n. How do you do that? Of course, you don't just do a times a times a times a; that would never finish. What you do is repeated squaring. This is the standard trick to very quickly compute that exponent, and it's what we'll try to improve on today. The way it works, in Shor's algorithm at least, is that you basically do repeated squaring: a, a squared, a to the power four, a to the power eight, by squaring repeatedly. If you just calculate, and I'll show you on the next slide, it takes n multiplications: basically, each time you square, you do one multiplication, and each multiplication involves n-bit numbers. And we can multiply n-bit numbers in roughly n time, up to log factors; this is fast integer multiplication, so we can do it in time roughly n, or n log n.
In total, we get n squared gates in the quantum circuit. This is where the n squared comes from in Shor's algorithm: you have to do n multiplications, and each multiplication involves multiplying n-bit numbers, so in total we have n squared gates. If you don't use fast integer multiplication, if you use naive schoolbook multiplication, it will be n cubed. But it doesn't matter, you can do it either way; for simplicity, let's focus on this and talk about n squared gates. Let me show you exactly how repeated squaring works, just to make sure we're on the same page. If I want to compute a to the power, say, 29, what I do is this: I start with a, I square it, I get a squared, I multiply by a, I get a cubed, I square, I get a to the six, I multiply by a, I get a to the seven, I square, I get a to the 14, I square again, I get a to the 28, and I multiply by a, I get a to the 29. How did I decide when to multiply and when to square? It's based on the binary representation of 29. Thinking of the exponents, each step either adds one to the exponent or multiplies it by two, so it's essentially the binary decomposition of 29. Okay, so this is it for Shor's algorithm. I told you it's n squared gates, and the key, the slowest step, is having to compute this exponentiation. So before moving on, here's a very naive idea. Shor's algorithm takes this a to be a random number; I didn't say that before, but it takes a random a. Let's try to be clever. Maybe let's use a small number. Let's take a to be a small number, maybe four; I gave you that example before. Four is small, so intuitively, maybe it's easier to work with four, because it's easier to multiply by four, right? If I want to multiply by four, it's very fast. Turns out, this doesn't help at all.
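The repeated-squaring walk just described can be written in a few lines, scanning the bits of the exponent from the most significant one down. For 29, which is 11101 in binary, it reproduces exactly the square/multiply sequence above:

```python
def power_mod(a, e, N):
    # Repeated squaring, driven by the binary digits of the exponent:
    # square once per bit, and multiply by a whenever the bit is 1.
    result = 1
    for bit in bin(e)[2:]:          # e.g. 29 -> '11101'
        result = result * result % N
        if bit == '1':
            result = result * a % N
    return result

print(power_mod(3, 29, 8051) == pow(3, 29, 8051))  # True
```

So an exponent with n bits costs about n squarings plus at most n extra multiplications, which is the count used above.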
Because if you think about what's going on here, you're doing those n steps, and those numbers very quickly become big n-bit numbers. There's really not much you can do: even just multiplying them by four requires n operations, because those are n-bit numbers. And of course, something like squaring the number takes n operations and gates, because it's an n-bit number that you're trying to square. If you start with a small number, sure, it buys you a little bit in the beginning, because this is four, which is a very small number, and this is 16, because you square it. But very quickly, those numbers become basically generic n-bit numbers modulo your composite. So you gain almost nothing by using a small number, because very quickly those numbers become huge; they become n-bit numbers, basically. So we don't really benefit much. But you'll see later that one of the main ideas really is to use small numbers; we just need another idea to make it work. Okay. So that doesn't help us much. Let's see how we introduce another idea, and maybe it will help. Here's the idea. What's going on here is a two-dimensional picture now. Previously everything was in one dimension; now I want to do two dimensions. So I choose two numbers: I choose four and nine. Previously I had just four and all the powers of four; now I take two bases for the exponentiation, and I have a different function. It's a two-dimensional function: it takes two numbers, z1 and z2, and computes four to the power z1 times nine to the power z2. Why? Okay, it's not clear. Maybe let me spoil it and tell you: this will turn out to be easier, faster to compute, for a reason you'll see in a minute. But bear with me; let's just see why it even makes sense. So what I'm showing you here is a similar kind of plot.
Basically, each pixel here is the value of four to the power z1 times nine to the power z2; z1 is, I think, on the y-axis and z2 is on the x-axis. And this function, as you can probably appreciate, also has a period. This point here corresponds to one, because it's four to the power zero times nine to the power zero, which is one. I hope you can see my mouse cursor. And this point out here is four, and 16, and here's nine, and 81, and so on. This function also has a period, and you can kind of see it here. It actually has many periods: it has this period here, here and here. So it's a two-dimensional periodicity, not one-dimensional like before. There are multiple periods, but that's okay; we can see the period. And let me show you the same animation again. Look carefully, and I hope you see the animation smoothly enough: I'm shifting it, and you can see how nicely this aligns with itself. So this point here is a period, and I think it's (19, 47); we'll see on the next slide. So there's a period here, and let's see what this tells us and how we use it to factor numbers. Okay. So all I did now is the same thing that Shor does, but in two dimensions. Eventually we'll have to go to higher dimensions; it's not going to be two-dimensional. But for this nice picture, I prefer two dimensions so we can actually see what's going on. So this is exactly like before: once you figure out the period, once you know the period is (19, 47), you're done. And let me show you why. It's basically the same reasoning as before; let's just see it again to make sure everything is fine. So once you know the period is (19, 47), what does it mean?
It means that 4 to the power 19 times 9 to the power 47 is the same as the origin; it's the same as 4 to the 0 times 9 to the 0. In other words, it's 1. So we found that this number here is 1 modulo 8051. But that immediately gives me a square root of 1: it immediately gives me that 2 to the power 19 times 3 to the power 47 is a non-trivial square root of 1. It gives me this number, 6888. If you square this number, you obviously get 4 to the 19 times 9 to the 47, which is 1. So this number, 6888, if you square it, is 1 modulo 8051. And once you have a square root of 1, you're done, just like before, because 8051 divides 6888 minus 1 times 6888 plus 1; if I expand this, I get 6888 squared minus 1, which is divisible by 8051. And then I can recover a factor using GCD: I take the GCD of 6887 and 8051, I get 97, and we're done, because 8051 is 83 times 97. All checks out? Okay, so it's basically the exact same idea as before. So all I've told you so far is that, sure, you can take Shor's algorithm, do it in two dimensions, and factor numbers that way. But it's not clear if you actually gain anything; if anything, it looks more complicated, right? What's the advantage of going to two dimensions? So let's try to understand the number of gates and see if we actually gain anything. What we have to compute is this. In general, it's not in two dimensions, it's in D dimensions, and D would be something like the square root of n, the square root of the number of bits; you'll see later why. So if I have a thousand-bit number to factor, D would be around 30-something. D is the dimension, about square root of n, and what I want to compute is this: I take a tuple of integers, Z1, Z2 up to ZD, and I compute the product A1 to the Z1, A2 to the Z2, and so on, where those A_i's are going to be small squares like 4, 9, 25. The main idea is this.
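(Before getting to the main idea, a quick aside: all the arithmetic with the period (19, 47) above can be verified in a couple of lines.)

```python
from math import gcd

N = 8051
# The period (19, 47) of (z1, z2) -> 4^z1 * 9^z2 mod N gives a square root of 1:
assert pow(4, 19, N) * pow(9, 47, N) % N == 1
root = pow(2, 19, N) * pow(3, 47, N) % N
print(root)               # 6888
print(gcd(root - 1, N))   # 97, a nontrivial factor of 8051
```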
The way we're hoping to gain is that because you're in dimension D, you don't have to go so far to see the period. You don't have to raise those numbers A_i to such a high power, and you can kind of see it here. If I go back to the animation we had, the period was not so far from the origin, right? You only had to go a little bit here or here to see the period. I don't have to go some 8,000 steps into this function; in terms of distance, the period is relatively close to the origin. And that's to be expected, because I have two dimensions, so I explore much more in the same volume; there are many more points to explore here in the same volume. So it's to be expected that I'll find the period closer to the origin, in terms of distance from the origin. And you can see this by a kind of pigeonhole argument: think of how many different exponents, how many different such inputs I explore. If each of them goes up to 2 to the n over D, and I have D of them, then in total I explore 2 to the n different possible inputs. So by pigeonhole, there must be a collision; at some point I will start seeing a period, with powers only up to 2 to the n over D. Previously it was 2 to the n; now it's 2 to the n over D, which sounds like it might help us, because it means we don't have to exponentiate so much. We don't have to raise things to a very high power like 2 to the n; we only have to raise to powers up to 2 to the n over D. Maybe it's an advantage. And indeed, once you write things down, each exponentiation requires less. That's true, because you only have to do n over D multiplications with this repeated squaring trick: I square, and square again.
So to get those powers up to 2 to the n over D, I only have to multiply n over D times, which is great. But I have to do it D times. So it seems like we really didn't gain anything. This seems totally pointless: what I told you now is, let's extend to D dimensions, sounds like a great idea, we don't have to raise things to such high powers, all sounds great. But when you actually work out the calculation, it's not obvious we gain anything, because I still have to do n over D multiplications, D times. So it's something like n multiplications in total, and I didn't gain anything. Previously it was the same: we had to do n multiplications in Shor's algorithm, each one involving n-bit numbers, so something like n gates each. In total, still n squared gates. Too bad. Okay. So what's the idea? We need just one more ingredient, and that's... Sorry, can I ask a question about the previous slide? So yes, in this process, 8051 divides the product of two numbers. If one of them ended up being prime, then it wouldn't have worked; like, you find the GCD, it would be one. So, I mean, 8051 has to divide this product just by arithmetic; we know it divides it. But the concern you're alluding to is something I was trying to hide, so it's great that you brought it up; it's an occasion for me to mention it. The concern is that it could happen that this number here, 2 to the 19 times 3 to the 47, is a trivial square root of 1. This is something that could happen: it could happen that this number, modulo 8051, is really just one. In which case, what you have here is not interesting; maybe this first factor is divisible by 8051 itself, and the other one is whatever you want.
So this is really the concern; it's the only thing I did not really justify. Actually, if you go back to this picture, you might have wondered why I took this period and not the one here: why, in the animation, I decided to take the one at position 47, 19 and not the other one. And that's exactly because the other one happens to correspond to a trivial square root of 1: if you actually calculate the corresponding number, it happens to be just 1 modulo 8051. So there's a real issue here, and maybe I'll get back to it at the end. This is the number-theoretic heuristic that I mentioned at the beginning. The heuristic says that those exponents behave randomly enough that I expect to find a period that is both short, close to the origin like this one here, and non-trivial. There are trivial periods that don't help me factor the number, and the heuristic says that, typically, that will not happen. Okay. I can talk more about this; it's a great question, thanks. It's great that this came up, and I might get back to it at the end. Okay. So going on, let me just complete the algorithm. The only missing idea is that there's actually a way to make this calculation faster. And I should say, at this point it's really just classical computation; there's nothing quantum here, it's just about how to compute this product more efficiently. It turns out you can do it, but you need an assumption: you need to assume that the a_i's are small. This is the point where it's actually very useful to assume that the a_i's are small. For Shor's algorithm we didn't gain much from this, but here it's really crucial. So let's choose the a_i's to be small.
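As a quick sanity check of the square-root discussion above, here is a minimal sketch; the modulus 8051 and the exponents 19 and 47 are the ones on the slide, and this is just the classical post-processing step, nothing quantum.

```python
from math import gcd

N = 8051  # = 83 * 97, the number from Pomerance's story

# The period at position (47, 19) gives the candidate square root
# x = 2^19 * 3^47 mod N.  The concern raised in the question is that
# x could be trivial, i.e. +1 or -1 mod N; here it is not.
x = pow(2, 19, N) * pow(3, 47, N) % N

assert pow(x, 2, N) == 1        # x really is a square root of 1 mod N
assert x != 1 and x != N - 1    # and it is non-trivial

print(gcd(x - 1, N), gcd(x + 1, N))  # → 97 83
```

Each GCD pulls out one of the two prime factors, exactly as in Pomerance's 90 squared minus 7 squared trick.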
Let's take them to be 4, 9, 25, 49: say, the squares of the smallest primes. Those are very small numbers; they don't have n bits, only a few bits each. And it turns out that once the a_i's are so small, we can compute this product using only n over d multiplications, not n like we had before, so we really gain something. I should be more precise: it's n over d big-number multiplications. You also have to multiply small numbers, like 4 times 9 or 9 times 25, but that's very easy and doesn't require many gates; to multiply 4 times 9, even I can do it on a piece of paper, right? The issue is multiplying big, n-bit numbers; that takes time, and luckily that you only need to do n over d times. So let me show you how this is done. It's kind of boring, but it's really crucial. You could give it as an exercise in a computer science algorithms course; you have to think about it, it's not obvious, but it's also not very sophisticated. But let me just say: once you do that, you only have n over d big multiplications, so the number of gates goes down from n squared to n squared over d, which is great; we have an advantage. And I told you I'd choose d to be square root of n, so it all works out: I get n to the three halves gates in the new algorithm, as I promised. You might have some questions now; if so, let me know. Questions? Okay. So let me show you how it's done; it's really just basic arithmetic. First, something simpler: suppose I just want to multiply d small numbers, A1 times A2 times A3, up to Ad. How do I do that? Left to right is not a good way to do it.
Because if you do it this way, A1 times A2, then times A3, then A4, then A5, the numbers very quickly become big. With d numbers they become d-bit numbers, and then each multiplication costs another d gates, so this ends up being something like d squared. So you really want to do a binary tree: first multiply the small numbers together, A1 A2, then A3 A4, and then multiply the results together, and so on up the tree. It turns out this is much more efficient, because most of the multiplications involve very small numbers (the A's are very small), and only at the root of the tree, at the top, do you do the big multiplications, and there are very few of them. If you just do the math, it's something like d gates, or d log d, something like that. So that's how to multiply numbers. Now how do I go to exponentiation? It's a similar idea; you want to combine this with repeated squaring. Even if you don't follow all the details, the crucial thing is that you always want to multiply small numbers whenever possible, and minimize as much as possible having to multiply big numbers. So here is how it's done. Imagine I want to compute A1 to the 13, times A2 to the 9, times A3 cubed, times A4 to the 6. I start with just the number 1, so all the exponents are zero. I first multiply by A1 A2; again, computing A1 times A2 is very simple, those are small numbers, so it's very easy to compute those products. So I multiply by A1 A2 and get A1 A2. I square and get A1 squared, A2 squared. Now I multiply by A1 A4 and get A1 cubed, A2 squared, A4. I square this and get A1 to the 6, A2 to the 4, no A3 yet, and
A4 squared. I multiply by A3 A4 and get A1 to the 6, A2 to the 4, A3, A4 cubed. I square and get A1 to the 12, A2 to the 8, A3 squared, A4 to the 6. And one more multiplication, by A1 A2 A3, gives exactly what I wanted. So it's really the same repeated squaring trick. The idea is that you introduce the bits of the exponents, one bit at a time, through those products of the a_i's, which you compute as I showed before using the binary tree. If you didn't follow all the details, I'm sure you can complete them offline. The trick, again: use small a_i's and minimize multiplications of big numbers. You only multiply the small numbers together, and then fold that into this cumulative register, which gets squared each time. So trust me, this works; it gives you a faster way to exponentiate. But there is one important missing detail, which maybe you already noticed, because I said something very strange. I said you only need n squared over d gates with all this trick of optimizing multiplications. But then why not take d to be n? Clearly, the way to optimize this would be to take d to be n, not square root of n, right? If it were really that simple, I should take d as big as possible. So there's an important detail I did not tell you about, and it's this one here. I told you that in Shor's algorithm there's a way to do the quantum Fourier transform and you get the period at the end, and I'm very happy. The same is true here, but it's a bit trickier, because you're dealing with a d-dimensional object and the periodicity itself is d-dimensional. To extract the periodicity from the results of the quantum computer, you need to work a bit: you have to use the lattice reduction algorithm created by these three people here, Lenstra, Lenstra and Lovász, the LLL algorithm, which can solve d-dimensional lattice problems. Okay. Again, Shor didn't have to worry about this.
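The product-tree and repeated-squaring tricks just walked through can be sketched in a few lines. This is a minimal classical illustration (there is nothing quantum in this step), using the a_i = 4, 9, 25, 49 and the exponents 13, 9, 3, 6 from the worked example; the function names are mine, not from the talk.

```python
def product_tree(nums, N):
    """Multiply a list of numbers modulo N as a binary tree: pair up
    neighbours, multiply, and repeat, so almost all multiplications
    involve small numbers and only the few near the root are big."""
    while len(nums) > 1:
        nums = [nums[i] * nums[i + 1] % N if i + 1 < len(nums) else nums[i]
                for i in range(0, len(nums), 2)]
    return nums[0]

def multi_exp(bases, exps, N):
    """Compute prod_i bases[i]**exps[i] mod N by repeated squaring:
    scan the exponent bits from most to least significant, square the
    accumulator once per bit, then multiply in the (small) product of
    the bases whose exponent has that bit set."""
    acc = 1
    for bit in reversed(range(max(e.bit_length() for e in exps))):
        acc = acc * acc % N
        sel = [b for b, e in zip(bases, exps) if (e >> bit) & 1]
        acc = acc * product_tree(sel or [1], N) % N
    return acc

# The worked example from the talk: a1^13 * a2^9 * a3^3 * a4^6 mod 8051.
bases, exps, N = [4, 9, 25, 49], [13, 9, 3, 6], 8051
naive = 1
for b, e in zip(bases, exps):
    naive = naive * pow(b, e, N) % N
assert multi_exp(bases, exps, N) == naive
```

On this input, `multi_exp` multiplies in exactly the subset products from the slides: A1 A2, then A1 A4, then A3 A4, then A1 A2 A3, with a squaring between each.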
Actually, in Shor's algorithm, if you look, there's also some kind of lattice algorithm: it's basically continued fractions, if you've seen the algorithm, and continued fractions are basically a lattice problem in one dimension. Same thing here, but we're in d dimensions. That's also why we need to run the quantum circuit multiple times, specifically d times: those d runs give us a basis of this lattice, and once we have a basis, we can use lattice reduction to extract the period from it. I will not show the details of this; it requires some calculation with lattice reduction, but trust me that it works. The issue, of course, is that lattice reduction is difficult in general. Luckily, here we can actually do it efficiently using the LLL algorithm, an efficient polynomial-time algorithm. However, LLL only gives you an approximation factor of 2 to the d; it's not exact, we're not solving lattice problems exactly here. And this 2 to the d hurts us, because it means we have to explore more exponents: we don't actually find the shortest possible period, only one within a factor 2 to the d of it, so the exponents have to go further than what I told you before. By pigeonhole, I said, there is a period with exponents only up to 2 to the n over d; but because I only approximate the lattice problem, I have to go a bit further, up to 2 to the n over d plus d, the extra d coming from the 2 to the d approximation factor of LLL. And once you look at this, you can see the optimal choice is d equal to square root of n, which gives exponents of size around 2 to the square root of n, and that means n to the three halves gates: with exponents around 2 to the square root of n, I have to do about square root of n multiplications of n-bit numbers.
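Writing out the trade-off just described: with the LLL loss the exponents run up to 2 to the n over d plus d, and minimizing that exponent length over d,

```latex
\[
  \frac{\mathrm{d}}{\mathrm{d}d}\Bigl(\frac{n}{d} + d\Bigr)
  = -\frac{n}{d^{2}} + 1 = 0
  \quad\Longrightarrow\quad
  d = \sqrt{n},
  \qquad
  \frac{n}{d} + d = 2\sqrt{n} ,
\]
so the exponents have about $2\sqrt{n}$ bits, i.e.\ on the order of
$\sqrt{n}$ big multiplications of $n$-bit numbers, which is where the
$n^{3/2}$ gate count comes from.
```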
So in total I have to spend n to the three halves gates in this new algorithm. Okay, that was a bit fast, but I hope you can see that there's a trade-off with the performance of LLL, and that's how the n to the three halves really comes about. So, I'm slowly running out of time; let me summarize. Here's the algorithm. We start with small numbers, a1 up to ad, say the squares of the first d primes; it could be something else, but we want them to be small. I should say, the fact that we work with small integers is why we need the heuristic. If I chose them to be random numbers, you wouldn't need a heuristic; there's enough randomness to argue that whatever period you find is likely to be non-trivial. Working with small numbers requires some kind of heuristic, because I don't really know how such small numbers behave modulo a composite. As far as I know, even assuming something like GRH would not really help us here. So we choose those numbers to be small, whichever way you want. Then we apply the following quantum circuit d times. The circuit simply computes this multi-exponent in superposition over all the possible exponent tuples. You apply the QFT, you measure, and you get an approximate dual lattice vector; that's really what the QFT gives you, some lattice vector in the dual. You use LLL to recover the period, and once you have the period, as I showed you before, you can factor. That's the end of the algorithm. So, I assume you have many questions, and I want to leave time for them; let me just finish with some open questions. Many open questions. Reduce the memory, the space, the number of qubits we need: currently it's about 15n, and it would be great to have even fewer. Do an actual count of the number of gates and the number of physical qubits; this requires understanding the architecture.
There's some work now, coming also from Ekerå and Gärtner, that's trying to analyze this more carefully; it will be interesting to see what comes out of it. One thing I did not mention, one more thing you can try, is to improve the lattice algorithm, not use LLL. It would be an interesting trade-off to understand: maybe by using fancier classical lattice reduction algorithms instead of LLL, we can allow ourselves smaller quantum computers. There's an interesting trade-off going on here. And there's something more technical, which maybe I will mention. One interesting thing that comes up is that you're running the quantum circuit multiple times, square root of n times, and the way I presented it, it seems like all of those runs have to work. But quantum computers are not very reliable, so you might think some of those square root of n runs will simply fail. It turns out there's a way to deal with those failures: you can make the classical post-processing robust enough to tolerate a failed output once in a while. And of course, there's the number-theoretic assumption, which I could say more about. Okay, so I'll stop there and take questions. Thank you.