Hi everyone. This work is a cryptanalysis of an isogeny-based crypto scheme called SIKE, and we take a bit of a long road to get there. To put this in context, right now NIST is standardizing post-quantum crypto schemes. To say that something is post-quantum means that it's secure against a quantum adversary, meaning that any algorithm to break it is infeasible to run on a quantum computer. To even make a claim like that, we need to be able to decide how much it costs a quantum computer to run certain things. So everyone in this field has to have a model somewhere in their head that says what a quantum computer is and is not capable of.

So how do you imagine a quantum computer? What does it look like? What can it do? Is it just a classical computer, but with some magic quantum operations sprinkled in alongside the usual ones? Maybe you think you'd like to do a bit better, and there's this enormous body of literature already existing on quantum computers that you want to draw from. Here we're going from realism to generality in these models. As cryptographers, we can draw from very general results in complexity theory and query costs, but then we end up with schemes that are too big, because this is too conservative. So we think maybe instead we should draw from the realistic approaches, where people worry about what we can run on quantum computers very soon, on near-term technologies. But those results don't worry about whether a new quantum computer will come along that their algorithms don't run on, because they'll just write new algorithms. As cryptographers we do need to worry about that, because if our security analysis is invalidated we might have an insecure scheme. So cryptographers need to be somewhere in the middle, and that gives the overall context for what we're going for with this work and how we tried to analyze SIKE.

In this talk I'll give some of our motivation and what we tried to do with our new model for quantum computers, then introduce the main contribution of our paper, the memory peripheral framework. This framework accommodates different cost models; we provide two. We use those cost models to analyze SIKE, and I'll conclude after that.

The first thing we want to be able to do is fairly compare a classical and a quantum algorithm. We want to be able to say whether something is easier to attack with a classical algorithm or with a quantum algorithm, and sometimes there are algorithms with both classical and quantum components. This is a well-known problem that has come up before: there is a quantum collision-finding algorithm by Brassard, Høyer, and Tapp that requires a lot of classical memory with quantum access, and Bernstein argued that if you fully account for the memory cost, then actually a classical algorithm is less expensive.

Another thing we want to incorporate is the view that gates are processes. In a quantum circuit diagram you have different wires representing qubits, and time goes from left to right. Each of these symbols represents a gate that is applied to that qubit at that time. These diagrams deliberately resemble classical circuit diagrams, where each wire is a bit and time goes from left to right. The thing about the classical diagram is that it represents the formal Boolean circuit model, which doesn't account for space at all.
But typically we conflate time and space, because we think: well, we can take this circuit and print it onto a physical chip, and when we do that, the signals will propagate from left to right through the gates as time progresses. So time and space intuitively feel equivalent here, and we don't usually worry about it. But that's not formally implied by the model, and it's especially not implied by the model of quantum computing, where it may not hold because the technology may not have this property. Right now the promising quantum technologies do not look like classical two-dimensional circuits: the qubits themselves are stationary, they're not a signal moving along a wire, and you have to apply gates to them in place. So we want our model to accommodate that.

Finally, we want to include error correction. In classical computers we have error correction, and usually we don't worry too much about it because it's not that much of an expense. But in quantum computers it's a bigger deal, and in fact quantum errors are fundamentally more complicated than classical errors. So we need to include this in the model if we want an accurate reflection of cost.

This leads us to our memory peripheral framework. The main idea is that we model a computation as having some physical memory and some memory controller that acts on that memory. As examples: a Turing machine fits this if we view the tape as the memory and the head as the memory controller; a random access machine also fits, with the CPU as the controller and the RAM as the memory; and for quantum circuits we can use a classical random access machine as the controller and qubits as the memory. We have three main premises. One, the memory is a physical system that can change over time. Two, the memory controller interacts with the memory. Three, the cost is the number of interactions.

I'll go into these in more detail. Because memory is a physical system, it will change over time, and we can model this in different ways. We can imagine that when we're not intervening, when we just let it do its own thing, it sometimes changes: this could be noise, it could degrade over time, or it could even do computation if we've set it up very carefully. Alternatively, it could change because our memory controller has done something to change it, and we consider these the costly changes to memory.

More specifically, we model a quantum computer as a parallel random access machine that has qubits associated with it. Take the RAM model, take your favorite instruction set, and add some instructions that effectively apply some gate to some qubit at some time. The effect is that a quantum algorithm becomes a classical program. This is what it looks like, and it's one way you can imagine a quantum computer: you have a big lattice of qubits, and to every qubit, or maybe every small group of qubits, you've associated a classical controller that does things like correcting errors, applying gates, and performing measurements. All of these classical controllers are coordinated by some central memory controller. Viewed this way, the cost of a quantum algorithm is the number of interactions. A quantum computer will certainly be very expensive to build, and expensive to keep cool and maintain, and we ignore all of that; we only focus on the cost to the classical controller and the computations it must do.
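To make that "quantum algorithm becomes a classical program" picture concrete, here is a minimal sketch in Python, under a toy interface of my own rather than anything from the paper: the classical controller issues gate instructions to the qubit memory, and we charge one interaction per instruction.

```python
# A minimal sketch (toy interface, not an API from the paper) of the
# memory-peripheral view: the classical controller issues gate
# instructions to qubit memory; cost = number of interactions.

class MemoryController:
    """Classical controller for a qubit memory."""

    def __init__(self, num_qubits):
        self.num_qubits = num_qubits
        self.interactions = 0  # the quantity we charge for

    def apply_gate(self, gate, *qubits):
        # One classical RAM operation per gate applied to the memory.
        # (A real controller would also drive pulses, read syndromes, etc.)
        self.interactions += 1


def bell_pair(ctrl):
    """A 'quantum algorithm' is just a classical program issuing gate instructions."""
    ctrl.apply_gate("H", 0)
    ctrl.apply_gate("CNOT", 0, 1)


ctrl = MemoryController(num_qubits=2)
bell_pair(ctrl)
print(ctrl.interactions)  # 2 interactions for 2 gates
```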
You can think of this as an opportunity cost: we have this big classical computer that we need just to run the quantum computer, so what else could we do with that classical computer if it weren't busy running the quantum algorithm?

This is a very general framework, and we can fit different cost models and different assumptions into it. We provide two sets of assumptions, which lead to two costs that we call the G cost and the DW cost. In both cases we imagine qubit memories and standard quantum gates; the difference between them is our assumption on error correction, whether it's passive or active. Passive error correction is roughly non-volatile memory. Imagine you have a piece of paper: you write a bit on it, you keep the paper cool, and your bit will last for a really long time. Similarly, with a magnetic disk you can write a bit to it, keep the disk cool, and you're fine. Active memory needs to be continuously refreshed to preserve it: DRAM should be a very familiar example, and quantum surface codes are another.

On this slide there's no example of a passively corrected quantum memory, and this is not an accident. Imagine a quantum computer with a lattice of qubits in some number of dimensions, where the qubits are limited to local interactions in this lattice to try to correct their errors. Can we build different kinds of memory in this setting? We know how to build actively corrected memory in dimensions two and higher; surface codes are an example. We know how to build passively corrected memory in dimensions four and higher, with a construction similar to surface codes, but dimension three and under is an open problem. There is actually an impossibility result for a large family of error-correcting codes that includes surface codes, and three dimensions is almost totally open. The thing to remember is that the dimension we're referring to is the dimension of the lattice of qubits in the computer, while the physical dimension of any computer we build is limited to three; that's the universe we live in. So if we want a passively corrected quantum memory, we have to solve this issue, and right now it is open.

So if we take the G cost and assume a passively corrected memory, we have to make a pretty strong assumption: that our universe even allows this and that at some point we will figure out how to do it. But it might be possible, so we can make this assumption. Then we don't need to spend any computation to preserve memory, only to change it, which is equivalent to one RAM operation per gate, so the total cost is just the number of gates; hence we call it the G cost. Actively corrected memory is more expensive, because at every qubit, at every time step, we have to do some computation to fix the errors that have occurred. So we end up with one RAM operation per qubit per time step, and the number of RAM operations for the algorithm is the depth, the number of sequential gates, times the width, the number of qubits; hence the DW cost.
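Here is a minimal sketch of the difference between the two metrics, using a toy circuit representation of my own (a list of time-stamped gates), not the paper's notation: the G cost just counts gates, while the DW cost charges every qubit at every time step.

```python
# Minimal sketch of the two cost metrics under a toy circuit
# representation: a circuit is a list of (time_step, gate, qubits).

def g_cost(circuit):
    # Passively corrected memory: idle qubits are free, so the cost is
    # simply the number of gates.
    return len(circuit)

def dw_cost(circuit, width):
    # Actively corrected memory: every qubit needs a RAM operation at
    # every time step, so the cost is depth times width.
    depth = 1 + max(step for step, _gate, _qubits in circuit)
    return depth * width

# Example: a width-3 circuit with 4 gates over 3 time steps.
circuit = [(0, "H", (0,)), (0, "H", (1,)), (1, "CNOT", (0, 1)), (2, "CNOT", (1, 2))]
print(g_cost(circuit))      # 4
print(dw_cost(circuit, 3))  # 3 * 3 = 9
```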
Looking at these, we might look at high-memory algorithms to find somewhere the two costs differ, and isogeny-based cryptography is where we looked.

To give a very brief overview of isogeny-based cryptography: we have a large prime p, maybe 434 bits, a public parameter E_0, and a public key E_A. These live in a graph, and this red path between them is the secret key; if you can find this path, you've broken the scheme. This is vulnerable to a meet-in-the-middle attack: we look for paths going forward from the public parameter and paths going backward from the public key, and we look for a collision between these two sets of paths.

Hence the quantum algorithm previously viewed as the best attack is a generic collision-finding algorithm by Tani. It uses a random walk on a Johnson graph. A Johnson graph is built by taking a set X and making a vertex out of every subset of size R, with two vertices adjacent if they differ in exactly one element. A random walk on this graph is completely equivalent to taking your subset, removing an element at random, and inserting a new element at random; that's one step. For Tani's algorithm you make one Johnson graph out of the set of paths going forward from the public parameter and another out of the paths going backward, you take a random walk on both, and every time you insert a new element you look for a collision in the other set. Then you take this whole thing and make it quantum with a standard construction very similar to Grover's algorithm.

What happens is that the random walk gets shorter as the sets get bigger. Tani optimizes for queries, where you have to balance the setup cost of originally constructing these lists against the total length of the walk, and the query optimum is where the size of the sets is proportional to the number of queries, which is proportional to the time. For SIKE these are all equal to p^{1/6}, so previously the security was given as p^{1/6} for the prime p.

But this algorithm uses a lot of memory, and quantum memory is unusually expensive. Imagine you've got this array of memory, where the cells at the bottom might be a full chip of memory, or maybe a hard drive, or even a tape, and you've got some circuit to access them. With a classical query, say we want the ninth element of memory: we can follow this red path, and at each point the memory controller can look at the input and say, I only need to take the left branch, I don't need to go right, I don't need to use any of those gates or turn on any of those memory addresses, and so on, so it only needs to spend log N gates to accomplish this task. With a quantum query, the address might be in superposition, so the classical controller cannot tell which memory access it needs to serve, because it cannot read anything about the input without destroying it. What the memory controller has to do is apply gates for every possible input, because it doesn't know what it's getting and it might be a superposition of every possible access. So this is now a linear number of gates in the size of the memory.

This should actually feel familiar to cryptographers, because, making a fairly loose analogy, a quantum state kind of needs to be side-channel resistant: if any information leaks about the state, it will decohere and the state is destroyed. So the circuit that we use to operate on that state has to, again waving my hands here, be sort of perfectly physically side-channel resistant, and so we need circuits like this one that work identically for all possible inputs.
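To illustrate that gap with a purely classical toy (an analogy of my own, not a circuit from the paper): an indexed lookup can follow one path and do roughly log2(N) work, while a lookup that must behave identically for every possible address, the way a circuit serving a superposition must, touches all N cells.

```python
# Toy classical illustration of the memory-access gap described above:
# an indexed lookup follows one path (~log2(N) work), while an
# "oblivious" lookup that works the same way for every address -- the
# analogue of serving a superposition query -- touches all N cells.

def classical_lookup(memory, addr):
    # Follow one root-to-leaf path: roughly one decision per address bit.
    work = max(1, (len(memory) - 1).bit_length())  # ~ log2(N)
    return memory[addr], work

def oblivious_lookup(memory, addr):
    # Touch every cell regardless of the address; no early exit,
    # so the work is linear in N.
    out, work = 0, 0
    for i, cell in enumerate(memory):
        out ^= cell if i == addr else 0
        work += 1
    return out, work

mem = list(range(1024))
print(classical_lookup(mem, 9))  # (9, 10)    ~ log N work
print(oblivious_lookup(mem, 9))  # (9, 1024)  ~ N work
```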
This gives us a cost for memory in terms of gates: if the memory is idle, if we're not actually accessing it, it's free when it's passively corrected and not free when it's actively corrected, and for random access we pay at least a linear cost in both cost models.

For Tani's algorithm it's actually even worse, because we need some structure on this data to facilitate insertions and deletions, and for quantum algorithms we need interference. We're doing a random walk on this Johnson graph, and what we need is that for a particular vertex, the representation of that vertex in our quantum computer cannot depend on the path we took to reach it, or else it will not interfere with other paths that led to the same vertex. But on a Johnson graph a vertex is a set and a path is a sequence of insertions and deletions, so we need different sequences of insertions and deletions to lead to the same representation of the data. Our favorite classical data structure for representing a set is a binary search tree, which we would implement as a linked structure, but this doesn't work because we get fragmentation: the memory layout changes depending on the order of operations. The previous quantum approach to a history-independent data structure is what's called a quantum radix tree, where you take a radix tree and put it in superposition over all the ways it could be laid out in memory. But we could also use a sorted array. If the elements of an array are physically kept in order, we usually don't like this, because to insert in the middle you have to move everything after that element; but we're already paying a linear cost for memory access, so this becomes more appealing.

So this is the data structure we provide in our paper. We call it a Johnson vertex, and it's just a sorted array. I'll walk through what an insertion looks like. We've got all these elements at the bottom, and we also have an ancilla array of all zeros. The first thing we do is fan out an input x: we use this tree to copy x to every element, which takes logarithmic depth and a linear number of gates. We end up with x next to every element, and we compare all the elements simultaneously with the copy of x above them, to see whether each element is larger than x, so we end up with zeros and ones in the top row. The switch from zero to one happens exactly where the elements of the array become larger than x, and that's exactly where x needs to go. So we use those bits to control a swap that goes up and to the right, which shifts the right half of the array up and to the right, and then we swap back down, controlled on the same bits, and now we have shifted that half of the array over by one slot. It only took two steps of depth, and we've inserted x into the right spot. Now we need to uncompute: we undo the comparison, we undo the fan-out of x, and we're left with all zeros in the ancillas. So we have correctly inserted into our array with this structure. And this Johnson vertex actually has the lowest gate cost to do the things we need a quantum data structure to do for these random walks.
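To show the history-independence property in a purely classical sketch (the real Johnson vertex is a reversible circuit doing the fan-out, comparison, and controlled shift in low depth; this is just the sequential classical view, with helper names I made up):

```python
# Classical sketch of the "Johnson vertex" idea: a set stored as a sorted
# array is history independent -- the layout depends only on the contents,
# not on the order of insertions and deletions.

def jv_insert(arr, x):
    flags = [a > x for a in arr]                      # compare every slot with x
    i = flags.index(True) if True in flags else len(arr)
    return arr[:i] + [x] + arr[i:]                    # shift the larger half up one slot

def jv_delete(arr, x):
    i = arr.index(x)
    return arr[:i] + arr[i + 1:]                      # shift the larger half back down

# History independence: different operation orders give the identical array.
a = jv_insert(jv_insert(jv_insert([], 5), 2), 9)
b = jv_delete(jv_insert(jv_insert(jv_insert(jv_insert([], 9), 5), 7), 2), 7)
assert a == b == [2, 5, 9]
```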
So we go back to Tani's algorithm. Previously it had this p^{1/6} query cost using p^{1/6} memory, so immediately it has a DW cost of p^{1/3}, and using our data structure we show that the gate count is actually also p^{1/3}.

Now maybe this isn't fair to Tani's algorithm, which was optimized for queries. If we re-optimize for G cost or DW cost, we can bring both the gate and DW cost down to p^{1/4}, but this is the same as Grover's algorithm, up to polylogarithmic factors. So what we've shown is that Tani does not provide anything beyond a polylogarithmic advantage over Grover for either of these costs.

To give a little more intuition on why this makes sense, we use an argument of Grover and Rudolph. For Tani's query-optimal algorithm we've got this big quantum computer with p^{1/6} qubits, ready to apply any gate we want to any qubit we want at every time step, so we could do anything else with these qubits that we wanted. We could group them together into little arrays and run Grover on each of them. Now we've got p^{1/6} copies of Grover's algorithm all running in parallel, and if you do the math, they find the isogeny in time p^{1/6}, which is the same time as Tani's algorithm. So the hardware needed to run the query-optimal version of Tani's algorithm could be repurposed to run Grover in the same time.

But it's actually even worse, because, if you remember, a quantum computer has a classical controller, so it will have p^{1/6} classical control processors associated with all the qubits. What if, instead of running the quantum algorithm, we turn the qubits off and repurpose the controllers to run van Oorschot–Wiener? That's a lot of classical processors, p^{1/6} of them, and when they run van Oorschot–Wiener they find the isogeny in time p^{1/8}. So our conclusion is that an adversary who actually built this enormous quantum computer to run Tani's algorithm has implicitly built such a large classical computer to control it that they would be better served just using the classical computer to begin with.

That's the big conclusion of our paper, but our main contribution is the memory peripheral framework. What we want you to take away is thinking of quantum computers as peripherals to classical computers: everything you do on the quantum computer is controlled by the classical computer, and you can think in terms of those classical costs, which are now directly comparable to any classical algorithm. From this you can give a linear gate cost to memory access for quantum algorithms, and, if you're skeptical about passively corrected memory, you can use the DW cost and assign a cost even to the identity gate.

That's everything, thank you for listening. If you have any questions, please come down to the front to the microphone.

If there are no questions, we'll move to the second talk. Let's thank the speaker again.