We're delighted to have Aram Harrow, who will moderate this panel on what quantum can offer to AI. Aram is a professor of physics at MIT. He's a co-inventor of the HHL algorithm for solving linear systems on a quantum computer. He's a PI on our project on quantum and AI, alongside Kristan Temme, who is a research staff member, and Peter Shor, who is the Morss Professor of Applied Mathematics at MIT. So welcome. Thank you very much. Great. Thanks, Lisa, for the introduction and for putting this together. We're all excited about this panel. The idea is to have three quantum people and one machine learner. It's not fair. It's stacked from the beginning. We'll have a conversation about how these fields could work together. One of the last questions you got, which was not from us, was that substantial computing power is needed. We hope that quantum computing can be part of that answer. I hope so too. Hopefully we'll get to some ideas for how that might work. The way I thought we would do this: I'm going to moderate and introduce the whole thing, then maybe we'll have a little conversation between the four of us. And I want to leave a lot of time to open up to questions, because I think many of you probably have questions about how quantum computers work and what we can hope to get from them. So let me begin very briefly: whether or not you've heard of quantum computers, I put up a few equations that might be useful. They're pretty self-explanatory, but the way I like to put it is this: I would not explain the power of quantum computing in terms of entanglement, although I think that's a necessary part. I would say that quantum mechanics can be thought of as a generalization of probability theory.
So if you have a classical computer with n bits, that's got two to the n configurations, and we know already it's useful to randomize things on classical computers. Both empirically and theoretically, there's a lot of evidence that randomness helps. You could think of it as having expanded the state space. Instead of just having two to the n possibilities, a probability distribution is a vector of length two to the n of real numbers that are non-negative and add up to one. And this lets you explore very large state spaces. If you want to do high-dimensional integration, you're adding up an exponential number of points. It would be foolish to add them up one by one; you're better off sampling, which really means relying on a probability distribution to explore this exponentially large space. We take that for granted; we've learned it so well that we forget it's actually an advantage to use randomness. What quantum does is it says: we have n qubits, and instead of two to the n probabilities, you have two to the n amplitudes. These are like probabilities and they're normalized, but they're normalized in the two-norm. So they're complex numbers, and the sum of their absolute values squared adds up to one. In fact, that they're complex doesn't really matter; if they were just positive and negative reals, you'd already get the idea. And the reason that's special is that if you have a probabilistic machine, you can take different paths through the machine, and if there are many ways of getting to a particular outcome, you just add up all the probabilities that reach that outcome. On a quantum computer, it's the same thing: you add up the amplitudes that reach an outcome. But now, if some of these amplitudes are positive and some are negative, they can cancel out.
And so you have an interference phenomenon, like you do with waves. With these lights I can only see the first two rows, so I hope you're getting this. They interfere like waves. When waves interfere, they can interfere constructively or destructively. If they're in phase, say both positive or both negative, that's constructive. If they're out of phase, with opposite signs, they cancel out. And that's just a richer set of phenomena than you get from things that only add. Light interferes; paint just adds, the colors just add. So that's sort of the difference between quantum and classical computing, this richer set of possibilities. And in some cases, we know how to do really exciting things with this. Peter Shor found an exponential speedup for factoring, for example, which has a lot of important implications for cryptography. There are also less-than-exponential but still big speedups for things like unstructured search. If I have a function that has n possible inputs and I want to find which input gives me the maximum, and I can't use gradient descent, say there's no useful structure to exploit, then it sounds like I need to evaluate it n times. On a quantum computer, I can do it with about square root of n evaluations. Those are the types of things where we know, provably, there's a speedup on a quantum computer. And now the question is, how do you use this power for machine learning? We have a research project together that's investigating this. The title of this session had a question mark in it, which reflects the fact that we don't really know the answer. I'm going to stop using my role as introducer soon and pass it to the others. The last thing to say is, I have a perspective about how I think machine learning people should think about the power of quantum computers.
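[Editor's note: the amplitude-versus-probability point above can be illustrated with a small classical simulation. This is a toy numerical sketch, not a real quantum algorithm: two computational paths reach the same outcome, and while probabilities can only accumulate, amplitudes with opposite signs cancel.]

```python
import numpy as np

# Two paths through a machine reach the same outcome.
# Classical case: probabilities of the paths simply add.
p_path1, p_path2 = 0.5, 0.5
prob_classical = p_path1 + p_path2  # probabilities only accumulate

# Quantum case: amplitudes add, and they may carry opposite signs.
a_path1, a_path2 = 1 / np.sqrt(2), -1 / np.sqrt(2)
amplitude = a_path1 + a_path2
prob_quantum = abs(amplitude) ** 2  # destructive interference: outcome vanishes

print(prob_classical)  # 1.0
print(prob_quantum)    # 0.0
```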
Because the quantum computers that we're going to have soon are going to be pretty small. And even in the long run, a qubit is probably going to be more expensive than a bit. So we're going to have trillions of bits just on your laptop, and maybe millions of qubits; that would be an ambitious thing to have. So what can you do with something that is much smaller than your existing computer, but has much greater capabilities? Now, one challenge is that we don't exactly know all the algorithms that we get speedups for on a quantum computer. I gave you a few examples. There are more; in some cases we know you can't get a speedup, in some cases we know you can. But I think it's possible to leave that to the side for the moment. Suppose you have a thousand-qubit device and you know it can only run 100,000 operations. For a classical computer, that would be very pathetic. But these 100,000 operations are extremely powerful, right? Each one can process the data in ways that go far beyond what a classical computer could do, maybe even exponentially faster. But it's still not very much data. Already that tells you something useful. It sounds like you might want to apply it to a problem like cryptography, where we have a small secret key and you want to break the code. Or maybe something like quantum simulation, which we know classical computers have exponential difficulty doing. It doesn't seem like something you'd apply to classifying a billion images, because the quantum computer just can't store that much data, and it doesn't even have time to stream it all through. So I think this already tells you something if you're a machine learning researcher asking, what problem might I use a quantum computer for?
It steers you in the direction of a problem that doesn't involve very much data but is very hard. It has a small input size and yet there's some extremely difficult computation to do. And that, I think, is the opening of a conversation. Once you have such a problem, quantum computing researchers can say, well, here's what we know about how to solve it, and we can try to come up with new ideas to improve it. But I think that is a good starting point. Okay, so with that in mind, maybe we can ask Yoshua: what problems can you think of that would be important to machine learning, with not very much data, less data than a baby? So I haven't thought about this a lot. Yeah. So it's going to be a very naive answer. Yeah. Okay, so two things I've thought about while you were talking. One was planning. Yeah. Because you don't need a lot of data to start with, right? And you only need one answer that depends on the result of all those paths being somehow combined. I don't know if this is an avenue that people have been thinking about. It's like the search problem, but you don't want to consider the paths independently, like you would for searching for a key. There are lots of paths that join back in the future, and you want some function of all the paths together, right? So it sounds like maybe it could fit. And the other thing I was thinking about is a bit more crazy. I think of a quantum computer as a machine that does computation that would be very difficult to approximate with a classical computer, right? There's an analogue I've worked on, which is that a deep net does computation that a shallow net cannot imitate easily. It happens that there are functions that can be represented by a deep net which are really useful, and if you try to learn those functions with a shallow net, it just doesn't work very well.
Maybe even exponentially worse. Exponentially, in fact. We have theorems about that. So maybe something similar could be applied in the case of quantum computers. For this to work, you would need to be able to train the quantum computer to learn functions which classical computers can't do easily, but quantum computers can. I have no clue whether there's such a thing, but you might train them and see what happens. Now the question is, how do you train a quantum computer? I don't know. Maybe, Kristan, do you want to... I think that second question sounds like our paper a little bit, doesn't it? A bit, yes. We've been looking at certain networks, basically built out of circuits, that we plan to train classically, where we use the quantum computer to represent a little data in a very large feature space, essentially. I don't know whether this is what you were thinking of, but what we were focusing on are certain properties that would make querying, or getting access to, this large feature space very, very hard classically. That doesn't necessarily mean it's immediately relevant to a given set of data, but it paves the way for thinking along those lines. Is there a very efficient way of accessing properties of the data in a quantum setting that you wouldn't have access to in a classical setting? Or is there a way of representing data in a very high-dimensional space, coming in with a few points and blowing them up in a very specific way, which would be computationally too expensive for you to do classically? So why do you want to do this mapping to an intermediate space? It sounds like a kernel machine. Right, exactly. But say, for instance, suppose someone gave you a particular kernel that is able to represent features of your data that you couldn't represent classically beforehand, right?
So you could imagine that, though, as a hard-to-evaluate kernel. It's a hard-to-evaluate kernel. You can look at that... So even the dot product is hard to... Correct, this is exactly the point. There are many easy kernels; you can look at RBF kernels, you can do other things. But imagine there might be kernels where you say, I'd like to have a feature space that characterizes something in your data that is very hard to get access to. And in principle, the only way you would know how to do that classically is to write down your ginormous feature vector and compute the inner product by hand. So this is one thing that a quantum computer could essentially do for you: estimate those inner products rather efficiently. But how do you figure out what that feature space is? Yeah, that is a difficult question. So what I'm proposing is you learn it. Right. So the way I think about it is not as some feature space but more as a computing machine, right? It takes inputs and produces outputs like usual neural nets, except that it has these particular computations going on which would be very difficult to emulate on a regular computer. Right. And now the question is whether the type of computation going on there is actually something you desire for solving real-world problems. But you could test that experimentally, right? You could test that by training them. Correct. And maybe find some tasks for which they do a lot better than the equivalent, well, not equivalent, but some normal neural net which is classical and doesn't exactly replicate the other one but could potentially still do a good job on that data distribution. Correct. So in that picture you would take the circuit, essentially, take the quantum computer, and tweak it to match the data. Indeed, and of course this is very important.
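[Editor's note: the kernel idea in this exchange can be made concrete with a purely classical stand-in. The sketch below uses an ordinary polynomial feature map, chosen only for illustration; the point is the structure, that the classifier touches the feature space solely through inner products, which is exactly the quantity a quantum computer would estimate for a classically hard feature map. The `feature_map` and nearest-centroid classifier are hypothetical illustrations, not from the paper being discussed.]

```python
import numpy as np

def feature_map(x):
    # Hypothetical explicit feature map: all monomials up to degree 2.
    # A quantum feature map would live in a space too large to write down.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

def kernel(x, y):
    # The only quantity a kernel method ever needs: an inner product
    # of feature vectors. This is what the quantum device would estimate.
    return float(feature_map(x) @ feature_map(y))

def classify(x, class_a, class_b):
    # Nearest-centroid classifier written purely in terms of kernel calls.
    score_a = np.mean([kernel(x, a) for a in class_a])
    score_b = np.mean([kernel(x, b) for b in class_b])
    return "A" if score_a > score_b else "B"

a_pts = [(1.0, 1.1), (0.9, 1.0)]
b_pts = [(-1.0, -0.9), (-1.1, -1.0)]
print(classify((1.0, 1.0), a_pts, b_pts))    # "A"
print(classify((-1.0, -1.0), a_pts, b_pts))  # "B"
```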
So I guess one question related to this is, I hear about all these qubits. It would be much easier to train these things if they were continuous. Sure. Are there some continuous quantum machines? So people have thought about continuous quantum machines, and this is related to, I guess, coherent states rather than qubits. People have thought about it, but it seems really very difficult to make them actually work. You mean to build them? No, I mean you could probably build them, but it's hard to make them actually do useful things. Well, but if you train them, that's how they get to do useful things. Right, yeah. But, you know, this is also why we use discrete classical bits rather than continuous classical values. We could build continuous classical machines, but it's much easier to design a gate that takes a zero or one and another zero or one and does something, than to design a gate that takes a whole continuum of inputs and produces some continuous output that's useful. So the reason I'm proposing this idea is that I've been working on something, not quantum, but analogous, for analog circuits, right? One problem with analog circuits is that they don't do what you want, in the sense that the actual devices have very complicated physics that doesn't correspond to neat things like multiply and add and then take exponentials. They do something which is a nonlinear computation, and if you could only tweak their degrees of freedom end to end to do the right thing, you could train them.
So in my group, we designed a procedure for doing this which doesn't require analytical knowledge of the actual physics going on under the hood, only that you can measure things locally, what we call sufficient statistics, and inputs and outputs. Whatever is going on, we're going to tweak it so that it does a better job fitting the data. And so maybe something like this could be done for quantum circuits, but they would have to be continuous. So there's an interesting point about continuous variables in quantum computing. There's a particular, practical reason, as Peter said, why people like discrete structures in quantum, as for regular circuits. And that is that quantum systems are inherently very noisy. Yes. And if it is discrete, you really notice the noise in quantum systems, right? And the way you fix this is by a procedure called quantum error correction. Yes. Quantum error correction. I wouldn't try to fix it. That's the thing. You guys in the discrete world are always trying to fix the problem with some error-correcting scheme. Whereas if you think in a machine learning way, you think, no, I'm not going to fix things so that each piece does its idealized computation. I just want the overall circuit to do the right thing. And how can I tweak the different parts just a little bit so that it does? Actually, there is something, oh, no, go ahead. Well, there's something that people have been looking at. There's been a big focus, as quantum computers start to be built: we might have an algorithm paper that takes a million qubits, but we want to ask, what can you do with the 50 that we have here today, or 20, or 100, or whatever? And one thing a lot of people are thinking about is called variational quantum algorithms. OK, what is that? So what this means is basically you have a bunch of knobs that you turn to design your quantum algorithm.
So the algorithm might be: shine a laser at this ion for time T1, then shine a laser at that ion for time T2, and then adjust the frequency to omega 1. And all these things, T1, T2, omega 1, are parameters. So you run your quantum circuit, you see how well it does on your objective function, and then you take a derivative. Maybe you have an outer loop. You do gradient descent, or stochastic gradient descent, or, I'll stop listing optimizers here. It's not a trivial part of the question to get this gradient. Not at all, no. And the advantage of that is it may achieve some of what you want: if the quantum computer is imperfect and you're doing your gradients based on the observed behavior, then to some extent you're training for that particular machine and for all of its imperfections. So we may have stumbled into something like that in our goal of finding something for near-term computers. Yes, near-term computers are going to be very noisy, or at least quite noisy. And if you want to do all the machinery of error correction, then instead of 50 qubits, sorry, for 50 logical qubits, you might need 1,000 physical qubits. So for near-term quantum machines, we're not going to be able to do error correction. What we need is techniques like you're describing, where we can use the noise to get results rather than trying to fight the noise. That sounds interesting. So actually, I'll say one thing about your first proposal, planning, as well. I mentioned that there's a square root speedup for unstructured search. This is one of the first algorithms that, along with Peter's factoring algorithm, got people excited about the power of quantum computing. And since then, people have found quadratic speedups for many, many problems with little to no structure.
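[Editor's note: the variational loop described above, knobs, objective, gradient, outer optimizer, can be sketched with a classically simulated one-qubit circuit. This is a minimal illustration, not a hardware implementation: one parameter theta controls a Ry rotation on |0>, and the gradient comes from the parameter-shift rule, which evaluates the same circuit at shifted parameter values, so it would work on hardware where analytic derivatives are unavailable.]

```python
import numpy as np

def circuit_output_prob1(theta):
    # State after Ry(theta)|0> is (cos(theta/2), sin(theta/2)),
    # so the probability of measuring |1> is sin^2(theta/2).
    return np.sin(theta / 2) ** 2

def cost(theta):
    # Objective: drive the circuit output toward |1>.
    return 1.0 - circuit_output_prob1(theta)

def gradient(theta):
    # Parameter-shift rule: the exact gradient for a rotation gate,
    # obtained from two extra runs of the same circuit.
    return (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2

theta = 0.3  # arbitrary initial knob setting
for _ in range(200):
    theta -= 0.5 * gradient(theta)  # outer-loop gradient descent

print(circuit_output_prob1(theta))  # close to 1.0
```

On a real device the cost would be a noisy estimate from repeated measurements, which is why stochastic gradient methods come up in the discussion.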
And one of them: you could think of unstructured search as taking the OR of n variables. Is there a target in any one of these n locations? You could also do an AND-OR tree. Take n variables, OR them in pairs to get down to n over 2, then AND those to get down to n over 4, and so on. And what that looks like is a two-player game. I make a move, and you make a move, and so on, and at the end there's some function that tells us who won the game. It turns out quantum computers can also get a square root of n speedup for that. And that comes a little closer to your planning. This might be planning in an adversarial environment, but if you replace the adversary with some model of nature, there could well be a provable square root speedup. And then we might hope that heuristics would do even better. So I want to make a connection here with something I mentioned in my talk earlier, which is that I guess there are never going to be enough qubits to represent the full richness of the world that you would be planning in. But I think we don't want to do that anyway. We want to build AI systems that plan in just the right dimensions and aspects of the world that are relevant to the plan. Because I need to plan how to get back home in spite of the fact that my flight has been canceled, actually. Right. Does that happen? It's a true story. So maybe it's actually a blessing that we have less data for quantum computers. Well, maybe. But I think we need to rethink how planning is done. And I believe that will be needed anyway for AI. So I noticed our time just decreased a little bit. It's OK. This is the amount of time including questions. I think it's without questions. Without questions. OK. So after this, we have another 10 minutes for questions. I kind of wanted a lot of time for questions. Can we just start doing some questions and then we can jump back? So hopefully you guys have been typing them in.
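[Editor's note: the AND-OR tree mentioned above can be spelled out as a tiny game-tree evaluator. This is a classical sketch for intuition only: OR layers are one player's turn, AND layers the other's, and the leaves are game outcomes. A classical evaluation may have to inspect on the order of n leaves; the quantum result referenced in the discussion decides the root with roughly square root of n leaf queries.]

```python
def evaluate(node, level_is_or=True):
    # Leaves are booleans (who wins); internal nodes alternate OR/AND.
    if isinstance(node, bool):
        return node
    combine = any if level_is_or else all
    return combine(evaluate(child, not level_is_or) for child in node)

# Depth-2 game: I pick a subtree (OR), then you pick a leaf (AND).
tree = [[True, True], [False, True]]
print(evaluate(tree))  # True: the first subtree is a win for the OR player
```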
Do you want me to answer this question? Oh, I feel like this is 20 minutes for questions. OK, go. Oh, then maybe we should just keep going. OK, we'll use it up. So questions will begin in four minutes and 43 seconds. Prepare yourselves. So for the planning, I guess the question is, how does this work? I know so little about quantum things. How would you do something like what we discussed, where you're basically considering an exponentially large set of paths through some state space? Those paths intersect, in the sense that they come back to states that other paths visit. And there are some quantities that are going to be computed at each of the nodes on this graph. Right, and even on a classical computer, there's a combinatorial explosion of paths. Right, so on a classical computer, the problem is that the number of nodes is exponential and we don't want to consider all of them. I should say, you know, I opened with this idea of wishful thinking. Yes. The wishful thinking is that the quantum computer can explore exponentially many things in polynomial time. We know that generically this isn't true. Classical computers, we think, can only do this when there's some structure. Right, if we think that P is not equal to NP, we think you can't do it in the worst case, but we have these heuristics that tend to work pretty often, a lot of the time. And I think something similar holds in quantum computing: it will not always work, but this gives us an idea of where to look for some structure that we can get a handle on. So current methods used in machine learning involve sampling to do this. And then a lot of the intelligence goes into sampling just the right paths. Importance sampling and more. More intelligent than importance sampling. Right, but in that direction. Yes, yes. Right, right.
Basically, you're learning a policy that figures out which direction should be explored. Right, and I guess the question is, what I wonder is if there are models, families of models, that are not being looked at because they're too computationally expensive. It seems like there are a lot of things in machine learning where you train a fairly simple class of models, and it does well because you have a large amount of data to feed in. But I wonder if for quantum we might look for a richer class of models, maybe harder to train on a classical computer, ones that we've maybe even given up on training on classical computers, but where in return we can't train on as much data. So the only way I could see this working is if the quantum computer is not used for doing the whole training. It's only used for one particular computation, which maybe happens per example or per mini-batch. That's the advantage of planning. You actually need to plan at each time step, and that itself is an expensive computation. If we need the quantum computer to look at the whole data set, then it's hopeless. So let's not do that. Right. But it seems classical algorithms have already thought a lot about that, about how you sample mini-batches, which are mostly random. Right. But maybe as you learn, that tells you with which weights to do the random sampling and so on. So maybe the thing that's closest to what you're talking about is active learning methods, or even exploratory policies, where you decide where in the state space you're going to go and get examples. Right. I guess one thing that we may want to think about is how a big classical computer can work with a small quantum computer, where the big classical computer is the custodian of a large amount of data.
Maybe it can interpret the outputs of the quantum computer as saying, oh, this is the type of data which I need to feed into the quantum computer next to train it. Well, I mean, the other option is to combine a classical part and a quantum part. So let's go back to the hypothesis, and I have no clue if it's true, that there are some computations that are useful and can only be done efficiently by a quantum computer. Maybe we could extend this hypothesis: there are some functions of which one part, which is small, needs to be done by a quantum computer, and the rest can be done classically. And then if we could train the quantum computer end to end, we could also train it end to end with some other pieces that are classical. And then it might work, even though you only have a very small quantum computer. Yeah, that would be good to look at. All right, we have questions. Oh, that's good. What can AI do for quantum? Who wants to take that one on? OK, I can. So there are various places in the actual experiments, for instance, there are certain tasks where you have to discriminate measurement data, where you have to tune up gates in a particular way. And there are certain tasks that need to be automated. That is, for instance, a very good place for a classical AI algorithm to help out with what the computing machine actually does. For instance, you could literally use binary classifiers on the measurement data that you get from a quantum circuit to predict whether the qubit state was 0 or 1 when you measured it. You can come up with circuit optimizers that take data from previous runs and previous tune-ups and gradually tweak the circuit and the device in the next run. So there's a lot of use for classical AI techniques when you want to control a quantum experiment and when you read out data from quantum experiments.
So that's actually being done right now. So that's very useful. So is it mostly predicting a few real values, or are you predicting more complicated distributions? So it varies. There are multiple cases. The simplest case is just plus or minus 1: is the qubit 0 or is it 1? It gets more intricate because in these quantum devices you have slightly noisy measurements. So although your quantum state was in a given state, and you measured, and the state collapsed and produced some outcome, there's crosstalk and noise among different qubits. And it's really helpful in that setting to use classical AI techniques to filter the classical errors and the classical noise that you get when reading out the qubits at the end, after the quantum computation has happened. So can you also use machine learning for error correction? For decoders. So people have thought about using machine learning to come up with decoders for error correction, indeed. If you go back to the quantum coding setting, it works in a particular way, where you measure certain syndromes, and then there are ways of using machine learning techniques to learn decoders. There is also work on that. And how do you get the ground truth for that? The way you would do it in that setting is you prepare particular logical states, you know what you want to get out, and then you train your decoder on that. I should mention that the word quantum in some contexts is just used to mean quantum computing. But in my department, everyone is doing quantum something or other. And so there was recently a meeting in the physics department where everyone who had some use or interest in machine learning came together.
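[Editor's note: the qubit-readout classification described above can be sketched with simulated data. All the numbers here, signal means, noise level, are hypothetical illustrations: a measurement yields an analog signal whose distribution depends on whether the qubit was |0> or |1>, and a simple threshold classifier (the most basic "binary classifier" in this setting) separates the two.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated analog readout signals (hypothetical units):
# |0> outcomes centered at 0.0, |1> outcomes centered at 1.0, Gaussian noise.
sig0 = rng.normal(0.0, 0.3, size=5000)
sig1 = rng.normal(1.0, 0.3, size=5000)

# For two equal-variance Gaussians, the optimal decision boundary
# is the midpoint between the two means.
threshold = (sig0.mean() + sig1.mean()) / 2

def classify(signal):
    return 1 if signal > threshold else 0

accuracy = (np.sum(sig0 <= threshold) + np.sum(sig1 > threshold)) / 10000
print(round(accuracy, 2))  # well above chance for this noise level
```

Real devices add crosstalk between qubits, which is where richer learned classifiers help over a simple per-qubit threshold.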
And people were using it to analyze LHC data, to predict material properties, to model nuclear physics. It's kind of amazing how almost every branch of physics, and physics is mostly quantum physics, is using AI in some way. So I think even beyond quantum computing, there's a lot of interest in using AI for quantum physics applications. So I guess we don't have to always take the top question. I mean, they're pretty close. And anyone can grab any of these questions at any time. Should we talk about rethinking the computing stack? OK. Peter, you want to take that? Yeah. Well, the classical computing stack is very well worked out. For quantum computers, we're part of an NSF grant that is going to think about how you build a quantum computing stack. And one of the things we've realized is that the different pieces of the stack are much more tightly connected than they would be in a classical computer. And somehow, you have to design the stack to take advantage of this. I guess one interesting part of it is that there's a lot of classical computing circuitry that goes into it, like the error correction Kristan mentioned, because you have to measure, diagnose, and correct the errors, and feed that back in as the computation is running. And maybe your quantum computer is at one or two kelvin, so you have classical circuitry that has to communicate with the room-temperature computer. There are a lot of challenges there. By the way, it's apparent that machine-learning-based software also doesn't really fit well in the standard software engineering framework, because, for example, the pieces interact a lot with each other. They can do the wrong thing, and yet overall it looks like everything is fine. The traditional ways of testing and independently debugging pieces don't really work well with machine learning.
So there's also an effort to rethink the computing stack for machine learning, which is not trivial. Yeah, and one of the more difficult things about the quantum computing stack is debugging, because if you measure something, you've affected your computation. So you really need to make these very careful measurements that measure certain things without really affecting the computation, which is basically a piece of quantum error correction. And it's really difficult to figure out how to do that in all cases. It's already an issue with quantum experiments. If you want to build a magnetic field detector that's extremely sensitive, often the best thing to diagnose it with is the detector itself. So that's definitely a new challenge in building quantum computers. Should we talk about different approaches? Sure. So there are different approaches to quantum computer development, I guess IBM, Oxford, photonics, et cetera, and the question is, does one type have any real advantage for AI over a different type? Well, for classical computers, it really doesn't matter what the bottom layer is; they can more or less all do the same thing. And the same is more or less true for quantum computers. The big difference, I think, between the architectures is how easy it is to get a piece of information from one part of the quantum computer to another. There are some architectures where, at least for small computers, everything is very well connected, whereas for some superconducting architectures, you need to send a piece of information to the next qubit, and the next qubit, and so on all the way along the chain before you can get it from one side of the computer to the other. These differences are going to have a big effect on which algorithms are best for a given architecture. I'm not sure how that's going to interact with AI. It may also be that for a large quantum computer, some of these differences will wash out.
Because of the requirements of quantum error correction, which we were talking about before, it's a little bit like a rocket ship: 95% of its mass is the fuel. A big quantum computer may spend 95% of its time correcting errors. In that sense, all the architectures will have to come to terms with that, and the resulting connectivity will look very different from the underlying physical one.

So there was a question: can quantum computing be applied to Monte Carlo approaches for Bayesian inference and optimization? It's related to my earlier question about planning; a similar kind of question. Sure. So, as Aram explained earlier, there's a method called amplitude amplification. It specifically makes use of the fact that you have negative numbers as well as positive numbers. And that can be used in many classical Monte Carlo routines, providing, say, an immediate quadratic speedup in burn-in and mixing times. So there's a direct way: if you have a classical Markov chain that you want to run for various tasks, there's a way of immediately using a quantum computer to speed up, A, the burn-in time, and B, to also get a quadratic improvement in the sampling complexity.

So one question I have about this: let's say you want to do just a simple Monte Carlo integration. Is the integrand going to be the thing that the quantum computer computes? Because in general, it's going to be hard to put into your quantum computer, no? So it depends. All of these basically require a really large computation. As Aram and Peter mentioned earlier, there are goals for the near term, and this type of speedup is something you would look for further out, when we have large quantum computers. Correct, when you have a reasonably sized machine.
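As a toy illustration of the amplitude amplification idea just mentioned, where signed amplitudes cancel and reinforce, here is a small classical simulation of a statevector undergoing Grover-style reflections. The array size, marked index, and variable names are illustrative, not from any real quantum library:

```python
import numpy as np

# Amplitude amplification on a simulated statevector: a sign flip of the
# marked amplitude (oracle) followed by a reflection about the mean
# (diffusion) boosts the marked item in ~(pi/4)*sqrt(N) iterations,
# versus ~N classical guesses.
N = 1024
marked = 123
psi = np.full(N, 1 / np.sqrt(N))       # uniform superposition over N items

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    psi[marked] *= -1                  # oracle: negate the marked amplitude
    psi = 2 * psi.mean() - psi         # diffusion: reflect about the mean

prob_marked = psi[marked] ** 2
print(iterations, round(prob_marked, 4))  # 25 iterations, probability near 1
```

The interference is visible in the diffusion step: amplitudes with the "wrong" sign partially cancel against the mean, while the marked amplitude is reinforced.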
But then the answer to my question is yes: we would have a quantum computer that implements f, the integrand that we want to sample from. Right, you have a circuit that essentially queries that function, and then there are various techniques to boost the probability of seeing certain events.

So just to add to this: if you want to sample from something, there's a naive approach, which is rejection sampling, and then something a lot smarter, like Metropolis, Markov chain Monte Carlo. When I said a quantum computer can speed up the OR function by a square root, a similar argument can speed up rejection sampling by a square root. That's not so impressive if the leading classical algorithm looks like MCMC. But with more effort, you can also quadratically speed up MCMC. And that's kind of a big goal in a lot of quantum algorithms research: at least as a baseline, whatever the leading classical method is, we hope to get a quantum square-root speedup over it.

There's one more thing you can do with this, which I think we understand a little less well. At the end of MCMC, you get a sample from your target distribution. At the end of the quantum process, when it works, you get something called a Q-sample. (A lot of our nomenclature, our names, our institutes: there are Qs all over the place.) What a Q-sample is: I said amplitudes are numbers whose squares add up to one. So you take your target distribution and just take the square root of all the probabilities, and you call those the amplitudes. And what good is that? Well, for one thing, it can give you an ordinary sample: if you measure it, it just yields a sample, so it at least gives you that. But there are some things you can do that are exponentially more powerful. For example, if I have samples from a distribution and I want to know whether it is close to the uniform distribution, that's related to the birthday problem.
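The Q-sample construction, amplitudes as square roots of probabilities, can be mimicked classically in a few lines: encode a distribution p as the vector sqrt(p), check that "measuring" reproduces p, and compute the overlap with the uniform Q-sample, which is the quantity a one-copy swap test would estimate. This is a hedged numpy sketch with an arbitrary random distribution, not real quantum code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# A Q-sample encodes a distribution p as the amplitudes sqrt(p_i),
# which are automatically normalized in the 2-norm.
p = rng.dirichlet(np.ones(n))
qsample = np.sqrt(p)

# "Measuring" in the computational basis yields ordinary samples from p.
draws = rng.choice(n, size=50_000, p=qsample ** 2)

# One-copy uniformity test: the overlap with the uniform Q-sample
# equals 1 exactly when p is uniform, and a single swap test
# estimates it, with acceptance probability (1 + overlap^2) / 2.
uniform = np.full(n, 1 / np.sqrt(n))
overlap = float(qsample @ uniform)       # <u|psi>, between 0 and 1
swap_accept = (1 + overlap ** 2) / 2
print(round(overlap, 3), round(swap_accept, 3))
```

Classically this comparison needs the full vector p; the quantum claim is that one physical copy of the Q-sample state suffices.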
For that uniformity test, with ordinary samples I need about square root of n samples. But if I have a Q-sample, I can do it with one copy; I can get a constant-bias answer to that question. And there are many other things like that. If I want to know whether two Q-samples are close or far apart, I just need one copy of each, instead of a number of samples that grows with the size of my sample space.

So in general, can you take the average over a large number of samples in a more efficient way than doing them individually and then adding? Say you can Q-sample from many things; can you then Q-sample from their average? In Monte Carlo, we normally just take a mean over many samples, right? That's the thing we care about, not necessarily the individual samples. You can also estimate means with quadratically fewer samples. And I think with this type of thing, people have laid the foundations, showing you can do these building blocks, but I don't think we've used them yet; people have not quite figured out how to put the building blocks to work. It would be really cool if we could connect those better to applications.

We keep skipping this one. No, no, we answered the purple one. Should we answer the floating-point one? I mean, qubits can represent whatever you want; qubits can represent a JPEG if you want. But the question is, can you have a continuous version of a qubit? So there are two ways of doing floating point. One of them is: let's say a floating-point number on my classical computer is 64 bits. I can allocate 64 qubits to it, and then it will be in a superposition of different values. It's just probably not the best use of my qubits, but I certainly could do that. Then, as we talked about earlier, there are quantum computers with continuous degrees of freedom. Error-correcting them, as Peter said, is harder, but not impossible.
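A toy classical analogy for why discrete logical values make error correction easier, and continuous ones harder, as just noted: a noisy bit can be snapped back to the nearest legal value, while a legal continuum gives no reference point to correct toward. This is a hedged numpy sketch with a made-up noise level:

```python
import numpy as np

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=1000).astype(float)

# Small analog drift on each stored value.
noisy = bits + rng.normal(0.0, 0.1, size=1000)

# Discrete case: 0 and 1 are the only legal values, so we can
# "push 0.98 back to 1" by rounding to the nearest legal value.
restored = np.clip(np.round(noisy), 0, 1)
print((restored == bits).mean())  # at this noise level, essentially all corrected

# Continuous case: if every value in [0, 1] is legal, a drifted 0.57
# cannot be distinguished from an intended 0.56 or 0.58, so the same
# rounding trick is unavailable without extra structure (e.g. a prior).
```

Quantum error correction adds the twist that the "rounding" measurement must not disturb the encoded information, which is what the earlier discussion of careful measurements was about.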
The Yale group, with their superconducting qubits, is really pushing hard in this direction. But what they're doing, at the lowest level, is using continuous degrees of freedom with some error correction to encode logical qubits, which are then used in the calculation. I see.

The issue, and it's the same for analog computing, is this: if zero and one are the only allowed values and you find your computer says 0.98, you can push it back to one. If any number between zero and one is legal and your computer says 0.57, how are you to know whether it should be 0.56 or 0.58? Well, you might have a prior. Right, and there may be things you can do. Also, maybe AI algorithms are inherently a little noise-resilient. So maybe what I should say is not that you can't use floating point, just that we have not come up with good answers yet on how to use it.

So, shall I take this one? In July, the 18-year-old Ewin Tang proved that classical computers can solve the recommendation problem nearly as fast as quantum computers. Thoughts?

Well, my thought is, we had a very clever quantum algorithm for solving the recommendation problem, but it turned out the recommendation problem wasn't anywhere near as hard as we thought, and classical computers could do it. So I don't think this shows that, in general, classical computers can solve problems as quickly as quantum computers. I think it shows that we should be more careful about claiming certain problems are hard just because we have found a quantum algorithm for them. And this has happened before. Ed Farhi found this QAOA algorithm and showed it could do something that wasn't known how to do on classical computers, and then a bunch of, I think half a dozen... I think ten.
Ten really smart computer scientists found an algorithm for doing it on classical computers. Yeah, actually, what happened is, he asked them first: what is the best approximation ratio you can get for this particular combinatorial optimization problem? And they said the ratio is such and such. Then he came up with a quantum algorithm that beat it, they read his paper, and Scott Aaronson blogged about it. And they said, oh, okay, well, we hadn't really thought that hard about the problem, and then they found a way to match, actually to beat, the quantum algorithm's performance. Then the quantum algorithm was improved, and I think now they're tied. So, yeah, there could be a fast classical algorithm for factoring; maybe someone knows it already. You never know.

What can a qubit do? Well, I think I already took a stab at that with the amplitudes. I'll just say that it's not so much one qubit that's interesting, but n qubits; I don't know if Peter or Kristan wants to say more. So, yes, I think what is distinctive about a qubit is exactly its ability to interfere. A qubit by itself is boring; having n qubits gets really interesting, because then you can start interfering many more things. Coming back to what Aram said, it's essentially the ability to have positive and negative amplitudes that interfere, cancel each other out, and amplify certain things. For two qubits, that's not very interesting. For n qubits, that gives you two to the n numbers, and if you want to amplify something, it becomes very interesting indeed.

QUBO optimization. So I think this maybe refers to the D-Wave problem, the problem that the D-Wave machine solves, which is NP-complete: just minimizing a quadratic function of n bits. And the answer is that we have heuristic algorithms for this. What do we have? Two things.
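For concreteness, the QUBO problem just described, minimizing a quadratic function of n binary variables, can be written down and brute-forced in a few lines. The instance below is random and purely illustrative; brute force is exactly the exponential baseline that the quantum square-root speedups and heuristics discussed here aim to beat:

```python
import itertools
import numpy as np

# QUBO: minimize x^T Q x over binary vectors x in {0,1}^n.
rng = np.random.default_rng(1)
n = 10
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                        # symmetrize the quadratic form

# Brute force checks all 2^n assignments; a Grover-style search would
# quadratically reduce the number of evaluations needed.
best_x, best_val = None, np.inf
for bits in itertools.product((0, 1), repeat=n):
    x = np.array(bits)
    val = float(x @ Q @ x)
    if val < best_val:
        best_x, best_val = x, val
print(best_val)
```

At n = 10 this is instant; at n = 100 the 2^n loop is hopeless, which is why heuristics like simulated annealing and the adiabatic algorithm are the practical comparison point.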
First, we have a provable square-root speedup over many classical algorithms, not just brute-force search: if you want to do a backtracking tree search on a classical computer, you can get a square-root speedup of that too, and other classical algorithms we can likewise do quadratically faster. Second, we have heuristics like the adiabatic algorithm that appear qualitatively different from classical analogs like simulated annealing. So we would expect the instances on which they do well to be different, not necessarily a superset or subset of the ones on which classical methods do well. We don't think they get an exponential speedup in every case, but they may get it in some cases. In part this awaits smarter theorists, but really, as with classical computers, I think it awaits empirical testing.

So I'll take the last one: when will quantum computing be able to break current cybersecurity encryption? What we need for breaking RSA and some other things is a few thousand logical qubits. Since we already have maybe 20 or 50 qubits, it might not sound like a few thousand is that far away. But today's qubits are noisy, and to get a few thousand logical qubits you need a few hundred thousand physical qubits. And for the factoring algorithm you need them to be really fast, because you want them to factor in, oh, less than 30 years. So if you believe in a quantum version of Moore's law, it's still probably a couple of decades away. And we don't really know whether the quantum version of Moore's law is actually working yet, because we just have a few points on the curve and you draw a line through them. It looks like it might be, but who knows? Right, though it could be that with a modular architecture, once you build a 20-qubit computer that has three links to other computers, it gets to the point where someone could throw a lot of money at the problem and quickly jump up.
But oh, we have a question from Charlie Bennett, with nine seconds left. And the answer is yes, you can use Grover search, because Grover takes a lot of iterations, and in the inner loop you don't need to store the whole data set; it can just stream through the quantum computer. I'm writing a paper about this now. I'll take that back: negative time. Okay, thanks everyone. Thank you.