I really love the topic of the conference and I wasn't sure what I'd talk about, so of course I wrote down the most generic title I could think of that somehow spoke to the conference. What I'm really going to talk about is a few of my favorite things. My most favorite thing is quantum error correction; that's what I think about most of the time, and it's not on this list. I'm going to talk about my other favorite things, which are Ising two-qubit gates, composite pulses, and the Bernstein–Vazirani algorithm. I wanted to make a list of not-my-favorite things so I could talk about how much I don't like Grover, and how I'm not a big fan of diamond distance, but I decided not to do it. What's our real problem? The way I think about it, our real problem is that we have a small number of qubits and we don't have very good gates. Just last week, this number here was 45 or so; the number of qubits that we can classically simulate continues to grow. We know that to do algorithms we not only need enough qubits, we need enough effective gates. Quantum volume is one way you can measure that. And then if we think about specific algorithms, there are of course classical tricks we can apply, and that leads to a kind of swoop up like this, excluding more and more space from where we can show some sort of quantum supremacy or quantum advantage. I work a lot with ion traps, so here's a brief idea of where ion-trap quantum computers are at the moment. At one limit, there are these beautiful Penning-trap experiments from NIST which have hundreds of ion qubits but limited control. There's a recent paper from Maryland where they have 53 ion qubits and do a transverse Ising model simulation. At full control there are about 10 or so qubits, both at Maryland and in Innsbruck. And for two-qubit systems, there are really good gates both at NIST and at Oxford.
And actually the Oxford result is really quite amazing. In 2015 they had a gate which was 99.9% good at 100 microseconds, and now they have a gate which is 99.9% good at 2 microseconds. So there are no more qubits and no further improvement in gate fidelity, but there's a fiftyfold improvement in speed. The thing is, we'd of course like to somehow reach this region. First we want to get to a place where we can do some of these variational or approximation algorithms, which we imagine somewhere around here. We'd like to maybe get out to factoring and the really hard phase-estimation kind of quantum chemistry problems, which are somewhere out here. Because I'm interested in error correction right now, my error correction goal is basically here, which is looking pretty good; I just have to pull these lines together. But I want to talk about gates. Here, number of qubits is clear; effective number of gates is obviously a fudge, and the fudge is in the word "effective." The simplest way to think about the effective number of gates is to say, well, if every gate has the same error, one over the error tells me roughly how many gates I can do, right? So which error is a good question: do you think about fidelity loss, or diamond distance? Do all the errors matter? Actually, I thought Eddie had a good point, which is that if I'm running one of these optimization things and my state's a little bit off, I can still get a measure of the value I want even though the state's not exactly correct. A paper which really inspired me was this one about whether you can actually trust quantum simulators. In that paper they looked at a transverse Ising model, and they said: imagine you're doing an experiment on a transverse Ising model and you have some error, some fractional error basically randomly shifting your parameters.
Now, because in a transverse Ising model you have this really nice phase boundary from, say, a paramagnetic phase to a ferromagnetic phase, if you're an experimentalist you take a point here, a point here, a point here, and a point here, and on this whole plot you claim success because you see paramagnetism on the left and ferromagnetism on the right. If you wanted to do something more quantitative, you could try very carefully to measure where this critical point is. And what you see is that even with these rather large errors, you measure the critical point at about the same place. I think that has to do with the robustness of phase transitions, which is a sort of error correction in itself. So, motivated by this conference, I asked: what if I wanted an exact answer, where there's no room to fudge anything? It's a bit string which has to be one particular bit string. I immediately thought of my favorite thing, Bernstein–Vazirani. In Bernstein–Vazirani there's some oracle which outputs this dot product of x and s, and our goal is to find s. Classically, we can just put a one in at each input bit in turn, with zeros everywhere else, and that will read out s after n queries. Quantum mechanically, we can do it in one step. And there's only one s; there's not a fractional s, so we need this one step. Now of course the problem with oracle problems is that nobody's going to give you this oracle, so we need some way to implement it. One easy way to implement this oracle is to have CNOTs between the data register here and this target register, and the computer randomly picks s and randomly programs your quantum computer to apply these things. So Bernstein–Vazirani is great: we just put a bunch of Hadamards around it and use our good friend phase kickback.
And when we measure out here in the Z basis, we just get out s in one shot. So the question I wanted to ask was: how does this interact with noise, with near-term noise? How do we get this s? Are there tricks we can apply to make things better? Just for concreteness, you could assume that all of the gates are bad, but in most experiments nowadays, two-qubit gates are worse. In ions in particular, two-qubit gates are much worse than single-qubit gates and measurements, by about two orders of magnitude. The two-qubit gates are good, like 10^-2 or 10^-3 error, but the single-qubit gates and the measurements are much better. So for the worst-case scenario, we want to pick the s which triggers all of these CNOTs. Now, people have been doing experiments on this already. For example, there's this five-qubit implementation, which was a paper by the Maryland group with Microsoft, actually using the IBM Quantum Experience. One thing which is nice about the Bernstein–Vazirani algorithm as drawn here is that it's very good for a fully connected implementation, but also for a wheel-connected layout. If I had a two-dimensional grid, then the cost of doing these distant CNOTs becomes kind of an interesting thing. Actually, since we're on this plot, I also wanted to point out what I like about this: if you can't do Bernstein–Vazirani, what can you do? I think these Clifford-gate circuits are actually a great way to test things, because we know what the answer should be; we don't have to worry about some distance between estimates of random output distributions. And it's also a really great way to test reliability. Ideally you would think every CNOT has the same error, but in your system they may not.
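The one-query behavior described above is easy to check with a small statevector simulation. Here is a minimal sketch in plain NumPy (not tied to any particular hardware or to the experiments discussed here), applying the oracle in its phase-kickback form, where each basis state |x⟩ picks up a phase (-1)^(s·x):

```python
import numpy as np

def bv_oracle_phases(s):
    """Phase-kickback form of the BV oracle: |x> -> (-1)^(s.x) |x>.
    Equivalent to CNOTs into a |-> target for every 1-bit of s."""
    n = len(s)
    s_int = int(s, 2)
    return np.array([(-1) ** bin(x & s_int).count("1")
                     for x in range(2 ** n)], dtype=float)

def bernstein_vazirani(s):
    """One-query BV on the data register: H^n, oracle, H^n, Z-measurement."""
    n = len(s)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = np.zeros(2 ** n)
    state[0] = 1.0                       # |00...0>
    state = Hn @ state                   # uniform superposition
    state = bv_oracle_phases(s) * state  # single oracle query
    state = Hn @ state                   # interference maps state to |s>
    outcome = int(np.argmax(np.abs(state) ** 2))
    return format(outcome, f"0{n}b")

print(bernstein_vazirani("1011"))  # recovers s = 1011 in one query
```

In the noiseless case the final state is exactly |s⟩, so the single Z-basis measurement is deterministic; a classical query strategy would need n oracle calls for the same information.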
And you can actually see that by testing over distributions of different s's. In this paper they compared an ion-trap setup and a superconducting setup. What you see, with the s encoded into the oracle on one axis and the estimated s on the other, is that they're both good: they both have these nice linear progressions, and the ion trap has slightly better fidelity. What's interesting to me, though, is that if you look at the deviations in these plots from how the fidelity would look if all of these things were actually independent, you can learn a little more about your errors. I spent most of my time thinking about this: here there's an error where, when the classical detector goes high, there's some probability that its neighbors will also go high from classical crosstalk, and you can actually see that if you look very carefully at these dips. All right. Now, the other thing I really like about these Clifford circuits, and we use it all the time when we think about quantum error correction, is the idea of fault paths. In a Clifford circuit, if I have Pauli errors, I can just walk each Pauli error through the whole circuit. So we did this. This was built for quantum error correction: we made a slightly different implementation of the fault path, which we call the fault path tracer. We're currently working on a tensor network implementation of the fault path tracer, which allows us to look at some larger problems. But what we did in this paper is apply it to Bernstein–Vazirani. Then we can calculate, if you have Pauli noise on your gates, exactly what the output probabilities should be: not only the probability of s, but all the probabilities. And you can use that as a kind of witness of whether your quantum computer has this sort of depolarizing noise or some other kind of coherent noise that shows up.
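The fault-path idea, walking a Pauli error through a Clifford circuit, can be illustrated in a few lines. To be clear, this is not our actual fault path tracer, just a minimal sketch using the standard propagation rules (H swaps X and Z; a CNOT copies an X from control to target and a Z from target to control), ignoring signs, since only bit flips at the final Z-basis measurement matter:

```python
def propagate(n, fault, circuit):
    """Walk a Pauli fault through a Clifford circuit, tracking only the
    X and Z components per qubit (signs dropped; we only care which
    measured bits flip).
    fault: dict qubit -> 'X' | 'Y' | 'Z', injected before `circuit`.
    circuit: list of ('H', q) or ('CX', control, target)."""
    x = [False] * n  # X component on each qubit
    z = [False] * n  # Z component on each qubit
    for q, p in fault.items():
        x[q] = p in ('X', 'Y')
        z[q] = p in ('Z', 'Y')
    for gate in circuit:
        if gate[0] == 'H':
            q = gate[1]
            x[q], z[q] = z[q], x[q]   # H exchanges X and Z
        elif gate[0] == 'CX':
            c, t = gate[1], gate[2]
            x[t] ^= x[c]              # X on control copies to target
            z[c] ^= z[t]              # Z on target copies to control
    return x  # a final Z-basis measurement on qubit q flips iff x[q]

# Tail of a worst-case (s = 11...1) BV circuit: n data qubits, target = qubit n.
n = 3
bv_tail = [('CX', q, n) for q in range(n)] + [('H', q) for q in range(n)]

# A Z error on the target before the CNOTs cascades up into every data bit:
print(propagate(n + 1, {n: 'Z'}, bv_tail))
```

Running this reproduces the talk's examples: a Z fault on the target flips every data bit, while a Z fault on a single data line flips only that bit after its Hadamard.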
But the main point is: since I'm measuring here in the Z basis, what would make this go wrong? It would go wrong if there was an X error right before the measurement, which is equivalent to the measurement giving the wrong value; or if there was a Z error before this Hadamard, which then transforms into an X; or a Z error that came from this CNOT; or a Z error that happens below the CNOT and propagates up, right? So I can just walk back through all of the places where this bit of s could go wrong. Here are some examples. If there's an X error here, it flips that bit. If there's a Z error out here, it flips the bit. And if there's a Z-type error, which could be, say, Y, down on this bottom line of these CNOTs, it is of course the most damaging error, and it will flip multiple bits of our solution. Now, I'm going to move to the model where the only thing that has errors is the CNOTs, because again, I think for ion traps that's a very reasonable model. And this brings me to my list of least favorite things: the two-qubit depolarizing channel. I don't understand where this comes from. Well, that sounds too harsh: I do understand that I'm taking an unbiased sample over all of this two-qubit Pauli space. But I can't write down a physical model which gives me that naturally; in all of the physical models I've looked at, it has never come up naturally. Now, the single-qubit depolarizing model I think is really quite good, because that's basically the limit where I've lost track of my angles, of both the phase and the theta. That makes perfect sense. But the two-qubit depolarizing model doesn't make any sense to me, even though it's the most common thing used theoretically.
So what we notice is that XI and XX actually do not affect the algorithm. There are two out of these 15 errors that reduce the fidelity of the gate but do not reduce the algorithm output. There are these four which just flip that one bit of s. And then there are these four which, because they trigger those cascading CNOTs, can flip multiple bits of s. Okay, so now a good question, of course, is: well, if I don't believe in this two-qubit depolarizing noise, what do I believe? I think in most systems, the strongest two-qubit error right now (eventually we'll make it lower, and then it will be something else) is aligned along the axis of the gate. So I think the key two-qubit error for a CNOT gate is in fact this ZX kind of error, which flips one bit. In some sense, if there's a probability of this happening, it's a little bit worse, because the two-qubit depolarizing channel has a couple of errors that don't matter. On the other hand, you don't get any of these really disastrous errors that flip multiple bits. Okay, so why do I think this is true? This is where we start to talk about Ising coupling gates. The first quantum computing experiments I did were with NMR, with ZZ-type coupling gates. Now we work a lot with Mølmer–Sørensen-type gates in ion traps, and there it's an XX gate. But everything's the same up to a rotation. Basically, this Mølmer–Sørensen gate on these two qubits generates an Ising-type rotation, which is perfectly good for things like robust phase estimation; this is the knob that's in the lab. You basically turn this theta and you see what you get. And around the Mølmer–Sørensen gate there are these single-qubit gates whose job is to transform this entangling gate into the actual CNOT unitary.
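One quick way to see this classification is to inject each two-qubit Pauli on (data qubit, target) right after one CNOT of the worst-case circuit and propagate it through the rest. This is my own simplified sketch of that bookkeeping (qubit labels and circuit shape are illustrative, not the paper's code); it confirms the pattern described above: pure X components are invisible, a Z component on the control flips just that bit, and a Z component on the target cascades into every later bit:

```python
from itertools import product

def bv_flips(n, k, pc, pt):
    """Number of bits of s flipped when Pauli pc (on data qubit k) tensor
    pt (on the target) strikes right after the k-th CNOT of a worst-case
    (s = 1...1) BV circuit on n data qubits. Signs are ignored."""
    x = [False] * (n + 1)
    z = [False] * (n + 1)
    x[k], z[k] = pc in 'XY', pc in 'ZY'
    x[n], z[n] = pt in 'XY', pt in 'ZY'
    for c in range(k + 1, n):   # the CNOTs still to come
        x[n] ^= x[c]            # X on a control copies down to the target
        z[c] ^= z[n]            # Z on the target copies up to each control
    # the final Hadamards turn data-qubit Z components into bit flips
    return sum(z[q] for q in range(n))

# Bucket all 16 two-qubit Paulis by how many bits of s they flip
buckets = {}
for pc, pt in product('IXYZ', repeat=2):
    buckets.setdefault(bv_flips(5, 2, pc, pt), []).append(pc + pt)
for nflips, paulis in sorted(buckets.items()):
    print(nflips, paulis)
```

The zero-flip bucket contains the X-only errors, the one-flip bucket the Z-on-control errors (including ZX, the gate-axis error discussed above), and the multi-flip buckets the cascading Z-on-target errors.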
Now, as the last talk said, the key thing in the lab is that sometimes you don't know; yeah, as Shelby said, you don't know what the angle is. So I think of the error of this gate as a small misrotation. And I'm going to imagine that the small misrotation either changes from gate to gate fast enough, or just randomly between gates, so that I can't use these characterization techniques to get rid of it. I've done everything I can to get rid of it, but there's still a small little epsilon. Now, because these single-qubit gates are perfect, all they do is transform that Pauli operator into this ZX operator, right? And that's why I think two-qubit gate errors are aligned with the gate axis in some rotation space. So imagine this epsilon is totally random: it basically ends up becoming a stochastic Pauli model over ZX. We showed recently that this actually has some impact on error correction. This is a funny code we call the bare [[7,1,3]] code. Andrew Cross pointed it out to us, because he said that if you had single-qubit errors on your gates in a circuit model, it would be fault tolerant to single-qubit errors. And then my student Muyuan led the charge to see what happens if we add in two-qubit errors. What happens is that if we add the depolarizing channel, because it's not a CSS code, the X and Z channels aren't clean. As a result, those errors which can propagate and flare up later can cause a lot of damage. But if you grant us our stated assumption, that we can have this anisotropic error, then it's fine. And when we looked at the performance as a function of the physical error rate, if we allowed for the anisotropic error, you could actually see what I think of as real pseudo-threshold behavior, where the error is quadratically suppressed below some rate.
But if we use a regular depolarizing error model, it's actually kind of interesting to me: it gets better than the physical error rate, but just by a constant amount, right? So I would say this isn't quite fault tolerant. So there's an example already: if, particularly in the near term, we know something about the angle, the axis of the error, we can do something better. Actually, Steve Flammia has a nice result on codes generally: if you have a very anisotropic single-qubit error, you can basically, in some sense, change the gauge of your stabilizers to improve your ability to find and correct errors. All right, so now to the next of my favorite things, which is composite pulses. I actually think that in the lab what happens is that this tiny over-rotation is a systematic error which is slowly drifting in time, as Robin was mentioning. So as long as we act in a small time window, even though we don't know what that epsilon is, we know it's basically the same. This has already been used for single-qubit gates in Bernstein–Vazirani. This is some work that I did with the folks at GTRI; it's just three-qubit Bernstein–Vazirani. You'll notice that there are single-qubit gates here. But down on this diagram, you notice that both of these lines somehow have this purple bar on them. The reason is that the way this experiment was set up, the laser was wide enough that it could always see two ions. So if you tried to center it on just one, you'd always have some spillage onto the next. But you knew what that was. And there's a type of composite pulse called PB1, for passband 1, which allows you both to correct for small changes in the over-rotations and to cancel the residual error on the qubit you shouldn't be talking to. So if you have these systematic errors, you can use quantum control to get rid of them. So, composite pulse sequences. Here are just two examples.
The idea of these examples is that I want to rotate from the top of the Bloch sphere to the plane. What I do is apply a series of extra rotations which should do nothing, and they allow me to correct this gross error here. So I come down one, over two, this way to three, and back to four, to basically correct that piece. This BB1 is a very good sequence designed by Wimperis in '94. In 2004, Aram and I showed how you could expand it to arbitrary order, and then in 2014 Low and Yoder were able to show how to do it efficiently. So now let's think a little more about this. It's unfortunate, anyway; I wish I had written this review after the Low and Yoder paper. That's my only regret. It's easier to think about the error in terms of the Lie algebra. If I over-rotate, I can think of that as an error which lies along the axis of the rotation, and it will have a length epsilon theta, this fractional distance. What SK1 does is apply two rotations by two pi such that you get closer to here, and the residual error is actually the area of that triangle. As we know, that has to do with the cross product, and that area has a direction. The key thing is that it's out of the plane. So instead of being along the axis we saw before, I can now make this error point somehow out of the plane. Now BB1, the reason why it's so great, is that it basically does two of these triangles, cancels out that area, and then there's some kind of leftover volume which points somewhere on the plane. Jones showed in 2003 that you could basically take all of these single-qubit composite pulses and, if you work under the assumption that the single-qubit gates have no errors relative to the two-qubit gates, apply them directly. But what you need to do is find a new representation of, say, SU(2) angular momentum: three different generators.
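The error suppression described here is easy to verify numerically. Below is a sketch for single-qubit over-rotation (amplitude) errors, using the standard SK1 and BB1 constructions with the Wimperis angle φ1 = arccos(−θ/4π); the error model scales every pulse angle by (1 + ε). The specific values and variable names are illustrative, not taken from the talk:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def R(phi, th):
    """Rotation by angle th about the axis cos(phi) X + sin(phi) Y."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * axis

def infidelity(U, V):
    """1 - average-gate-style overlap, insensitive to global phase."""
    return 1 - abs(np.trace(U.conj().T @ V) / 2) ** 2

theta = np.pi / 2
phi1 = np.arccos(-theta / (4 * np.pi))   # Wimperis angle

def naive(eps):                           # bare over-rotated pulse
    return R(0, theta * (1 + eps))

def sk1(eps):                             # SK1: two 2*pi correction pulses
    return (R(-phi1, 2 * np.pi * (1 + eps))
            @ R(phi1, 2 * np.pi * (1 + eps))
            @ R(0, theta * (1 + eps)))

def bb1(eps):                             # Wimperis BB1: pi, 2*pi, pi pulses
    return (R(phi1, np.pi * (1 + eps))
            @ R(3 * phi1, 2 * np.pi * (1 + eps))
            @ R(phi1, np.pi * (1 + eps))
            @ R(0, theta * (1 + eps)))

target = R(0, theta)
for eps in (0.2, 0.1, 0.05):
    print(eps, infidelity(target, naive(eps)),
          infidelity(target, sk1(eps)), infidelity(target, bb1(eps)))
```

The printout shows the hierarchy from the talk: the bare pulse's infidelity scales like ε², SK1's residual (the triangle area, pointing out of the plane) like ε⁴, and BB1's like ε⁶, with BB1 winning even at very large over-rotations.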
And then, actually, my students and I showed that we couldn't find anything else that works. And Low and Yoder's work, this whole idea of qubitization of simulation, all these things kind of come from this idea that basically the knob of non-commutativity always looks like a qubit. So now I can make this SK1-style two-qubit gate. But I have a choice, because I can actually control whether this axis is a YX axis or a ZX axis, which then makes the error, which is pointing out of the plane, point in different ways. Also notice that I've taken basically a two-qubit error and turned it into a one-qubit error. I also want to point out that I could make a big mistake and push that two-qubit error into a one-qubit error on the target qubit, which then triggers all of those cascading errors. So this is a catastrophic choice; we'll see that this one is a good choice, and this one's an okay choice. So let's look at those top two choices. Now that I have this composite pulse sequence, I can either have a YI error up here or a ZI-type error. When I push them through my single-qubit gates to see what comes out the other side, what I see is that the leading error here is just invisible to the measurement, while here it leads to an error on the measurement. I thought a good way to look at it is through a kind of entanglement fidelity, gate fidelity. What I see is, with the over-rotation epsilon on this axis, first notice this is pretty gross: this is like a 50% mistake in your angle theta. The beautiful thing about these sequences is that they help out to very large over-rotations, and at the over-rotations you actually expect, you really get a substantial win. Both of these composite two-qubit gates will of course have exactly the same gate fidelity, because at the level of the algebra they just look the same.
But when I think about the measurement, what happens is that for the one that generates the ZI error, the fidelity basically connects directly to the error of the measurement. The measurement fidelity, in this case, is the probability that I do get the right bit of s. And we see that for Mølmer–Sørensen, and for this SK1 where you haven't taken care to point the error vector in the right direction, it just matches the gate fidelity. But if we take the time to rotate that leading-order error to point in a way that doesn't cause damage, we see that it actually improves the fidelity; these curves were on top of each other before. And it has a kind of neat property that down here, you have a gate which has lower gate fidelity relative to the blue line, right, but improved overall algorithm fidelity, which is kind of nice. So to summarize that again: these gates have the same gate fidelity, but by controlling the direction of the error, and using the fact that we know which error is invisible at the end, we can make the overall measurement error and the overall algorithm error go down. Okay, so now, of course, you should look at higher orders. I would say, in some sense, I really like BB1, but it's too good. When you look at BB1, it outperforms all of these SK1 sequences; it takes the same amount of time, so you should just do BB1. What is nice is PB1, which is also a higher-order sequence but takes twice the amount of time. Surprisingly, it turns out that its measurement error exactly lines up with the SK1 where I took the time to put the error axis along the right direction. All right. Now, the sum written here doesn't mean that the sum is balanced; it just means those are the two key Lie algebra elements. So it seemed like, okay, I know that this term here I could shift to be a YX term. And so I tried that. I tried different variants of BB1.
But although on this plot it looks like a big gap, it's negligible: once you get to that order of error, it doesn't really seem to make any difference. All right. Okay, so now the question is: what can you do with other algorithms? And the answer is we don't really know. I have an undergrad, Daniel Murphy, who's been working with me, and we've shown that if you do an inverse Fourier transform, a Fourier transform measurement in a Kitaev-style model, we can use all of these same tricks to make it work. If you do it in the Shor-style phase estimation algorithm, basically all of those controlled rotations, which may or may not happen, mean you don't know what axis you're trying to hit, right? So you can use that for or against you. Now, what's nice is that of course you're not going to solve everything with this, but if I think about the axis of error for just a single gate, I can trace that out through the algorithm, depending on the algorithm, without too much cost. So, in conclusion: the two-qubit depolarizing channel is not well motivated; if you have a motivation, let me know. Quantum control is interesting because we could use it to minimize either gate errors or algorithm errors, and we should be conscious of that. I think the most important thing is that it's not just the size of the error that matters; it really is the axis of the error. If you don't know anything about the axis, then fidelity is fine, but if you know something about that axis, there's probably some trick you can use. The next thing I wanted to point out is sort of funny: when I'm thinking about quantum error correction, I'm always worried, worried, worried about coherent errors.
But when I think about near-term quantum computers, I think having these systematic coherent errors actually gives us more knobs to play with, because if it's still that systematic noise, I can both minimize it and direct it, do all these things. And finally, I really do think that although it's going to be great when we can show this clear gap, if I want to show that my computer is working, I should run things I know the answer to, like Bernstein–Vazirani. I should even do reversible classical computing, right? If I can't do reversible classical computing, I also can't do quantum computing. So with that: this is the group, and I'm grateful for all kinds of funding from the NSF, the Army, IARPA, and the Humble Foundation. I'm moving to Duke more or less tomorrow. I could really use a quantum error correction postdoc, but if you know anyone in quantum information theory, please tell them to send me an application. Thanks for your time. [Moderator] Okay, thanks very much, Ken. Questions? [Audience] I was trying to square this with something you said, and I'm still having trouble. So, great talk: what do you have against the diamond norm? [Ken] What I have against the diamond norm is that it's not a good predictor of algorithmic success. I don't even think it's a good predictor of error correction success, and oftentimes gate fidelity ends up being a better predictor. I mean it for Bernstein–Vazirani, and I probably mean it for bigger circuits as well, because I think it has to do with the fact that in Bernstein–Vazirani the circuit's so simple that you don't have enough time for these coherent parts to do their coherent damage. And if I think about randomized circuits, then the diamond distance gets scrambled, so it doesn't really do its damage either. I think the only interesting counterexample was Michael's work, where I have only these Z gates in the middle, right?
There, for those coherent errors, the diamond distance during that intermediate part probably is the right norm, because they should add up like that. [Moderator] Okay, any other questions? If not, let's thank Ken again.