Thanks, Richard, and thanks also to the organizers, and especially to David Gosset, for bringing us all together. This is a really great conference, and I'm excited to be a part of it. As Richard said, I'm going to be talking about characterizing coherent errors efficiently, robustly, and simply. The theory I'll be talking about is joint work with Ted Yoder, who is here somewhere, and Guang Hao Low; the experimental results I'll be showing were done in collaboration with Kenny Rudinger and other experimental folks at Sandia. Robin Blume-Kohout is maybe here from the Sandia crowd, and maybe a couple of other people as well.

Okay, so I think it's a really exciting time to be in this field of quantum computing, because we are quickly being surrounded by these prototype quantum computers. Since IBM is hosting us, I get to put a picture of their device on the cover slide, but there are lots of other companies, and also academic groups, building devices that are on the cusp of doing new and exciting things. But I think the question that we're here to address is: what are the challenges
that face us in moving beyond these prototypes, towards quantum computers that can do novel things that classical computers can't do. I really liked Jay's metaphor this morning: we need to think about this progress not just in one direction, the number of qubits, but in terms of increasing our quantum volume, having systems that are not just large but also accurate, and to think about the many dimensions in which we can improve our quantum systems. For me, the axis of this quantum volume that I'm most interested in is how we can come up with good calibration techniques: how we can improve the quantum systems we already have and, hopefully, get below the fault-tolerance thresholds that we'd eventually like to get below.

Okay, so calibration is a really important procedure in building a quantum computer. What I mean by calibration is that you need to be able to quickly and easily tune up hundreds or thousands of qubit gates. On the slide I wrote "potentially multiple times a day"; in some systems you might have to do this calibration multiple times an hour. Even this morning we had a demonstration of this: Jay put up that picture of the 20-qubit system that IBM is developing, but said that they're still in the process of calibrating and benchmarking it. This process of calibration can be incredibly time-consuming, so I'm really interested in how we can come up with better ways to calibrate quantum gates.

When I say "tune up," what I mean is that we need to be able to detect and correct, in particular, control errors. There are always going to be some errors that we cannot control in our system; that's why we have fault-tolerant quantum computing. We expect there will always be errors that we can't deal with. But a lot of the
errors occurring in our system are due to our own control: we are trying to address these quantum systems and we're not doing it correctly. If we could address these systems better, if we had more precise control, then we could improve our gates. So in this talk I'm going to be particularly interested in how we detect what has gone wrong with our control, because the detection is something that's universal across many different types of devices. Once you've figured out what has gone wrong, how you actually correct it will differ depending on the lab and the physical system; but the detection part is a universal process that can be applied no matter what the building blocks of your quantum computer are.

Okay, so I'm interested in detecting, or otherwise characterizing, control errors. There are certain desiderata that we'd like any characterization protocol to have. We would like it to be easy, fast, robust, and useful, and I'll explain what I mean by these words.

Easy: well, maybe that's self-explanatory; it should be possible to run it in the lab with minimal effort. But this is actually a high bar to clear, because if you think about it, every single lab that is building a quantum computer already has some in-house method for calibration that they've been using, and it's probably been working pretty well, because they're getting up to eight, twenty qubits, whatever. So far it's been working pretty well.
So for any new method that I devise, even if it's slightly better than the current method the lab is using, if it's really hard to implement, the cost-benefit analysis is not going to make sense for an experimentalist to totally change the way they're doing things for a minimal benefit. But the easier I can make this protocol, the more likely an experimentalist will be to take it and modify their current setup.

Fast: time is quantum money. The faster you can do these characterization and calibration procedures, the faster you can get on to the interesting work of doing quantum simulation or adiabatic optimization or whatever it is we're trying to do. And if you're having to do this multiple times an hour, or multiple times a day, you'd like to spend as little time doing calibration as possible.

By robust, I mean that it should be accurate even under a lack of knowledge about other parts of the system. These experimental systems are very complex and very messy; it's just totally unreasonable to expect that we perfectly understand all parts of the system. So somehow we'd like to be able to calibrate very specific parts of the system even if there are tangential or related parts of the system causing errors that might affect what we're doing. We'd like to get accurate knowledge about one thing even if we have inaccuracies or uncertainties about other parts of the system. When we talk about robustness, a term that comes up all the time is SPAM, which stands for state preparation and measurement errors, and that's typical of this extra uncertainty in the system.
Generally, we're interested in characterizing the gates and gate errors, but often there are errors associated with the state preparation and the measurement that we don't know about. Somehow we'd still like to be able to characterize what happens with the gate even if we don't exactly know what errors are occurring in the SPAM. But robustness can be more general than SPAM: you can be robust to all sorts of different uncertainties throughout your experiment.

And finally, it should be useful for this calibration process. For me, calibration means figuring out what's gone wrong with your control and then fixing it. If you're characterizing something that doesn't actually tell you about your control error, then it's not actually useful for fixing that error. Maybe you learn something about whether you have an error or not, but I'm not interested in whether an error has occurred; I'm interested in actually fixing the error. So the information I get out should be useful for trying to fix the error.

Okay, so what I'd like to do now is give you a rundown of the landscape of different characterization protocols and where they fall in terms of these criteria.

Ad hoc methods: by this I mean whatever is running in the experiment right now. The lab has been doing something for the past three years to calibrate. And what are they doing?
Well, I don't know; it's probably something that a grad student coded up five years ago and no one's really thought about since then. The benefit of this kind of characterization protocol is that it's easy, because it's already running and doing what it should be doing, and it's useful enough that, however it's doing it, it seems to be doing a pretty good job. But probably there hasn't been a lot of thought into whether it's the fastest method possible or the most robust method possible.

The textbook approach to characterizing and detecting errors is quantum process tomography. I would say this is a relatively easy method to implement, in that if you read Nielsen and Chuang you can pretty quickly understand what's going on; it's easy to go from measurements to information about the errors. I put "useful" in parentheses because what quantum process tomography allows you to do is fully characterize everything about a gate error, but a lot of the time that's almost too much information when we're interested in fixing control errors. Experimentalists tend to have a couple of knobs that they can control; they can't control every single parameter. So we're especially interested in getting targeted information about the knobs that we actually can control, not about every possible knob that we can't. Process tomography is overkill in that sense, and it's also neither fast nor robust.

Randomized benchmarking is a procedure that has taken the experimental world by storm in the last five to ten years. It was really exciting because it's a robust protocol, robust to the SPAM errors I was talking about. It's also very easy to implement.
It's quite fast. If you don't know what randomized benchmarking is, that's not so important here; the key thing is that it's not useful for this problem of calibration. It's not useful because the end result, the thing you get out of randomized benchmarking, is an average fidelity. It basically just tells you how bad your gate is, but it gives you no information about in what way your gate is bad. So in terms of being able to use that information to fix the gate, it's totally unhelpful.

Gate set tomography is a protocol that has been developed largely by the folks at Sandia. It is robust, which is great, but it's not so great on some of the other things we'd like, so I put "easy" and "useful" in parentheses. The protocol is easy in the sense that you can outsource the whole thing to Sandia: you tell Sandia a couple of things about your system, they tell you what data to take, you send them the data after you've taken it, and they analyze it. That's a very low overhead for you as a lab. Although there's this huge black box, which is Sandia, so you have to really trust Sandia, and maybe trust Robin in particular. And then at the end of the day, what do you get back? At least the last time I checked, you get back something like twenty pages' worth of material about your system, some large HTML report about your gates. And if you just want to know, "How should I turn knob B? Should I turn it to the right or to the left?", it's going to take you a while to sort through that and figure out what information you really need to quickly calibrate your gates.
It is also not at all fast. For a two-qubit gate, you'd have to be taking data for quite a long time. So in terms of having a fast calibration procedure, where you're detecting errors, trying to fix them, seeing if you fixed them, and going back and forth between detecting and fixing, it's just not feasible for that kind of use.

Okay, so: sexy adaptive machine learning strategies. I'm not thinking about any one particular strategy, but people in the field are coming out with new ideas for all sorts of procedures that sound really exciting. But if you actually talk to an experimentalist about implementing one of them (I dare you to tell an experimentalist you have this really awesome adaptive method, and see how far their eyes shoot out of their foreheads), some of these things are just not practical to do in the lab. So even though these new procedures can be fast, robust, and useful, they're just too complex; there's too much of a barrier to entry, and experimentalists often won't be excited about them.

Okay, so the answer, what I'm going to be telling you about today, is the protocol I've been working on: robust phase estimation. And of course it checks all the boxes. What do you know? It really does, though. So let me go through each one. It's very easy for experimentalists to implement: the experiments you need to run for this procedure are basically identical to Rabi or Ramsey sequences that the lab is probably already doing in their ad hoc method. So I'm just going to have you do the same sequences, but analyze them in a slightly different way, and hopefully take way fewer of them than you're already taking, because my goal is to do this fast. It's also non-adaptive.
It's simple to analyze: you don't need to send tons of data out to Sandia; you can write a couple of lines of code, analyze it yourself, and feel confident that you know what's going on. There's no mystery involved.

It's fast: the scaling of how quickly you can learn parameters is Heisenberg scaling, which basically means optimal scaling. It's within a constant factor of the absolute best you could possibly do.

It's also robust to SPAM errors, as long as those SPAM errors are not too large; I'll get to that a little later. And it's not just robust to SPAM errors; it's robust to errors in any part of the system, and I'll get to that later too. I said that, ideally, we would want something that can give an accurate picture even if there's a lot of uncertainty in all other parts of the system, and this procedure has that.

And finally, it's useful, in that it directly learns the parameters that experimentalists tend to have control over: things like under- and over-rotations of the gates, and some other parameters. These tend to be exactly the parameters this protocol can detect. So the types of errors that experimentalists can actually fix are the ones robust phase estimation can detect, and it gives the experimentalist information on exactly how to fix them.

Okay, so I've been singing the praises of this protocol, and I've been saying it's so easy. If it's so easy, I should be able to explain it to everyone in this room in the next five to ten minutes; hopefully everyone in the room will understand what's going on by the end of this.

So let's talk about characterizing a single-qubit over-rotation or under-rotation. A single-qubit gate can be modeled as a rotation on the Bloch sphere, so it's defined by an axis of rotation and an angle of rotation. So let's just assume we don't know the angle: we want
to know how much we are rotating. We're concerned that we might be rotating too much or too little, so we'd like to learn this unknown parameter θ.

We're going to work up to the full characterization procedure, but the first thing I'd like us to consider is what happens if we have a perfect experiment. In my perfect experiment, we have perfect state preparation, we have perfect measurement, and we know everything else about this gate: we know it's a perfect rotation, we know its axis of rotation, and the only thing we don't know is the angle θ. In this idealized setup, you can choose your state preparation and your measurement such that you have a two-outcome measurement where the probability of outcome 0 is θ/2π and the probability of outcome 1 is 1 − θ/2π. You just repeat this experiment many times. Okay, you wouldn't get Heisenberg scaling, you'd get standard-quantum-limit scaling, but at least with this type of experiment you could get an unbiased estimate of θ. (Throughout, there are lots of mod 2π's floating around that I'm going to ignore.)

But let's make this a little bit less perfect: in any realistic experiment,
we're not going to have perfect state preparation and measurement; we're going to have state preparation and measurement errors. You can imagine this as some unknown operation that occurs right after state preparation and right before measurement and messes things up. Now, when we try to run the same experiment, we end up with a bias: instead of the outcome probability giving us exactly θ, we're shifted by some unknown amount δ, and that unknown amount basically comes from our lack of knowledge about what's happening with the SPAM. This is a very realistic situation in the lab.

So the basic solution is that, instead of applying the gate once between state preparation and measurement, we'll apply it k times. We're still imagining that it's a perfect rotation, so if the rotation was originally by θ and we repeat it k times, that's the same as doing one rotation by kθ. In this case, the probability of outcome 0 now involves kθ, but again with some bias, because the SPAM is still there, still acting, and now the bias might depend on the number of times we've applied the gate as well. Okay, but that's it.
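To make this measurement model concrete, here is a toy numerical sketch in Python. Everything here is hypothetical scaffolding (the function names, the fixed seed, and the idealized linear outcome model from the slides); it is not a model of any real device:

```python
import numpy as np

rng = np.random.default_rng(7)  # fixed seed, purely for reproducibility

def rpe_experiment(theta, k, delta_k=0.0, shots=2000):
    """Toy model of one sequence: prepare, apply the gate k times, measure.
    Following the idealized picture above, outcome 0 occurs with probability
    ((k*theta mod 2pi) / 2pi), shifted by a bias delta_k that lumps together
    SPAM and any other length-dependent imperfection."""
    p0 = ((k * theta + delta_k) % (2 * np.pi)) / (2 * np.pi)
    return rng.binomial(shots, p0) / shots  # empirical frequency of outcome 0

def estimate_k_theta(theta, k, delta_k=0.0, shots=2000):
    """Invert the model: 2*pi times the empirical frequency estimates
    (k*theta + delta_k) mod 2pi."""
    return 2 * np.pi * rpe_experiment(theta, k, delta_k, shots)
```

With δ_k = 0 and many shots, `estimate_k_theta(theta, k)` concentrates around kθ mod 2π; a nonzero δ_k shifts it, which is exactly the bias the procedure below has to cope with.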
These are the experiments that you need to run to do robust phase estimation: you prepare a state, apply the gate many times, and then measure. So how do we use these types of experiments to figure out what θ is? We're going to do a series of these experiments, and I'm going to track our estimate of θ over time on this unit circle. Imagine that the true θ is where that arrow is pointing. The first experiment that you should do for robust phase estimation is the standard one: apply the gate once and measure. We said this has a bias of δ, so if you repeat this experiment some number of times, you get an estimate of θ + δ. Let's put that up on the unit circle. It's not going to be close to the true θ; it's going to be off because of our bias. But what we're going to use this initial estimate for is to restrict our future estimates.
So based on this initial estimate, we know it's off by a little bit, because we know there was bias in our estimate, but we don't think it's too far off; we know we have errors, but we don't think they're catastrophic errors. So all we're going to say is that, from now on, we think the true value of θ is not on the opposite side of the circle. Then we run the experiment with two repetitions of the gate, and this gets us an estimate of 2θ plus some bias. When we divide this by two, because of the modular arithmetic involved, there are two values of θ consistent with the data from this two-rotation experiment, but only one of those estimates is consistent with our prior restriction to one half of the unit circle: one is allowed, one is not. So we take whichever one is allowed and use it as our new estimate of θ. The nice thing here is that the amount of bias has been divided by two, so the error has now effectively gotten a bit smaller.
So our estimate should be a bit better, assuming the bias is not increasing. Then, based on this two-rotation data, we restrict again: we narrow the allowed region by another factor of two, centered around our new estimate, and we say that from now on we don't think the true value of θ is in this blacked-out region. Next we go up to four rotations and do the same thing. There will be four values on the unit circle consistent with the data from this experiment, but only one of them will be consistent with the region we've blocked off based on our previous estimates. So every time, you divide the allowed region by two and update your best estimate based on your new data, which will always be biased, but you get better and better estimates as you go to longer and longer sequences.

So this is the whole procedure. You run the experiment with k increasing by powers of two, and you end up with an error that scales like 1/2^n, where 2^n is the largest k that you run. The one trick I haven't really discussed is how many times you need to repeat each experiment to make sure the true θ lands in the correct region, because for purely probabilistic reasons it could end up in the disallowed region; you need to take enough samples that this happens only with very low probability.

If we look at how efficient this procedure is, one way to measure it is in terms of the number of applications of the gate versus the error in your final estimate. With the correct measurement schedule, with C applications of the gate you can get an error proportional to 1/C, and this is also known as Heisenberg scaling. So this is as efficient as you could possibly be: we are using the gate as few times as possible to get as accurate an estimate of the rotation as possible. And it's very simple compared to lots of other situations where people talk about Heisenberg scaling; often they have to invoke entanglement, or adaptive measurements, or Bayesian updates. We don't use any of that. It's a deterministic, very simple, straightforward single-qubit procedure, although it can be scaled up to multiple qubits.

It's also exceedingly robust. I've talked about that δ error, that bias, as being caused by unknown SPAM errors, but if you noticed, in this whole analysis we never said this bias is coming from SPAM. It could have come from anything: some dephasing noise, non-Markovian noise. We were totally agnostic as to where that bias came from. We just said that as long as the bias is not catastrophic, we're okay; so it's robust to all sorts of uncertainties in the system. Of course, this assumes the bias does not become catastrophic, and what does it mean for the bias to be catastrophic? Basically, the bias must stay less than π/4, which is actually quite large.
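That whole restrict-and-halve loop can be sketched in a few lines of Python. This is a minimal illustration under the assumptions above (each bias below π/4, k doubling each stage), with hypothetical helper names, not the estimator from the paper:

```python
import numpy as np

def circ_dist(a, b):
    """Distance between two angles on the unit circle."""
    return abs((a - b + np.pi) % (2 * np.pi) - np.pi)

def rpe_update(measured):
    """measured[j] is a biased estimate of (2**j * theta) mod 2pi, as would
    come from the sequence with k = 2**j gate repetitions.  Start from the
    crude k = 1 estimate; then, at each stage, out of the k candidate angles
    consistent with the new data, keep the one closest to the running
    estimate.  Valid while every bias stays safely below pi/4."""
    est = measured[0] % (2 * np.pi)
    for j in range(1, len(measured)):
        k = 2 ** j
        candidates = (measured[j] % (2 * np.pi) + 2 * np.pi * np.arange(k)) / k
        est = min(candidates, key=lambda c: circ_dist(c, est))
    return est
```

Each stage keeps the one candidate angle closest, on the circle, to the previous estimate, so the residual bias from the length-k data enters the estimate divided by k.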
That's a pretty huge bias, and that's definitely a reasonable assumption for SPAM. For some of those other sources of bias that I mentioned, like non-Markovian errors, or dephasing or depolarizing errors, the bias tends to increase with the length of the sequence. In that case, you'd expect that for a very long sequence things get so messed up that you're no longer in this nice regime; the bias overwhelms things. The estimate you get will be accurate up until that point, and then it will simply no longer improve: the precision will be good until the δ errors overwhelm you. And as δ gets larger and larger, you need to take more and more samples to ensure that you stay in the correct region.

Okay, so I've talked a bit about the theory, but like I said, these procedures are only useful if they're good enough in practice, in experiments, and if they're easy enough that experimentalists want to use them and find them better than whatever they're currently doing. So with Sandia we took some experimental data using this procedure, and what this plot is trying to show you is that we do see that Heisenberg scaling: the dotted lines are what we would expect from Heisenberg-like scaling in our estimate, with the error decreasing as the number of operations increases. We also compared to gate set tomography, the Sandia outsourced procedure, which I said was not very efficient. This shows that, to get the same accuracy from robust phase estimation (the blue dots) versus gate set tomography (the green dots), you can take many fewer samples. To get, say, 10^-3 error in your estimate, you can take orders of magnitude fewer samples using robust phase estimation than gate set tomography.

And I'll also say that several labs have started implementing this procedure, not because they were hoping to get a publication out of it, but just because they found it was better than what they were doing and simple enough to implement. For me, that was the real proof that this is a worthwhile procedure.

Let's see, how am I doing on time? Plenty of time. Okay, so there are a couple of things I've slipped under the rug that I will mention. I said that your state preparation and measurement need to be relatively close to the ideal. So what is the ideal state preparation and measurement? If our gate is a rotation of the Bloch sphere, we can imagine the plane that's perpendicular to that axis of rotation, and then two states that are orthogonal and both lie in that plane; we'll call one α and one β. Essentially, the state preparation you should ideally be shooting for is to get close to that α and that β. So far I've been saying that you just need to do one sequence, basically one measurement, for each k.
It turns out you actually need to do two. The idea is that you do one sequence of a given length k where you prepare the state α, and one where you prepare the state β. Again, these don't have to be exact, because we can tolerate errors in our state preparation and measurement, but you want to be as close as possible, because the smaller your δ, the smaller the bias in your outcomes, and the fewer measurements you have to take. So you prepare α and β, and then you measure in the α, α-perp basis. And here's what that actually lets you do: I said in all these experiments you could just learn kθ plus a bias, but it turns out that what you learn from the α experiments is cos(kθ) plus a bias, and the β experiments let you learn sin(kθ) plus a bias; together, these two experiments let you learn kθ itself. So it's very slightly more complicated than the simple picture I was painting, but now I've told you almost everything. The only thing I haven't told you is how many times you need to repeat each of these measurements to be confident that you're getting an accurate picture, and that's a formula you can look up in the paper if you're interested, so I won't go into it. Hopefully at this point I've told you the whole procedure and how to analyze it, so you could go home and code it up if you were interested, or start implementing it in your lab; I'd love to hear if you do.

Okay, so there are lots of future directions I'm hoping to pursue. I'm currently working on how to extend this to multi-qubit gates. It seems so far to be very straightforward.
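Going back for a moment to how the two sequence types combine: under one plausible choice of conventions, the α sequences give P(0) = (1 + cos kθ)/2 and the β sequences give P(0) = (1 + sin kθ)/2, and the two empirical frequencies then determine kθ mod 2π via a two-argument arctangent. The exact signs and phases depend on the choice of α, β, and the measurement, so treat this as an illustrative sketch rather than the paper's exact definitions:

```python
import numpy as np

def ktheta_from_two_experiments(p_alpha, p_beta):
    """Combine the two sequence types at a given k.  Assuming the alpha
    sequences give P(0) = (1 + cos(k*theta))/2 and the beta sequences give
    P(0) = (1 + sin(k*theta))/2, the two empirical frequencies recover
    k*theta mod 2pi."""
    cos_kt = 2.0 * p_alpha - 1.0
    sin_kt = 2.0 * p_beta - 1.0
    return np.arctan2(sin_kt, cos_kt) % (2 * np.pi)
```

The point of the second (β) experiment is visible here: a cosine alone leaves a two-fold ambiguity in kθ, and taking the arctangent of both pieces resolves it.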
But we're working out some of the details of that. I also think this very targeted kind of procedure could be useful for targeting incoherent errors as well. It would be great if, no matter what type of error you're looking to characterize, you could have a very targeted method to do it, instead of having to do something like gate set tomography to learn about incoherent errors. And along the same lines, this procedure might also be usable to characterize things like SPAM errors, or some of these other peripheral errors that are affecting our rotation, because once we know what's going on with our rotation, we might be able to back out what was going wrong with our SPAM, or with other parts of our system.

And that is everything. I would love to take questions. Thank you very much. Questions? Yeah, right here.

[Question:] You may have said this and maybe I missed it, but if there's some uncertainty in the rotation axis itself, it seems like you might then be correcting for the wrong rotation.

So, if it's just a fixed rotation-axis error, we can actually push that into SPAM, because it's almost as if, instead of aligning our α and β in the correct plane, we're doing it a little bit off. But that doesn't add up over time; that's just a fixed bias. So that type of error basically falls into SPAM error and can be accounted for in the same way. It's not something that builds up over time.
On the other hand, if you have a wobble, if the axis is not fixed, that might contribute; at some point you wouldn't be able to get the estimate super accurate. I would think you'd be able to get it accurate up to the wobble, but maybe not beyond that. Does that make sense? Okay, further questions? Yeah.

[Question:] What are the barriers to going to multiple qubits?

As far as we know, there don't really seem to be any. What I'm more interested in at this point is: what are the knobs that people have for two-qubit gates, and how can we directly target those knobs? That involves figuring out things like what state you want to prepare, which of those knobs can be most easily targeted, and whether we can target all of them. Those are the more interesting questions; I don't think there are really any barriers to almost immediately extending this. Okay, another question, at the back right.

[Question about gate set tomography and time-dependent or non-Markovian errors.]

Oh, that is true; it is not a black box, I take it back. So, on the question of time dependence, or non-Markovianity: I think the advantage of RPE is that it's so fast that you could run it multiple times and try to get a sense for the timescale on which θ is shifting, and then you could maybe interleave GST with this additional information and perhaps use it to update your models as you're running the more complex protocol, which is so much more time-consuming. Because this is basically as fast as you can hope to get, you can at least try to get as much information about that time-varying nature directly. But if θ is time-varying on the timescale of robust phase estimation itself, you're going to run into the same issues; it's not immune to those. Okay, another question right up front.

[Question about errors that accumulate with sequence length.]

Yeah, that's the thing that kills us in the end, because those types of errors build up and build up, causing larger and larger biases, basically larger and larger δ errors, and eventually they become so catastrophic that, as we keep shrinking the allowed region, we end up excluding the region where the truth actually is. So there's kind of no way around that. But by the time you're getting to sequence lengths so long that things are catastrophic, you're not going to be running a computation that long anyway. So you can get accuracy up to the level at which you'd probably be interested in running a computation, but no, we don't really have a solution for that; it's a real problem. Okay, there's a question right at the back. Yeah, at the very back wall.

[Question from the back.]

Well, you can still learn about the phase, but you can only learn it to the accuracy set by the other errors in the system. Okay, any other questions? Good. Let's thank Shelby again.