My name is Jinsung Kim, and I'll be talking about a demonstration of a multi-qubit metric based on the output of random quantum circuits that we extend to detect coherent errors in a multi-qubit system. In this talk, I'll show that by simply binning our experimental results in two different ways, based on a binning scheme called Binned Output Generation proposed by Adam Bouland and colleagues, we can discriminate coherent noise from incoherent noise in a multi-qubit system. Broadly, we can classify noise in a quantum system into two types, incoherent and coherent. Incoherent noise is stochastic and irreversible, originating from things like T1 and T2 events, and can be visualized on the Bloch sphere as a shortening of the Bloch vector. Coherent noise is repeatable, originating from miscalibrations and crosstalk. On the Bloch sphere, this can be visualized as an over- or under-rotation. So why are we interested in being able to measure coherent noise in a multi-qubit system? Multi-qubit systems introduce new error sources in the form of crosstalk. In addition, coherent noise can be amplified during a quantum algorithm, so the resulting infidelity can be much worse compared to incoherent noise. Presently, we can detect coherent errors in few-qubit systems using techniques based on randomized benchmarking and tomography. These techniques include purity randomized benchmarking and unitarity randomized benchmarking. However, these techniques are difficult to implement in qubit systems larger than about three qubits, because RB and tomography scale poorly as the number of qubits increases. On the other hand, multi-qubit metrics for systems larger than three qubits do exist, such as heavy output and cross entropy, but they lump coherent and incoherent errors together. In addition, larger qubit systems introduce new crosstalk sources that are not accounted for by few-qubit characterization techniques. 
So in the next few slides, I'll introduce random quantum circuits and the binned output generation, and show how we can bin our results in two different ways to discriminate coherent from incoherent noise in a six-qubit ring. So the key idea here is to run a random quantum circuit and verify how close the resulting output from our quantum hardware gets to the target output. So our circuit will consist of cycles of Haar-random SU(2) single-qubit rotations, so this is just a random single-qubit rotation on each of the six qubits here, followed by CNOTs among non-overlapping pairs. And then we'll repeat this cycle, alternating the qubits that participate in the CNOTs. So in a noiseless system, a circuit such as this will drive the system towards what's called a Porter-Thomas distribution. This is this exponential histogram that I'm showing here. And the way to think about this is, if we consider our six-qubit ring, we have 64 possible outcomes or bit strings. So the majority of these bit strings will have low probability; this is the front of this exponential distribution. And a few of them will have high probability; these are the outcomes in the tail of the exponential distribution here. I want to emphasize that the x-axis of this distribution is probability, not the outcome bit string. In the presence of incoherent noise, the circuit serves to depolarize the noise, which gives us a uniform incoherent mixture, so all of our outcomes will tend towards having the same probability. This is shown on the right plot here, where all of the outcomes have probability clustered around 1/2^n. So the key intuition here is that coherent noise still drives the output distribution towards a Porter-Thomas distribution, just not the correct Porter-Thomas distribution. 
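As an aside, the Porter-Thomas picture above is easy to check numerically. Here is a minimal sketch (my own illustration, not the talk's code) that draws a Haar-random six-qubit state and looks at its 64 outcome probabilities:

```python
import numpy as np

# Sketch: a Haar-random state vector can be sampled by drawing i.i.d.
# complex Gaussian entries and normalizing. Its outcome probabilities
# follow the Porter-Thomas distribution P(p) ~ N * exp(-N * p), N = 2**n.
rng = np.random.default_rng(0)
n = 6
N = 2**n  # 64 bit strings for a six-qubit ring

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
probs = np.abs(psi) ** 2

# The mean probability is exactly 1/N, since the probabilities sum to 1.
print(probs.mean())
# Typically well over half of the bit strings sit below 1/N (the "front"
# of the exponential), while a few land in the high-probability tail.
print((probs < 1.0 / N).mean())
```

For a fully depolarized system, by contrast, every one of the 64 probabilities would cluster around 1/N instead of spreading out exponentially.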
So our binning strategy will then be to see whether the output distribution from our hardware is any generic Porter-Thomas distribution versus the correct Porter-Thomas distribution that we pre-compute on our classical hardware. So let's see how the binned output generation works. We'll first pre-compute the ideal outcomes, this is our target distribution, with the classical computer. And we'll execute our random quantum circuit on our quantum hardware in order to get our experimental outcomes. For each outcome bit string from our experiment, we'll bin the shots according to either the bit string's ideal probability or the bit string's experimental probability. So let's take a look at how this would work. On the right here, I'm showing another representation of the Porter-Thomas distribution, this time weighted by probability. So this is just an exponential multiplied by a linear function of the probability. I've divided up the probability space into a series of bins such that, in the ideal case, the weight of each bin for an ideal Porter-Thomas distribution is equal. So you can see that for frequent outcomes we have narrow bins and for infrequent outcomes we have wide bins, so that the total weight is equal across all bins. So let's consider the all-zeros bit string. Let's suppose that our classical computer tells us that the expected probability for the all-zeros bit string is this value here, 0.1265. So we look at what bin this probability would fall into according to the expected probability, and this would fall into bin 1 here. So this would be binning by the bit string's ideal probability. So from our experiment, we put however many shots we measured of that bit string into the first bin. Suppose that number is 200 out of a thousand shots. So we add 200 shots to this first bin here. The second method of binning would be by experimental probability. 
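The equal-weight bin construction can be sketched as follows. This is my own reconstruction under stated assumptions, not the talk's code: I assume six bins (the talk mentions bins 1 and 6, but the actual bin count may differ), and the `ideal_p` and `shots_observed` values are hypothetical placeholders. The bin edges come from inverting the CDF of the probability-weighted Porter-Thomas density, which works out to F(p) = 1 - exp(-N p)(1 + N p).

```python
import numpy as np

N = 64  # 2**6 outcomes for six qubits
K = 6   # number of bins (assumed; the talk's bin count may differ)

def weighted_cdf(p):
    # CDF of the probability-weighted Porter-Thomas density N**2 * p * exp(-N p).
    return 1.0 - np.exp(-N * p) * (1.0 + N * p)

def invert(target, lo=0.0, hi=1.0, iters=60):
    # Bisection inverse of the (monotone) weighted CDF.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if weighted_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Interior edges so each bin holds an equal 1/K share of the weighted mass.
edges = [invert(k / K) for k in range(1, K)]

def bin_index(p, edges):
    # Which equal-weight bin a probability p falls into.
    return int(np.searchsorted(edges, p))

# Binning by ideal probability: look up the bin using the precomputed ideal
# probability of the bit string, then deposit its observed shot count there.
bins = np.zeros(K)
ideal_p = 0.02        # hypothetical ideal probability for one bit string
shots_observed = 200  # hypothetical shots for that string (out of 1000)
bins[bin_index(ideal_p, edges)] += shots_observed
```

Binning by experimental probability is the same lookup, except `bin_index` is called with the measured frequency (shots observed divided by total shots) instead of `ideal_p`.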
So in this case, let's now ignore the expected probability, 0.1265, and only look at the experimental probability, 200 out of a thousand shots. So this experimental probability from the shots tells us that we should put our shots into bin 6 over here. So we'll continue this with each bit string that we observe in our experiment, and we'll build two different distributions that way. And for each case, we'll compute the distance of the weight of each bin to the ideal case, and we can compute a fidelity in this way. So this is just the distance of our binned result from the ideal case, normalized by the distance from the ideal case to the incoherent case. So once again, coherent noise will still drive you to a Porter-Thomas distribution. So now let's take a look at some experimental data from IBM Q Boeblingen. So we consider a ring of six qubits, Q1, 2, 3, 6, 7, 8. And here I'm just listing some experimental parameters that I've used: I'm doing 4,000 shots per experiment and 40 seeds of random circuits per point, and these are averaged together. And I claimed before that by binning our shots in two different ways, we can detect a difference between coherent and incoherent noise. This is what I'm showing here on this plot. So this is the same data; I'm just binning the shots in the two different methods, and we can see that there's a clear split between the two curves. This is a plot of the fidelity versus the circuit depth. So let's try to make some sense of these two curves, and let me try to convince you that we are indeed seeing a difference in the coherent and incoherent noise levels. So what I'll do now is run a noisy simulation where I plug in independently measured device parameters for these six qubits of IBM Q Boeblingen. So in my noisy simulation, I'll plug in my average T1 and average T2 times along with the readout error and average two-qubit gate time. 
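The normalized distance described above can be sketched in a few lines. The talk does not spell out the exact distance measure, so I use total variation distance as one plausible choice; the bin-weight vectors in the usage example are made-up numbers, not data from the experiment.

```python
import numpy as np

def binned_fidelity(observed_counts, ideal_weights, incoherent_weights):
    """Fidelity = 1 - d(observed, ideal) / d(incoherent, ideal).

    observed_counts: shot counts per bin from one of the two binning methods.
    ideal_weights: bin weights in the ideal case (equal across bins).
    incoherent_weights: bin weights a fully depolarized output would produce.
    Distance here is total variation (an assumption, not the talk's choice).
    """
    obs = np.asarray(observed_counts, dtype=float)
    obs /= obs.sum()  # convert counts to fractions
    d_obs = 0.5 * np.abs(obs - ideal_weights).sum()
    d_inc = 0.5 * np.abs(incoherent_weights - ideal_weights).sum()
    return 1.0 - d_obs / d_inc

# Hypothetical four-bin example: in the ideal case all bins carry equal weight.
ideal = np.full(4, 0.25)
incoherent = np.array([0.55, 0.25, 0.15, 0.05])

f_bad = binned_fidelity([550, 250, 150, 50], ideal, incoherent)   # fully incoherent -> 0
f_good = binned_fidelity([250, 250, 250, 250], ideal, incoherent)  # matches ideal -> 1
```

So a result that matches the precomputed ideal bin weights scores 1, and one indistinguishable from the depolarized case scores 0.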
So the results of my noisy simulation should essentially give me back the coherence limit of this device, with no other noise mechanisms included. And when I run my simulation, this is the curve that I get back. As you can see, it matches very nicely to my experimental data binned by experimental probability. This is very nice agreement. So let me repeat this simulation, where I now include the average CNOT error measured by randomized benchmarking. I should mention that our gates are not quite coherence-limited, so our randomized benchmarking is capturing some amount of coherent error. So when I include the average CNOT error in my noisy simulation, which includes some amount of measured coherent noise, this is the curve that I get. And we can see that once again, it gives us very good agreement with the experimental data binned by ideal probability. And this is the curve that I claim is sensitive to coherent noise. We can actually see that the noisy simulation underestimates the amount of noise in the system, and this is because the noisy simulation doesn't take into account any additional crosstalk between the qubits. Okay. So now what I'll do is artificially insert noise into the system, just to ensure that the metric behaves the way that we think. So I'll add coherent noise and incoherent noise, and we'll see that the two curves respond in the way that they should. To add coherent noise, I'll first fix the circuit depth to 10 cycles and calibrate my CNOT gate. Then I'll add some coherent noise to the CNOT by simply increasing the amplitude of my control drive. So the total circuit time is constant, and I'm simply doing an over-rotation each time I do a CNOT. And this is what I see. As I increase the amount of amplitude detuning, the amount of incoherent noise remains approximately constant. We expect this because I'm leaving the total circuit time constant. 
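The reason an amplitude over-rotation is such a clean coherent-noise injection is that a repeatable rotation error adds up coherently with depth while the circuit time, and hence the incoherent noise, stays fixed. A one-parameter toy model illustrates this; the `eps` value is a hypothetical per-gate over-rotation angle, not a measured device number.

```python
import numpy as np

eps = 0.02  # hypothetical over-rotation per gate, in radians

def overrotation_infidelity(depth):
    # Repeating a rotation error R(eps) d times composes to R(d * eps),
    # so the state infidelity is sin^2(d * eps / 2): quadratic in depth
    # for small angles, unlike stochastic error, which grows only linearly.
    return np.sin(depth * eps / 2.0) ** 2

for d in (1, 10):
    print(d, overrotation_infidelity(d))
```

Ten repetitions of the same small over-rotation give roughly 100 times the infidelity of one, which is why coherent errors can dominate the infidelity of a deep circuit even when each gate looks well calibrated in isolation.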
But the amount of coherent noise increases, because I'm doing an over-rotation. And if you're wondering about the variation in the level of incoherent noise, this essentially tracks with the average T1 measured directly after each experiment. So now I'll add incoherent noise. The way that I'll do this is I'll keep my circuit depth fixed at 10 cycles and calibrate my CNOT. Then I'll simply increase the length of my CNOT gate in time and recalibrate my CNOT. So nominally I'm still doing a calibrated CNOT, I'm just taking more time to do it, so the effect of decoherence should be greater. And this is essentially exactly what we see: the amount of incoherent noise increases, as we expect. And this is the data binned by experimental probability. So in conclusion, we've experimentally implemented the binned output generation to characterize a six-qubit ring. We've shown that binning by experimental probability gives us a measure of the incoherent noise in the system, and I've shown that binning by the ideal probability gives us a measure of the coherent noise. And I want to re-emphasize that standard techniques which are able to detect coherent noise will have a difficult time characterizing six qubits. I've shown that our noisy simulations with independently measured device parameters match our experimental data well, and our method correctly detects increases in added coherent and incoherent noise in the way that we expect. Okay, thanks for your attention.