Hi, welcome to What's Next, a seminar series presented by IBM Research, in which we spend time with our researchers learning about the exciting work that they do. I'm Shaheen Parks, an innovation strategist, and I'm excited to be here today with Dr. Micah Takita, who will be discussing quantum error correction with us. Dr. Takita is a research staff member at IBM Quantum with expertise in experimental quantum computation. She joined IBM in 2015 from Princeton University after completing her PhD in electrical engineering. Dr. Takita specializes in the control, characterization, and benchmarking of multi-qubit quantum systems. Today, Micah will discuss her work in taking quantum error correction from theory to practical application. She'll first give us a very quick tutorial on what error correction means in this context, and then talk with us about innovations on both the hardware and software side that are enabling practical experimentation. She'll then take us through some experimental results and show us how these relate to IBM's quantum roadmap. With that, I'll hand it over to Micah.

Thank you, Shaheen, for the introduction. Today, I will tell you about how to make quantum error correction more practical. Here's an overview of my talk. First, why do we need quantum error correction? Quantum systems are inherently noisy, and we need a way to deal with that noise. Classical computers also use some type of error correction, so I'll discuss some similarities and differences when working with quantum systems. I will then introduce you to a popular quantum error correction code, the surface code, which is well suited to near-term realization with current hardware technology, and tell you more about the innovations IBM has made in co-designing a new heavy-hexagon code that is more practical when building the hardware. The main focus of this talk will be the progress we have made with superconducting qubits at IBM, and a recent demonstration of a distance-2 error detection code that we ran on one of IBM's 27-qubit Falcon processors. Finally, to wrap up the talk, I'd like to discuss how the IBM Quantum hardware roadmap aligns with future work on quantum error correction.

Faulty quantum systems require some type of error correction, but first, what do I mean by faulty? Quantum systems are inherently noisy and easily influenced by environmental fluctuations. A typical error in quantum systems is qubit decoherence, which is a loss of quantum information. Here is a Bloch sphere representing a single qubit, with the zero state at the north pole and the one state at the south pole. The T1 relaxation time characterizes decay from the one state to the zero state, as you see here in the big blue arrow. Qubits not only have zero and one states but can also be in a superposition state, where the phase of the state also carries information. The Tφ dephasing time characterizes randomization of the phase φ, as you see here in the green arrow, and these two, T1 and Tφ, together determine the T2 coherence time. Both T1 relaxation times and T2 coherence times have improved over the years. As you may have heard, we have recently reached an average of about 300 microseconds for T1 and T2 on a large 27-qubit Falcon processor. Nonetheless, this is still a large source of error in our systems. Other errors in our systems include gate errors that can come from poor calibration, state preparation and measurement errors, crosstalk errors, and leakage errors that take a qubit out of the computational basis, to name a few.
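To make those coherence numbers a little more concrete, here is a minimal Python sketch (my own illustration, not from the talk) of two textbook relations: the probability that an idling qubit relaxes out of the one state, and how T1 and Tφ combine into T2 via 1/T2 = 1/(2·T1) + 1/Tφ. The 300-microsecond T1 is the average quoted above; the Tφ value is an assumed example.

```python
import numpy as np

def t2_from_t1_tphi(t1_us, tphi_us):
    """T2 coherence time from relaxation (T1) and pure dephasing (Tphi).

    Standard relation: 1/T2 = 1/(2*T1) + 1/Tphi.
    """
    return 1.0 / (1.0 / (2.0 * t1_us) + 1.0 / tphi_us)

def bit_flip_probability(idle_us, t1_us):
    """Probability that a qubit prepared in |1> has relaxed to |0> after idling."""
    return 1.0 - np.exp(-idle_us / t1_us)

# T1 ~ 300 us as quoted in the talk; Tphi = 300 us is an assumed example value.
t1, tphi = 300.0, 300.0
print(f"T2 = {t2_from_t1_tphi(t1, tphi):.0f} us")                             # -> 200 us
print(f"P(flip) after 100 us idle = {bit_flip_probability(100.0, t1):.3f}")   # -> 0.283
```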
So quantum systems are faulty, and to build a universal quantum computer that can solve large and complex problems, we will need some type of quantum error correction. Let's start by reviewing how classical error correction works. In a classical system, each physical bit, 0 or 1, can be encoded into many zeros and ones; in this example, seven copies. When there is some noise, some of the bits can flip from 0 to 1 or 1 to 0. In this example, the third and the sixth bits, shown in red, have flipped. But as long as fewer than half of the copies experience a bit flip, you can look at the bit string and simply take a majority vote to decode back to the correct state. Here, the code distance d roughly indicates the amount of noise tolerance, and you can expect a lower logical error P_L as you increase the distance d for a given physical error rate p, which lies between 0 and 1.

That was classical error correction. Now, how about quantum error correction? Can we do the same? Similar to classical error correction, quantum error correction relies on encoding, dealing with some noise, and decoding. But a key difference is that we can't copy an unknown quantum state, as proved by the no-cloning theorem. As you see in this example, rather than making copies of psi, you can encode a logical psi into a state like |000> + |111> and use that as your logical qubit. Another difference is that measuring a qubit can destroy entanglement and end the computation. To get around this, we can use an additional ancilla qubit to measure a parity of the data qubits.

So let's talk a bit more about parity measurements next. The circuit shown on the left of the slide is a sample circuit that extracts the parity of two qubits using a third ancilla qubit. The left plot illustrates what parity measurements are, and the right table is an example of using the parity to detect errors. The parity of a bit string can be even or odd, and we can find it by measuring the ZZ operator. A ZZ parity measurement can have either a +1 or a -1 eigenvalue, corresponding to even or odd parity. In this circuit, measuring 0 on the ancilla qubit corresponds to the +1 eigenvalue, or an even state, and measuring 1 corresponds to the -1 eigenvalue, or an odd state. If you follow the colors here, the white and green inputs show 00 and 11, which have even parities: when you run this circuit and probe using the third ancilla qubit, you measure 0, which tells you the state has even parity. The red and blue states, 10 and 01, have odd parities, which is confirmed when running them through this circuit: the ancilla qubit ends up in 1, corresponding to the negative eigenvalue. By projecting the parity onto the third ancilla qubit, this circuit lets you measure the parity of the input qubits without destroying their entanglement, if present.

So this looks like a good way to detect bit-flip errors. But qubits, unlike classical bits, can have phase-flip errors along with bit-flip errors, so let's consider an example where you can detect both types of errors. To the right is a toy model, where a Bell state |00> + |11> is our target state. By measuring both the ZZ and XX operators, you can detect both types of errors, and you can measure these two operators simultaneously since they commute.
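Before looking at the eigenvalue table on the next slide, here is a minimal numpy sketch (my own illustration, not from the talk) that verifies these parities directly: it prepares the Bell state, applies each single-qubit Pauli error to the first qubit, and evaluates the ZZ and XX expectation values.

```python
import numpy as np

# Single-qubit Paulis and the Bell state (|00> + |11>)/sqrt(2).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

ZZ = np.kron(Z, Z)
XX = np.kron(X, X)

# Apply each error to the first qubit and read off the stabilizer eigenvalues.
for name, err in [("none", I), ("X (bit flip)", X),
                  ("Z (phase flip)", Z), ("Y (both)", Y)]:
    state = np.kron(err, I) @ bell
    zz = np.real(state.conj() @ ZZ @ state)
    xx = np.real(state.conj() @ XX @ state)
    print(f"{name:16s} ZZ = {zz:+.0f}, XX = {xx:+.0f}")
```

The printed signs reproduce exactly the table discussed next: each error type leaves its own fingerprint on the pair of parities.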
The target state without any error will have +1 eigenvalues for both the ZZ and XX operators. But if you have a bit-flip error, an X error, the state will have a -1 eigenvalue for the ZZ operator and +1 for XX. With a phase-flip error, a Z error, it will be +1 for ZZ but -1 for XX. And finally, if you have both a bit and a phase flip, a Y error, that leads to -1 eigenvalues for both ZZ and XX. My colleagues at IBM did an experimental demonstration of these parity checks back in 2015 using four superconducting qubits. Note that this example stabilizes a single state but does not protect a code space encoding a logical qubit.

So let's look at an example of how we can encode a logical qubit and protect it. The surface code is a very popular code where you only need nearest-neighbor coupling, and it has a relatively high threshold. I will get back to the concept of a threshold in a bit, but let's look at the layout first. This is a picture of a d = 5 surface code, which can detect and correct any two errors. The black qubits are called the data qubits. A blue square denotes that the four black data qubits on its corners are checked for their bit parity, and a pink square denotes a check of their phase parity. You detect errors by running these parity checks, both bit and phase, over and over, and based on the outcomes of the parity measurements, you can keep track of which errors happened on which qubits. I won't go into details on how logical gates are implemented on this code, but I'll give you an example using a smaller-distance code later on.

I mentioned that one of the appeals of working with the surface code is that it has a relatively high threshold. So what does that mean? Here I have a simulation plot of the logical error at different distances as you sweep the error rate of the physical qubits used to encode a logical qubit. Let's focus on two curves: the black one, which has the smallest number of qubits at the smallest distance, three, and the magenta curve, with more qubits at distance 13. When the physical error is high, above the threshold shown by the purple line, the larger the distance, and hence the more qubits, the higher the logical error. But below this purple line, at low physical error, increasing the distance gets you lower and lower logical error. So you see the benefit of using more qubits to encode a logical qubit if your physical errors are low enough. (I'll illustrate this crossover with a small numerical sketch at the end of this section.)

The surface code is an attractive choice, especially for the superconducting qubit community, due to these properties, which is why IBM had followed the path of building this lattice in the past. What we found, though, was that the surface code's requirement of four nearest neighbors to run the parity checks makes devices difficult to build. The main difficulties come from, one, frequency crowding that leads to collisions that make it harder or even impossible to tune up gates, and two, crosstalk, because so many qubits are coupled to each one. IBM has been developing larger and larger superconducting qubit systems in recent years, but we have modified the connectivity so that it forms a heavy-hexagon lattice. In this lattice, there are at most three nearest neighbors for each qubit, and many of them, the black and the orange qubits in this picture, have only two nearest neighbors. This change in qubit layout increases the yield of fabricating a collision-free device.
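The surface-code simulation in the slide is beyond a quick sketch, but the same below-threshold crossover shows up in the simple classical repetition code from earlier in the talk, where majority-vote decoding fails only when more than half the bits flip. This toy model's threshold is p = 0.5, far above a real surface code's, but the qualitative behavior (larger distance helps below threshold and hurts above it) is the same. A minimal Python sketch, my own illustration:

```python
from math import comb

def logical_error(p, d):
    """Majority-vote failure probability for a distance-d repetition code.

    The decoder fails when more than half of the d bits flip,
    each independently with probability p.
    """
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# Below the toy threshold, larger distance wins; above it, it loses.
for p in (0.01, 0.2, 0.6):
    print(f"p = {p}: P_L(d=3) = {logical_error(p, 3):.2e}, "
          f"P_L(d=13) = {logical_error(p, 13):.2e}")
```

At p = 0.01 the distance-13 code is many orders of magnitude better than distance 3, while at p = 0.6 it is worse, mirroring the black and magenta curves crossing at the purple threshold line.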
So now we don't have the nice square lattice that you can map the surface code onto, but the theorists at IBM came up with a new hybrid code, called the heavy-hexagon code, where you can encode logical qubits similarly to how it's done in the surface code. The threshold for one type of error is around 0.45%, which is not much lower than the surface code's.

Now that you have a bit of background in quantum error correcting codes, I'm going to start describing the details of experimental demonstrations that are now possible because of improvements in coherence, lower gate errors, reduced frequency collisions due to the lattice design, and, most importantly, improvements in measurement. This example code is called the [[4,1,2]] error detection code, where you encode one logical qubit into four physical qubits, making it a distance-2 code. Since the number of arbitrary errors you can correct is (d - 1)/2, which here is one half, this is not an error-correcting code but an error-detecting code. In this code we have four data qubits and three ancilla qubits. With a similar color scheme as before, the pink square denotes that the four orange data qubits on its corners are checked for their phase parities using the pink ancilla qubit, and the two blue qubits are also used as flag qubits for the phase parity checks. The blue rectangles check for the bit parities, using the blue qubits again, this time as Z-parity ancilla qubits. This code can be mapped onto a section of IBM's 27-qubit Falcon processor, and the parity checks, or stabilizers, that we measure are one weight-four X check and two weight-two Z checks. Logical gates and states are defined here, and you can apply these logical gates to the states shown on the right.

In the surface code example, I mentioned how we repeat the parity checks over and over to keep detecting errors. To do this, we need to keep measuring and reusing the ancilla qubits as we run more and more parity checks. So the ancilla qubits are initialized after each measurement, and this is done by applying a conditional reset: an FPGA determines the state of the qubit and applies a bit flip depending on the outcome of the measurement (see the small sketch after this section). This is a very difficult task when trying to run it on timescales similar to your gates. The IBM team has designed a quantum device that allows us to measure and reset with high fidelity in less than 800 nanoseconds using state-of-the-art control electronics. How about other errors? This device has sub-1% error rates for both the two-qubit gates and measurement plus reset. Combining all the gate, measurement, and reset times necessary to run these parity checks, the time it takes to run one Z check is around 1.7 microseconds, and an X check takes about 2.9 microseconds.

In this slide, you see an error plot against time, or rounds, where rounds are defined in the circuit drawn on the right: one pink region plus one blue region corresponds to one round of error detection. Earlier in the talk, I discussed several errors that can be present in a quantum system. Here, to compare how well we can preserve our encoded state, let's first consider the error that arises from the T1 relaxation time. A qubit prepared in the one state decays to the zero state with average lifetime T1, and the seven qubits we use for this experiment have T1 times ranging from 96 to 193 microseconds, shown as the yellow region in the plot. The longer you try to keep the qubit in the one state, the more likely it is to experience a bit-flip error.
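As a rough circuit-level illustration of the conditional reset mentioned above (the real experiment implements this as low-latency FPGA feedback in the control electronics, not as a user-level circuit), here is a minimal Qiskit sketch using the if_test classical-feedback interface; the register names are my own:

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

anc = QuantumRegister(1, "ancilla")
syn = ClassicalRegister(1, "syndrome")
qc = QuantumCircuit(anc, syn)

# ... parity-check gates targeting the ancilla would come here ...

qc.measure(anc[0], syn[0])      # read out the syndrome bit
with qc.if_test((syn, 1)):      # classical feedback on the measurement outcome
    qc.x(anc[0])                # flip the ancilla back to |0> only if it read 1
```

Qiskit also exposes a built-in reset instruction (qc.reset), but writing the feedback out explicitly mirrors the measure-then-conditionally-flip scheme described in the talk.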
Now, the distance-2 [[4,1,2]] code encodes one logical qubit into four physical qubits, and as you see in the circuit here, the first part is how we encode the qubit, the pink part of the circuit is how you detect phase errors, and finally the blue part checks for bit-flip errors. After initializing the state, you repeat this pink-and-blue block n times; that's the number of rounds you see on the x-axis to the left. Finally, you measure the data qubits to see whether the state at the end is the same logical state you prepared. In this circuit, there are two types of error-sensitive events: one, when there is an error on the flag qubits, and two, when two subsequent measurements of the same stabilizer differ; we post-select, discarding any run with an error event. Here in blue, I am plotting the error after preparing a logical minus state and running the error detection circuit n times. As you see, the logical qubit's error is much lower than the error from physical qubit T1, and when you zoom in and fit to a model, we obtain a lifetime tau of around 986 rounds, which corresponds to about 4.5 milliseconds when converted to units of time, much, much longer than the physical T1 times.

Finally, we need to note that since this is an error detection code, we throw out data whenever we detect an error. As you see in the acceptance probability plot in red, as you run more and more rounds, fewer and fewer results are accepted, so more and more data is thrown out. In our recent paper, "Calibrated Decoders for Experimental Quantum Error Correction," which has been accepted to PRL, we evaluate a new decoding technique and consider partial post-selection, which allows us to keep more data than full post-selection, as shown in the example here. But if you really want a scalable error correction code, you need at least distance 3, a total of 23 qubits in the heavy-hexagon lattice, to correct an arbitrary error. That is the next experimental demonstration we did, which can also be implemented on a Falcon processor, in our new paper, "Matching and Maximum Likelihood Decoding of a Multi-Round Subsystem Quantum Error Correction Experiment."

So here's the summary of what we talked about today. I showed you a distance-2 quantum error detection demonstration on an IBM Falcon processor, where we saw the benefit of logical encoding. Using the same device, we can map a distance-3 error correction code, which I didn't discuss today but which we have demonstrated in our new paper. I hope you have a better idea of why we are continuing to build larger devices with the heavy-hexagon lattice, which provides a more practical layout with a higher yield of collision-free devices. And jumping to the very bottom of the slide, you can see IBM's current progress on the hardware roadmap: we are on track to our target systems. Note that as the number of qubits increases in each generation, you can map a larger-distance code. So as we increase the number of qubits, we also need to work on lowering physical error rates, bringing them well below the threshold, so that as we encode a logical qubit into a larger number of qubits, we can achieve the negligible logical error rates we need for solving more complex problems. Thank you for your attention.

Micah, thank you. I have questions for you that run the gamut from very detailed to big picture. To start out: when you were talking about the change in hardware structure, you mentioned that the newer structure results in fewer collisions.
Could you talk with us about frequency collisions and what that means?

Yeah. So at IBM, we've been working with fixed-frequency transmon qubits. That means the qubits have fixed frequencies that we don't modify, and there is always-on coupling between neighboring qubits. So say two qubits are coupled: if both qubits have the same frequency, that can be a problem, because in controlling one of them, you could actually drive the second qubit as well. Also, transmons have not only the zero-one transition that we use for the qubit states, but also higher transitions that you have to worry about. So there is not just the zero-one frequency but a number of other frequencies for each qubit that you want to avoid having neighbors land on. By changing the lattice from the square lattice to the heavy-hexagon lattice, each qubit is coupled to fewer qubits, and with fewer neighbors there is less chance of a collision. That leads to a better yield in fabricating a good collision-free device.

So it's not that qubits are colliding less frequently, rather that the frequency collisions are happening less often.

Right. It's set from the beginning: once you make a device, the qubit frequencies don't change; they are defined parameters. So once you make the device, if there's a collision, you can't really use it, and you have to try making a new device with different frequencies, aiming for one with no collisions.

Got it. Thank you for the clarification. You also mentioned that the newer Falcon devices have achieved higher coherences. What relationship does that have to the error correction demos that you're working on?

Yeah, great question. Higher coherence relates directly to lower physical qubit error rates. I mentioned how, for quantum error correction demonstrations, we want lower and lower physical errors, and longer coherence is one way to achieve lower error rates. There are also times when the qubits are just idling, and with longer coherence you don't have to make the gates as fast as possible: if you keep the gates the same length but double the coherence, the error that accumulates over that same period of time is roughly halved. That translates directly into how the logical qubit performs. So coherence is another really important factor.

Got it. So when we think about what's in the wings, what are the next steps toward improving logical qubit performance?

Yeah. Again, the physical error rates have to go down, and we also want to improve the measurement capabilities; right now, the measurements are pretty long. Also, in our superconducting qubit systems, as I mentioned, there are higher states beyond the zero and one states of a single qubit, so you can have leaked states sitting there. We want those errors to go away too. So we need to reduce a lot of errors: not just the gate errors, but also the measurement errors and those leakage errors.
But also, as you grow to larger lattices, you can try to implement these new logical qubits. We don't want to think only about making larger and larger devices, though; we also want to come up with ways to do more efficient error correction. On the theory side, a lot of progress has been made beyond the planar codes, so we also want to start considering how those could be realized in hardware.

Got it. So let's jump up a level. It sounds like error correction is something that we all take for granted in classical computing, but here it's a reality and an important problem to be solved. Would you say that quantum error correction is critical to the future of quantum computing?

Yeah, absolutely. Quantum systems are always going to be noisy, so we need some way to correct for those errors, especially if you want something accurate. In theory, if you have a fault-tolerant quantum computer, that could definitely give you an advantage, say an exponential advantage. That doesn't necessarily mean that without error correction you can't do anything, but with the proofs we have now in theory, you do need error correction for these really complex problems. Even without it, there are a lot of things you might be able to try in the near term. We now have a device that is over 100 qubits: the 127-qubit Eagle device was just deployed last year, and it's very exciting; it's something we can no longer simulate efficiently on a classical computer. With that device, if you could have slightly lower errors and then maybe apply some error mitigation methods, even without quantum error correction, we might be able to see some advantage. It's a very exciting time.

And it sounds like error correction is an important piece of the puzzle. Yes, fantastic. Well, Micah, this has been extremely informative and so interesting to learn about. Thank you so much for taking the time to share your work with us today. And I want to go ahead and say thank you to our audience as well. We'll be back next month with an AI-themed session on using grammar rules and graphs for molecular generation. So stay tuned. See you soon.