Hi everyone. I'm from IBM Research. Today I'm going to talk about collaborative work with my colleagues at IBM on topological and subsystem codes on low-degree graphs with flag qubits. The result was recently published in PRX. The motivation for this work was partially discussed in Jerry's talk, which I also recommend. Here we consider superconducting qubit architectures, for example ones that use the cross-resonance gate to implement two-qubit gates. In this kind of architecture, neighboring qubits must have different frequencies. However, due to the fabrication process, there are always some imperfections in the frequency assignments, which result in frequency collisions. The fewer distinct frequencies that must be used, the higher the success rate during fabrication, and in general the lower the crosstalk errors. This motivates designing topological and subsystem codes on low-degree graphs, which suffer fewer frequency collisions.

To briefly introduce the idea: we put qubits on the vertices of a graph, and this middle qubit has a degree-four connection. To reduce the degree, we can split the single vertex into two, so each becomes degree three. We can further put another vertex in the middle, which has only a degree-two connection. This motivated the building of the IBM 53-qubit Rochester device, which is laid out on a so-called heavy hexagon lattice, where qubits sit on both the vertices and the edges of the hexagons. We have designed a so-called heavy hexagon code, which fits perfectly on this type of device: both data and ancilla qubits are placed on this lattice. We deform the lattice into a square shape just for convenience. Here the yellow circles are the data qubits, and the black and white ones are the ancilla qubits. We also show the CNOT gates applied in this code, and the scheduling of these gates.
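The vertex-splitting step above can be sketched with a toy adjacency list. The vertex names here are purely illustrative, not taken from the paper:

```python
# Degree reduction by vertex splitting: a degree-4 vertex c with
# neighbors n1..n4 is split into c1, c2 joined through a middle
# vertex m, so no vertex exceeds degree 3 (toy illustration).
original = {
    "c": ["n1", "n2", "n3", "n4"],
    "n1": ["c"], "n2": ["c"], "n3": ["c"], "n4": ["c"],
}
split = {
    "c1": ["n1", "n2", "m"],   # c1 keeps two of c's neighbors
    "c2": ["n3", "n4", "m"],   # c2 keeps the other two
    "m":  ["c1", "c2"],        # degree-2 mediator vertex
    "n1": ["c1"], "n2": ["c1"], "n3": ["c2"], "n4": ["c2"],
}

max_deg_before = max(len(nbrs) for nbrs in original.values())
max_deg_after = max(len(nbrs) for nbrs in split.values())
print(max_deg_before, max_deg_after)  # 4 3
```

Repeating this gadget on every high-degree vertex of a square lattice is what produces the "heavy" lattices with qubits on both vertices and edges.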
To introduce the code in more detail: the heavy hexagon code is a subsystem code, so it has gauge operators. In the bulk there are basically two types: a four-body X gauge operator and a two-body Z gauge operator. To measure the four-body X gauge operator, we use the three ancilla qubits in the middle. The black one is the usual syndrome measurement qubit, which reads out the value of the gauge operator. The white ones we call flag qubits, and they have two uses. One is to mediate the entanglement between the data qubits and the measurement qubit. The other is that they can also significantly reduce the logical error rate, as we will discuss later. We can see that a single Pauli X is propagated by the circuit to four X operators on the data qubits, which entangles the data qubits with the measurement qubit; then by measuring the measurement qubit, you read out the value of the gauge operator. Similarly, there is a circuit to measure the two-body Z gauge operator. These have to be done sequentially, so the whole measurement circuit takes 11 time steps.

Since it's a subsystem code, it also has stabilizers, and there are two types in the bulk. One is a two-column strip of X operators, which is the product of the four-body X gauge operators; these are essentially the same as the stabilizers in the Bacon-Shor code, so we give them the nickname "Bacon strips". The other type is a four-body Z stabilizer, which is the product of two-body Z gauge operators; there are also two-body Z stabilizers on the boundary. So the Z part is like a surface code, and essentially the heavy hexagon code is a hybrid of the surface and Bacon-Shor codes. We also introduce another type of code, called the heavy square surface code, where both the data and ancilla qubits are placed on the heavy square lattice, which has qubits on both the vertices and the edges of a square lattice.
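The propagation of a single Pauli X through the CNOTs can be sketched as follows. The gate schedule and qubit labels below are illustrative, not the exact circuit from the paper:

```python
def propagate_x(initial_x, cnots):
    """Track X errors through CX gates: an X on the control
    propagates an extra X onto the target (X_c -> X_c X_t);
    two X's on the same qubit cancel."""
    errs = set(initial_x)
    for control, target in cnots:
        if control in errs:
            errs ^= {target}
    return errs

# Illustrative schedule: measurement qubit m entangles with the two
# flag qubits f1, f2, which in turn touch the four data qubits d1..d4.
schedule = [("m", "f1"), ("m", "f2"),
            ("f1", "d1"), ("f1", "d2"),
            ("f2", "d3"), ("f2", "d4")]

# An X placed before the circuit spreads to all four data qubits,
# which is how the ancilla picks up the four-body X gauge operator.
print(sorted(propagate_x({"m"}, schedule)))
# ['d1', 'd2', 'd3', 'd4', 'f1', 'f2', 'm']
```

The same bookkeeping shows the danger discussed later in the talk: a fault injected mid-circuit on a flag qubit reaches only a subset of the data qubits, producing a weight-two data error.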
This lattice has a mix of degree-four and degree-two vertices, so it has lower degree than the standard surface code. The stabilizers are the same as the surface code's, and the measurement circuit is the same as that of the four-body gauge operator in the heavy hexagon code, with similar circuits for the X and Z parts. The whole circuit depth is 14 time steps, as opposed to six time steps in the standard surface code.

Now we can see that these two types of codes indeed give a significant reduction in frequency collisions. In order for neighboring qubits to have different frequencies, these two codes need only three frequencies, indicated by the colors; in other words, the two graphs are three-colorable. In contrast, to implement the standard surface code you need five different frequencies. We can also slightly modify the vertices on the boundary by merging three of them; the price is that you then have to introduce a fourth frequency. We then numerically simulate the mean number of frequency collisions for the different codes. Here black is the surface code, red is the heavy square code, and blue and purple are the two versions of the heavy hexagon code. We can see that at the same code distance, the surface code has significantly more frequency collisions than the heavy square and heavy hexagon codes; note that this is plotted on a logarithmic scale.

Now let's talk about the extra use of the flag qubits. When a single fault in the measurement circuit occurs on an ancilla qubit, it can be propagated by the circuit into a weight-two error on the data qubits. This is bad because it could potentially reduce the effective code distance. To resolve this problem, we have the flag qubits: once such an event happens, the flag qubit measurement is triggered by the error that occurred on the flag qubit.
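Returning to the frequency assignment for a moment: requiring neighboring qubits to have different frequencies is exactly a graph coloring problem, so "three frequencies suffice" means the connectivity graph is 3-colorable. A minimal check on a toy lattice fragment (vertex labels are illustrative):

```python
def is_proper_coloring(edges, color):
    """A frequency assignment works iff adjacent qubits get
    different frequencies, i.e. it is a proper graph coloring."""
    return all(color[u] != color[v] for u, v in edges)

# Toy heavy-lattice fragment: degree-3 vertices a, b joined through a
# degree-2 mediator m, plus a few leaf neighbors.
edges = [("a", "m"), ("m", "b"),
         ("a", "x1"), ("a", "x2"), ("b", "y1"), ("b", "y2")]

# Three frequencies (0, 1, 2) suffice on this fragment.
freq = {"a": 0, "b": 0, "m": 1,
        "x1": 1, "x2": 2, "y1": 1, "y2": 2}
print(is_proper_coloring(edges, freq))  # True
```

On the full heavy hexagon or heavy square lattice the same kind of check confirms three colors in the bulk, versus five for the degree-four standard surface code layout.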
From that information you know this event has happened. Similarly, for a different error location, the other flag qubit can be triggered. Now let's talk about the decoding graph for this type of code. For the Z-type syndromes, we have a decoding graph whose main part is similar to the surface code's. Here the vertices represent the measurement qubits, which give you the error syndromes. In addition, you also have these green circles, which represent the flag qubits. The solid edges are the usual standard edges, the same as in the surface code, but there are also extra cross edges corresponding to the events in which a single fault triggers two data qubit errors. Once a flag qubit is triggered, either on the right or the left, it means there could be a single fault leading to two errors, or it could also be that the fault leads to only a single data qubit error. These two edges are highlighted when flag qubits are triggered, and due to its shape we call this highlighted region the boomerang region. Now that we have the decoding graph: in general the measurement syndromes can be highlighted, and the flag qubits can also be highlighted. In the case of measurement errors and circuit-level noise, one has to study a 3D version of the decoding graph where extra 3D edges are introduced.

The decoding protocol using these flag qubits is pretty straightforward: we basically apply standard minimum-weight perfect matching on the decoding graph, with the change that the edge weights are renormalized, conditioned on the information from the flag qubits. When a total of m flags are triggered, we renormalize the edge weights as follows.
For edges inside a boomerang region, we choose the weight to be -log p, where p is of the order of a single-fault probability. For edges outside the boomerang regions, we renormalize by multiplying the probability by p^m, where m is the total number of triggered flags. In this way, we lower the probability, and hence increase the weight, of the edges outside the boomerang regions. The minimum-weight perfect matching algorithm will then find a path that prefers to go through the boomerang regions, taking into account the faults indicated by the triggered flags. In the paper, we prove that once you do this, you actually preserve the full code distance of the code.

Here are the numerical simulations of the two codes, heavy hexagon and heavy square, for both X and Z errors. For the heavy hexagon code, the X part behaves just like the surface code, so it has a threshold, which is more than half that of the standard surface code under the same depolarizing noise model, which is pretty good. For the Z part, because of its Bacon-Shor nature, there is no error threshold, but the logical error rate is of the same order as the X logical error rate. So in the regime where the code distance is below d = 20, it behaves quite well. For the heavy square code, we know the measurement circuit is much deeper: 14 time steps, as opposed to six time steps in the standard surface code. However, thanks to the flag information and the flag decoding scheme, we get an error threshold only slightly below half that of the standard surface code, which also suggests that this type of decoder is pretty powerful.

Now to the summary of this talk: we have found new codes defined on the heavy hexagon and heavy square lattices, which can significantly reduce frequency collisions and crosstalk.
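The flag-conditioned reweighting can be sketched as follows. For simplicity this sketch uses one fault probability p per edge and treats the boomerang region as a plain set of edge ids; the real decoder works with per-fault probabilities on the full 3D decoding graph:

```python
import math

def flag_reweighted(edge_probs, boomerang, m):
    """Renormalize decoding-graph edge weights given m triggered flags.

    Edges inside the boomerang region keep weight -log(p): a single
    fault there already explains the triggered flags.  Edges outside
    have their probability suppressed by a factor p**m, so their
    weight grows by m * (-log p) and minimum-weight perfect matching
    prefers paths through the boomerang region.
    """
    weights = {}
    for edge, p in edge_probs.items():
        if edge not in boomerang:
            p = p * p**m
        weights[edge] = -math.log(p)
    return weights

probs = {"e_in": 1e-3, "e_out": 1e-3}
w = flag_reweighted(probs, boomerang={"e_in"}, m=2)
print(w["e_out"] > w["e_in"])  # True
```

With m = 0 (no flags triggered) the reweighting is a no-op and the decoder reduces to standard minimum-weight perfect matching.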
The heavy hexagon code is a hybridization of the surface and Bacon-Shor codes, and the introduction of flag qubits preserves the full code distance and significantly reduces the logical error rate. Also, the logical error rates and error thresholds of both codes are quite competitive with the standard surface code, despite the much longer depth of the measurement circuits. The obvious advantage in terms of hardware implementation is that they have significantly lower crosstalk errors and frequency collisions. Okay, that's my talk. Thanks for your attention.