We'll mess with the mic, and we're ready to go. Yeah? All right, welcome back for lecture three, and we have Ben Krueger again. Nice. All right, so, is it enough to have a code that can just detect and correct Paulis? It is, and I will not prove this, because the proof would be too long, and it would have required too much studying for me to prepare. Man, when you're a student, they make you study, but now they're paying me; they can't make me do anything. So I'll tease you a little bit with one thing that you can correct with a code that just corrects Paulis, even though it's not a Pauli. And that is a coherent rotation by a Pauli; that is self-correctable, right? So maybe some of you know already, probably most of you know, that I can have U_P(theta) equal to the exponential of i times a Pauli times theta, some angle, right? It's going to be some unitary operator, and you can do a bunch of Taylor expansion, knowing that the square of a Pauli is the identity, to get that this is cos(theta) times the identity plus i sin(theta) times P. So it's just some linear combination, with complex coefficients, of operators, one of which is a Pauli. What would happen if we applied this unitary to a state that was stabilized by some stabilizer S, and then we measured S, assuming that S anti-commutes with the Pauli? Because, spoiler alert, if S commutes with the Pauli, nothing happens, right? So let's set up S times cos(theta) identity plus i sin(theta) P acting on psi; do we even need this thing? Ah, no, sorry. We're going to project into one of the two eigenspaces, right, with either the plus-one or minus-one projector, (I plus or minus S) over two. So we expand out: this is going to be equal to cos(theta) times (I plus or minus S) over two acting on psi, plus i sin(theta) P times (I minus or plus S) over two acting on psi, because P and S anti-commute. Right. With the plus-one outcome, (I plus S) over two acting on psi gives us psi, right? Because S psi and I psi are both equal to psi. And (I minus S) over two acting on psi cancels out completely, so you just get cos(theta) times psi, which, once you normalize by the square root of the outcome probability, et cetera, is psi again. And with the minus-one outcome, the first term cancels out completely, and you're left with i sin(theta) P acting on psi; the outcome probability is sine squared theta, and the normalized state is just P acting on psi. So whenever you measure a stabilizer that anti-commutes, even though there was some coherent rotation, the error you're left with is just some Pauli. The projective measurement collapses you into a space that's spanned by Paulis acting on the original state, and you can just continue with your regular error correction formalism from there. The full proof that you can correct arbitrary dynamics is more involved, and it even covers non-stabilizer codes; if you want to know more about that, you should look at chapter two of Brun and Lidar, the book they edited. Yeah, well, okay, I would say they wrote the book on quantum error correction, except they didn't write the book, they edited the book, but it is also called Quantum Error Correction, right, and there's a teaser in there for the more general applicability of these stabilizer codes. Okay, so all this is to say that it is enough to correct Paulis. The remaining question should then be, you know, how many Paulis can you correct with a certain number of qubits? Oh, yeah.
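To put the board calculation in one place (same notation as above; nothing new assumed):

```latex
U_P(\theta) = e^{i\theta P} = \cos\theta\, I + i\sin\theta\, P, \qquad
\frac{I \pm S}{2}\, U_P(\theta)\, |\psi\rangle
  = \cos\theta\, \frac{I \pm S}{2}\, |\psi\rangle
  + i\sin\theta\, P\, \frac{I \mp S}{2}\, |\psi\rangle ,
```

using $SP = -PS$. With $S|\psi\rangle = |\psi\rangle$, the $+1$ outcome leaves $\cos\theta\,|\psi\rangle$ (probability $\cos^2\theta$, renormalizing to $|\psi\rangle$), and the $-1$ outcome leaves $i\sin\theta\, P|\psi\rangle$ (probability $\sin^2\theta$, renormalizing to $P|\psi\rangle$ up to a phase), which is a plain Pauli error that the usual formalism handles.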
So Kiran showed in the last lecture that we can do it with nine qubits. It says "nince" qubits in this PDF, but that's a typo. Can we do better? One way to answer this question is by insisting that each possible syndrome, right, every list of plus or minus ones, or zeros and ones, that you could get out of measuring your stabilizers, corresponds to some unique Pauli error. Let's imagine, well, okay, let's imagine a code with k equals one. Did you cover what n, k, and d are? Okay, n, k, and d, I'm going to slip into saying n, k, and d, so I might as well tell you what they are right now. So n is the number of physical qubits involved in the code. Oh, yeah, it was something I was supposed to introduce in my last lecture. k is the number of logical qubits, right, so you want n low and k high, and d is the distance, which is just the minimum weight of a logical operator, right? It's the number of things that have to go wrong before random dynamics have flipped your qubit completely and you have no chance of detecting it. So you want high d, high k, low n. For a code with k equals one, so an [[n, 1, d]] code, there are n minus one stabilizers, right, and the number of things that can go wrong, the number of single-qubit Pauli errors, is three times n, right, X, Y, and Z on each of the n qubits, plus one for the identity. The identity, you know, if nothing goes wrong, that has to give you a unique signal too; it can't look the same as some error. So, you know, you have to be able to successfully conclude that nothing happened. And the number of bit strings that we can measure is two to the n minus one, right? So the number of syndromes has to be greater than or equal to the number of errors that can occur, which is three n plus one. Whenever we have a discrete inequality like this, I'm not going to try and do algebra. I'm just going to start plugging in values, right? Well, okay, it doesn't make sense to have n equals zero. What about n equals one? You know, one is not greater than four. What about two? Two is not greater than seven. If you keep going, you'll note that the smallest value for which this inequality can be satisfied is five. n has to be five or more. And as it turns out, there is a [[5,1,3]] code that, you know, just barely satisfies it. It's got 16 possible stabilizer syndromes, and there are 16 things that can go wrong, because three times five plus one is equal to two to the four. And here are the stabilizers: XZZXI and its cyclic shifts, four of which are independent. And its logical operators? Well, there are weight-three logical operators, but they look ugly, so I'm going to write some symmetric-looking, but not minimum-weight, logicals: X on all five qubits and Z on all five qubits. People do that all the time. Fun historical fact: this code on five qubits, and the Shor code, and the Steane code, which we're about to see, were all derived first using ket notation, before the stabilizer formalism was invented. So there really were people out there messing with nine-qubit kets. Luckily, Daniel Gottesman saved us from all this, and we don't have to do that anymore. Now we can add all kinds of overhead and still say that we know what we're talking about. Again, double-edged swords. Yeah, and we mentioned that the number of correctable errors is roughly half the distance, right? If in this distance-three code we were to have one error, we could assign a unique syndrome to it, but if you multiply that error by a logical, you may get a weight-two error that has the exact same syndrome.
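If you would rather let a computer do the plugging-in, here is a minimal sketch of that counting argument in Python; nothing is assumed beyond the inequality above.

```python
# Counting bound for an [[n, 1, d]] code: n - 1 stabilizers give 2**(n - 1)
# syndromes, which must cover the 3n single-qubit Paulis plus "no error".
def smallest_n():
    n = 1
    while 2 ** (n - 1) < 3 * n + 1:
        n += 1
    return n

print(smallest_n())  # -> 5; and 2**4 == 3*5 + 1, so the [[5,1,3]] code is tight
```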
So you might say it's less likely, but less likely is not impossible. So there's still going to be some finite probability of failure, no matter what code distance you use. The hope is that we decrease the failure probability exponentially in the amount of overhead. All right, so that's one code. Kiran has used it. Did we do anything with the Shor code? Not yet, but the rest of the talk is basically going to be about what kinds of codes you can construct and how easily. So, should I talk about classical coding theory now? No, first we're going to do a code that comes from some classical coding theory that I won't show you yet. The single parity check code, a.k.a. the iceberg code. So, single check and/or iceberg, right? It's got n minus two logical qubits, which is a very large number of logical qubits, that's like the maximum you can have, but it's only distance two. And you'll see why. Its stabilizers are, and I'm going to use the tensor-product symbol in the exponent for a power, X to the tensor n, so X on everything, and then Z to the tensor n, Z on everything, right? So it's two stabilizers. And we can see that the four-qubit code I introduced in the last session was one of these. And if an X, Y, or Z happens on any qubit, at least one of these stabilizers is going to anti-commute with it. But the syndrome is the exact same regardless of where that error happens. So you can tell that something has gone wrong, but not where. And that's why it's distance two, right? So, there's one construction. These codes get used all the time, and they get used to construct bigger and better codes. So it behooves us to know about them. Okay, now we're going to get into what Kiran was talking about: concatenation. Concatenation has got to be my favorite code construction. And it generalizes, it does all kinds of stuff. There are rules that, once you learn them, you can break them. Concatenation is lovely. So, what we saw Kiran do was take the encoding Clifford for a single code. Here it's got one input wire, right? And what comes out is some three-qubit state. And then we take all these outputs here, and we encode each of those qubits into some new code. And with this nice tree-like diagram, we can start to synthesize bigger and better codes that way. When we do this, okay, let's imagine doing this with two arbitrary codes. Where's that cloth? Here we go. So, concatenation is going to take a low-level code, which has n_l, k_l, and d_l, right, l for low-level, and a high-level code with, predictably, n_h, k_h, and d_h. And you will get a code out with the products, right? n_h times n_l, because for every physical qubit of the high-level code, you need an entire block of the low-level code; and k_h times k_l, let's see, we'll get to why in a second (there's also a one-line version of this bookkeeping written out just below). So if your low-level code encodes, say, six logical qubits and you want to feed them into a [[5,1,3]] code, then it's not enough to take five logical qubits out of one six-qubit block and put them into the encoding circuit, because your error probabilities will get messed up and it will not work. We can get into it later in the Q&A, if you like. What you have to do is take separate blocks of the low-level code, like this guy, and feed one logical qubit out of each of them into the encoding circuit independently. And that's basically to ensure that, in order for something to go wrong at the high level, it has to go wrong on separate blocks at the low level. Actually, OK, we can see people are getting confused. So I'll give you the nucleus of this.
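(For reference while the example gets worked out: here is the parameter bookkeeping for concatenation as a one-liner. A minimal sketch, with the usual caveat that the product of the distances is really a lower bound on the distance of the concatenated code.)

```python
# Concatenation of an (n_l, k_l, d_l) low-level code under an
# (n_h, k_h, d_h) high-level code, per the product rule described above.
# The distance entry is a lower bound rather than an exact value in general.
def concatenate(low, high):
    n_l, k_l, d_l = low
    n_h, k_h, d_h = high
    return (n_h * n_l, k_h * k_l, d_h * d_l)

print(concatenate((4, 2, 2), (4, 2, 2)))  # -> (16, 4, 4), the example coming up
print(concatenate((5, 1, 3), (5, 1, 3)))  # -> (25, 1, 9)
```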
So you have X bar 1 equal to XIXI, and X bar 2 here equal to, let's see, XXII. If I multiply these operators, I get IXXI. So if I want to flip one of the logical qubits, I need to flip two physical qubits. If I want to flip the other logical, I need to flip two qubits. And if I want to flip them both, I still need to flip only two qubits; I don't have to flip four. So things go wrong at the logical level with different relative probabilities than they go wrong at the physical level. And so if you take two qubits out of this block into a concatenated construction, then you can get a weight-two error there with the same probability as a weight-one error. Your error models become correlated, and the assumptions that you make start to be invalid. And so the code will not correct as many errors as you might like. And that's why you have to have one full code block, at least in this naive construction, for every physical qubit of the top-level code. And of course, those physical qubits are replaced by k_l logical qubits in every block, so you wind up with k_h times k_l. The benefit of putting up with all that rigmarole is that you get d_h times d_l. So when you want to construct higher-distance codes, something with distance bigger than two or three, you can start taking products. Now, I wonder if I should draw the circuits. I will leave Figure 1 for people who are looking ahead in the notes, as soon as I post them. But I will write out the stabilizers of the [[4,2,2]] code concatenated with itself. So if I take this [[4,2,2]] code and concatenate it with itself, I wind up with a code that was in a paper in 2022, I think, the Prabhu and Reichardt distance-four codes paper. So this is now state of the art. It may seem confusing, but you're now doing legitimate, up-to-date research. All right. Stabilizers of this code. Let me start at the top for some reason: XXXX on the first block, ZZZZ on the first block, and so on down the blocks. You can see that even though it's not exponential, a quadratic amount of work can still be a lot. Also, a lot of your Paulis wind up being sparse, affecting relatively few qubits. And so, both for your own by-hand calculations and for computers, it's sometimes best to just have X's and Z's with indices on them to say where the operator is not the identity, rather than having to write out all the I's in the tableau yourself. This is a demonstration of that, and there's a small sketch of that bookkeeping right after this bit. OK. So now there are eight logical operators to worry about. Oh, actually, sorry, we're not even done with stabilizers yet, because this is just four independent blocks of the [[4,2,2]] code. You can see that because these columns have identities below them, identities in the rest here; these are completely unentangled blocks. So this is distance two: if I apply two errors here, I flip one of these qubits. And I don't have four logical qubits yet, I have eight; four blocks with two logicals each, so eight logical qubits right now. And what I have to do in order to finish the construction of the concatenated code is write the stabilizers of the top-level code in terms of the logical Paulis of the bottom-level code. OK. So you can do whatever you want with these logical Paulis; they act just like the physical Paulis. One thing you can do is take tensor products of them and make stabilizers. OK. So luckily, I have prepared this earlier. So here are the logical operators, IZZ and so on; you should go to pecos.io. But I am still going to erase the thing. I can sense the boredom. X4, IX, IX, all right. Have I done all of this for absolutely no reason? No.
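(As an aside, here is roughly what that index bookkeeping looks like if you hand it to a computer. A minimal sketch in Python; the helper name and the block layout are mine, just for illustration.)

```python
# Convert the "sparse" index notation mentioned above (a Pauli letter plus
# the qubits it acts on) into a dense Pauli string.
def dense(n_qubits, support):
    """support is a dict {qubit_index: 'X'|'Y'|'Z'}; all other qubits get I."""
    return "".join(support.get(q, "I") for q in range(n_qubits))

# X-type stabilizers of four independent [[4,2,2]] blocks on 16 qubits:
for block in range(4):
    print(dense(16, {4 * block + j: "X" for j in range(4)}))
# XXXXIIIIIIIIIIII
# IIIIXXXXIIIIIIII
# IIIIIIIIXXXXIIII
# IIIIIIIIIIIIXXXX
```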
This is all to demonstrate that even though we got fully up to date with state-of-the-art techniques in the first lecture, constructing these tableaus gets cumbersome pretty quickly. You want to get a computer to do it if you can. This is a hackathon; you can maybe try that in a hackathon project if you want to. Also, this is why you often see quantum error correction people working with little esoteric diagrams. So I'm going to switch. I mean, we switched from ket notation to stabilizer notation. Now we're going to switch again. But we're only going to save a polynomial amount of work this time; still worth it when you're working on a blackboard, because of the quadratic speedup. For slow operations like the ones I do with my clumsy human brain and hands, it makes sense to take a quadratic speedup. We can draw individual blocks of this [[4,2,2]] code. Let me put qubits on these little circles. Now I have 16 of them in total. And if you stick them on a square, then they divide naturally into four groups of four. I can also draw, well, OK, so I'll say that there's an S_X and an S_Z hosted on each of these squares. And you can see, obviously, that it's four qubits with two stabilizers; therefore, it's got two logicals. You can even interpret the logicals geometrically. You can put an X bar here, and you know that the anti-commuting Z bar has to go over here. Then you can stick the other X bar up at the top, or you can multiply it by the stabilizer so that it overlaps with the first one on both qubits. But because it overlaps on both qubits, it commutes. And you can stick a Z bar over here. You can begin to say, oh, I have these face-like stabilizers, and my logicals are on the edges, and you can start reasoning geometrically. Like, for example, when I want to produce a weight-four stabilizer on these tiles out of logical operators, I can use X bar, X bar, X bar, and X bar. So I can have vertical X's and create this sort of octagon in the middle. And I can have Z bar, Z bar, Z bar, Z bar and create a little tile that crosses it. And because these things overlap on four qubits, they also commute. And I can also do, because there's an X that I can bring down here, an X on this big octagonal tile and a Z on that big octagonal tile. And that completes the stabilizer group, which is a lot easier than writing out a bunch of identities that I'd have to count through anyway. The logicals you can also visualize. X bar 1 and Z bar 2 are going to live right here, and overlapping on one qubit are going to be X bar 2 and Z bar 1. So Z bar 1 and Z bar 2 commute, because where they overlap it's Z on both. But the other ones intersect on single qubits. And I will also have logicals going down the sides of this lattice: Z bar 3 and X bar 4, and X bar 3 and Z bar 4. Much more succinct, much more compact notation. You can see everything is length four. If you wanted to, you could check out, and this is something that we're in the middle of at the office at the moment, that if you permute these qubits, there are some permutations that preserve the stabilizer group. Like, if I were to pick this entire lattice up off the board, flip it, and put it back down, you wouldn't be able to tell; the stabilizers would look the exact same. But some of my logical operators would move around. This thing would move down here, this would move up to the top. And you've got to wonder whether there would be any non-trivial logical gates that you could do just by permuting the qubits, as you can do with really high fidelity in an ion trap.
There are more pros and cons to concatenation. Oh, OK. One of the pros is that if you're in the scenario where you measure stabilizers perfectly and you learn some syndrome, then there's a decoder where you basically decode the low-level code first, by brute force adding up the probability of every Pauli and assigning them into groups, and then you proceed up to the next level, and you basically iteratively decode this gigantic code without ever having to consider all of the probabilities of all of the sixteen-qubit Paulis. It sort of divides the decoding labor for you. And you can get an optimal statistical decoder that gives you soft decisions. There's a paper from 2006. I think one of our projects, I'm in charge of the projects, one of our projects is replicating the results of that paper. So, I mean, not state-of-the-art research, it's 2006, but you know, it's what you can do in a week. OK, cons of concatenation. Why don't people use concatenated codes all the time? One is that they're sort of granular, they're not very fine-grained. You couldn't create a distance-five concatenated code if you wanted to, at least not with this construction, because five is prime; there is no product of two smaller distances that will get you to five. You also couldn't do three. And sometimes you do want to use a distance-three code, because you don't have that many ions in your trap, so you can't afford lots of overhead, and you want to see how well you can do at low distance, which is still pretty well. Also, as you add more layers, your distance grows exponentially, but so too does the number of qubits. And so it's not very long before you're dealing with tableaus that you can't write down, diagrams that you can't put on the board, and things get really cumbersome. And so we're going to go to even more formulaic code constructions. So instead of beginning with, all right, so we're going to the next section now, and I will do one of these: CSS, the CSS construction for quantum codes. This is another one of the pre-stabilizer constructions that I'm going to do in the stabilizer formalism, just because it's easier. But if you read the original work from the 90s, where the guy takes a classical error-correcting code and puts a cat state on it, it's nuts. Speaking of classical codes, I now need to do classical coding theory in 10 minutes. There's a lot to classical coding theory; we're going to do a very small but very popular subset, which is linear codes over F2. OK, so CSS, but first classical coding theory. A classical code typically has a parity check matrix called H and a generator matrix called G, such that, let's see, in this lecture, H times G transpose will be equal to 0. The rows of G will be in the kernel of H. And, OK, these are all going to be bit strings. So if we were to do the three-bit repetition code, H would have rows 1, 1, 0 and 0, 1, 1. And then if I took the vector 0, 0, 0, obviously that would be in the kernel, because you multiply everything by 0. But more interestingly, and I should mention that the all-zeros string is always a code word of every classical linear code, if I took 1, 1, 1, I would have 1 plus 1, which mod 2, because we're in F2, is 0, and 1 plus 1 is again 0. So this vector is also in the kernel. And those are the code words. So this would be H, and then my G, OK, G would be something like 1, 1, 1, where I think you're allowed to leave the all-zeros string implied; at least that's what I've done in the rest of these notes. There are a lot of more efficient constructions for distance-three classical codes.
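A minimal check of the repetition-code example above, in Python with numpy; the matrices are exactly the ones on the board.

```python
import numpy as np

# Three-bit repetition code: parity checks H and generator G satisfy
# H @ G.T = 0 over F2, i.e. every code word is in the kernel of H.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
G = np.array([[1, 1, 1]])   # the all-zeros code word is left implied

print((H @ G.T) % 2)        # -> [[0], [0]]
```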
I will, well, OK, those in the know already know which one I'm going to pick: the [7,4,3] Hamming code. And here, OK, there's a clever trick being used. If I want to know where an error is, I'm going to multiply this parity check matrix onto the error string. Well, I should mention that if I take some error and I add some code word, and then I multiply by the parity check matrix, so these are each vectors, this is a matrix, and it's all over bits, well, OK, this is going to be H E plus H C, H times the code word. Code words are in the kernel of the matrix, so that second term vanishes, and I'm directly getting the syndrome of the error, no matter what code word I was trying to transmit, which is a property that will turn out to be very convenient later. So let's imagine that I want H E to be distinct for every weight-one E that I could put in. Well, OK, if I take some matrix and I multiply it by a vector like 0, 1, 0, with the 1 in position j, then I'm just going to get out column j. So if I want to make a distance-three code classically, I can just write down a bunch of distinct non-zero columns. And you'll note there are a bunch of distinct columns in the repetition code already. Let me try the same thing, but with three rows instead of two. So 0, 0, 0 doesn't detect anything, so I'll start at 0, 0, 1, then 0, 1, 0, then 0, 1, 1, then 1, 0, 0, then 1, 0, 1, then 1, 1, 0, then 1, 1, 1 as the columns. And there we have [7,4,3], which is another n, k, d tuple, just written with single brackets to denote that it's a classical code. And this is from Richard Hamming. Great. Why am I using this code? Because it has an obscure property also. Let me bring this in accordance with the notes by flipping the matrix front to back. H of the Hamming code, G of the Hamming code. It's got four linearly independent non-zero code words as generators, which means you can fit a total of 16 messages in there. And you can determine, given any seven-bit noisy string, whether it's equivalent to one of these up to a weight-one error, just by multiplying by this matrix. Cool. But these are classical codes. How do we promote them into quantum codes? Kiran showed us how to do it with repetition codes, and it turns out that that generalizes to any classical code. We're going to start with these parity checks. When you multiply them onto some error, there's a question of where the error overlaps with the parity check. And if you start multiplying Paulis together, you're going to notice that whether two Paulis commute or anti-commute reduces to the same question: where does the error overlap with the Pauli that anti-commutes with it? Let's transform the ones in the Hamming parity check matrix into Z's and the zeros into identities. Now we can detect X errors that occur in some arbitrary location, for example IIXIIII, right? You're going to get a syndrome, because it anti-commutes with this check and this check, but not this check, because there's an identity there, right? You're going to get a syndrome that's equal to the classical syndrome of putting a classical bit flip on this column here. You could do the same thing in the X basis. These stabilizers commute. Is it guaranteed that they commute? If I were to take some random matrix of zeros and ones, turn it into both Z's and X's, and then stick them on top of each other, would the stabilizers commute necessarily? Sleeping guy in the fourth row, do you know? Sleeping guy does not know. But the answer is no. In order to get these stabilizers to commute, the code has to be what's called dual containing. You'll notice that the parity checks of this code also appear as code words, right?
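Here is the "syndrome points at the flipped bit" trick as a minimal Python sketch. The column ordering (counting up in binary) is my assumption; the notes may order the columns differently, which just permutes which bit each syndrome points at.

```python
import numpy as np

# [7,4,3] Hamming code: the columns of H are the binary expansions of 1..7,
# so H @ e for a single bit flip e reads off the flipped position in binary.
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

for flip in range(7):
    e = np.zeros(7, dtype=int)
    e[flip] = 1
    print(flip + 1, (H @ e) % 2)   # syndrome == binary expansion of flip + 1
```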
These vectors are not only rows of the parity check matrix, they are also in the kernel of that parity check matrix. Dual containing is a fairly obscure property of classical codes, but we can see that it's related to making the stabilizers commute. There are lots of papers where people construct CSS codes; this, by the way, is how you construct a CSS code. You take two classical codes that are related by the dual of each one being contained in the other, and then you translate one of them to Z stabilizers, translate the other one to X stabilizers, and stick them on top of each other. There are plenty of papers where people construct CSS codes, and they always do some weird, obscure stuff where you can never tell why it's happening. Normally, why it's happening is in order to make the stabilizers commute. And we're going to see some weird, obscure stuff. So first off, let's see what happens when we complete this code. So here's one code word of this code that does not appear in the parity check matrix. Sure enough, it becomes a logical, right? So your logical X bar and your Z bar become XXXXXXX and ZZZZZZZ. This was not as much of a mess as that last thing. And these are also not minimum weight, but they're very symmetric. You can see that if I multiply the X logical by a stabilizer, I wind up with some weight-three X, right? And likewise, there's some weight-three Z that I can cook up. And they anti-commute, right, as logical X and Z of the same qubit must. And this winds up being a [[7,1,3]] code, right? So you're not using as many qubits as the Shor code; that's nice. And it gives you access to, well, I mean, it gives you sort of difficult access to a decades-long literature of classical coding theory. There are lots of tricks you can do, like product constructions, to make the stabilizers commute. We're only going to do one trick, because it's the one I understand as to how to make the stabilizers commute, and that is homological and topological codes. Any time you're using a lot of words that end in -logical, that's how you know you're doing real science. What is the difference between homology and topology? I don't know. All I know is you get good codes. Actually, according to the orthodox definition of what good codes are, these are not even good codes, unless you use hyperbolic space. But we're going to do everything Euclidean. Let's consider an arbitrary graph. I don't mean a graph like a plot of a function; I mean like a diagram you would make of a network, right? So it's got vertices and it's got edges. Here's E1, and there's another one, call it E2, and there are some other edges coming out from every vertex. And we can consider E1 and E2 to be part of a cycle; there are some other edges around here that complete the cycle. The intersection between the neighborhood of this vertex and the set of edges in this cycle is always two: any time you're on a cycle and you stop at one of the corners of the cycle, it has an incoming edge and an outgoing edge. So if I were to put qubits on these edges, and X stabilizers on these vertices, and Z stabilizers on some of the cycles, oh boy, they would commute. Pretty weird, but we will do any weird thing in order to make the stabilizers commute. All right. Perfect. What is the n of this code? We don't know. What is the k? No clue. What's the distance? Oh boy. But I've made the stabilizers commute. You have to add a little bit more mathematical sanity in order to be able to say what the n, k, and d of such a code are.
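And here is the dual-containing condition in matrix form, a minimal sketch using the same Hamming H as above. The point, per the construction described above, is that even overlap between an X-type row and a Z-type row is exactly what makes the corresponding stabilizers commute.

```python
import numpy as np

# Dual containing in matrix language: every row of H is itself a code word,
# equivalently H @ H.T = 0 over F2. For the CSS code built from H (rows as
# Z checks and the same rows as X checks), a Z check and an X check commute
# exactly when they overlap on an even number of qubits -- the same condition.
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in (2, 1, 0)])

print((H @ H.T) % 2)   # all zeros, so the Steane code's stabilizers commute
```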
And here's where I run out of material early and we go into questions. But I'm sure there are going to be questions. Oh, let's erase this garbage. We're going to do another code that has just as many qubits, but we will not have nearly as bad a time writing out stabilizers, because we're using topology. Let us imagine that the graph we select is a square tiling of a torus. There's a bunch of algebra that I should put here, but this diagram more or less captures everything. The torus code is a special type of homological code, and homological codes are a special type of CSS code, so you can see that we're drilling down from the general to the specific. And the torus code is more or less state of the art. There are lots of groups working on the torus code, well, OK, on relatives of the torus code that you can cut and stretch out onto one sheet, so you can put everything on a single chip. But the torus code is close enough for government work. Here's how you draw a torus. That's, you know, if you go to your first postdoc, you learn how to draw a torus. Now, how do we tile it? And then there's going to be extra, like, OK, that's definitely close enough. Our vertices are going to be here. So we're going to wind up with X stabilizers that are weight four, on these edges, right? X, X, X, X. And we're going to wind up with, where have I got a nice square? Here's a nice square. So sometimes you see these called plaquettes or faces or tiles. I will have Z, Z, Z, Z. And sure enough, wherever one of these squares intersects one of the vertices, it always hits two edges. Now, let's see. I think the homological, topological part of it is where you say: not all cycles in my graph are going to be Z stabilizers, only the topologically trivial cycles, the ones that don't loop around the torus, right? You can imagine, if you were trying to pick up a kettlebell and you tried to, like, palm it, it wouldn't work, but if you grab it by the handle, that works, yeah? So if you grab around this, let me do a dotted line through here, right? Loops around here are going to be logical operators. And there are partner loops that I can't draw in 3D, but I do know how to draw in 2D. So I'm going to do one more diagram, and then we're going to call it a day, well, up to questions. Let's draw another torus. This is actually my favorite way to draw a torus. So again, we will have X stabilizers. By the way, this is a Pac-Man-style torus: if Pac-Man walked off this edge here, he would come back on here, and if he went off over here, he would come back on over here. That's what makes it a torus, right? You can imagine these different cut edges are wrapped around so they fuse together. I again have Z stabilizers here. And you can read about this in chapter 19 of Lidar and Brun, just because this explanation is, by necessity, extremely rushed. Let's find our logical operators. So OK, I can't resist doing cool stuff. If I put one X error here, this square stabilizer would get a minus one on it, and so would this one. Now if I put another X here, that cancels this one out, because now this stabilizer overlaps the error on two qubits. But Pac-Man is walking back over. If I put another X here, the defect moves again. And if I put another X here, this moves over here and cancels with this. So my logical operator is given by sort of walking between sites of the torus, right? And I have this length-four, topologically non-trivial loop that goes all the way around one of the handles. Likewise, if I put a Z here, it activates this vertex, unless I put another Z, and another Z, and another Z.
That wraps all the way around. And I can do the same thing on horizontal edges, Z, Z, Z, Z, and X, X, X, X, to wind up with, in this case, a [[16,2,4]] torus code. But in general the qubit count just scales like d squared while the distance is d, because you can stretch the dimensions of the torus, just tile a bigger lattice, and get whatever distance you want: 3, 4, 5, 6, 7, 8, 9, very fine-grained. And the stabilizers are always low weight, so they're easy to measure with circuits. And that's what makes this code a nice, high-threshold option for near-term memory experiments and state-of-the-art stuff like that. (There's a small sketch of this construction written out a little further down, after the next couple of questions.) So given that, I will end a little bit early, and we can take 10 minutes for questions or whatever other topics you are interested in in quantum error correction. The crowd is bummed now, I know. These people are ready for dinner. Oh my, my god, they're going to have to wait a little longer. But before we have dinner, good luck, you're on, let's have some questions. All right, thank you. So how do we entangle these logical qubits? That's a sort of Pandora's box. Yeah, OK. So Kiran is going to talk about it a little bit. Long story short, with CSS codes, there's a transversal CNOT. So if you can run a CNOT between qubit k in block 1 and qubit k in block 2, and you do that for all k, then as long as you've got two blocks of the same CSS code, each of the logical qubits in block 1 gets a CNOT with the corresponding logical qubit in block 2. And that's for block-to-block entanglement. If you want to generate entanglement within a block, you can do that by hook or by crook. So there are some codes where, actually, OK, I should have gotten this paper done before I came here, but we have a paper coming out in a little bit where we have a code on eight qubits that sit on the vertices of a cube, and if you flip the top face, it does a logical CNOT between two of the logical qubits. So that kind of thing happens sometimes. There are also gates that you can do in, well, OK, some people call this fusion-based quantum computing, some people call it lattice surgery, although it doesn't always involve a lattice. But if you can measure, projectively, a joint logical operator, then you can project the system into a subspace. Well, OK, and if you begin with an extra stabilizer that you don't need, you can measure a logical that anti-commutes with that stabilizer. So then you have a different stabilizer, and your logicals change. And you can perform cycles of those operations, always measuring something that anti-commutes with your current stabilizer, and gradually your logicals become different. So you can execute a logical Clifford by doing these repeated anti-commuting measurements. And you can do that within a block as well, provided that your code has reasonably low distance compared to its stabilizer weight. Otherwise, you're going to have to use really ugly circuits that propagate errors in bad ways, and it won't work. But for a distance-four code with weight-eight stabilizers, it ought to be fine. Thank you. And sorry, one more question. So does symmetry play a role in finding these non-trivial stabilizers? Oh, yeah. Any symmetry you can bring to bear will play a role in finding stabilizers. There are people who will dive to the most mathematical depths in search of ways to generate commuting stabilizers that produce a high-distance code. So for example, you might want to fix the fact that the number of logical qubits here is constant: there are always two logical qubits, because there are only two ways you can loop around the holes of a torus.
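(Circling back to the square tiling of the torus for a second: here is a minimal Python sketch of that construction, with qubits on edges, X checks on vertices, and Z checks on faces. This is the plain, unrotated layout, which uses 2L² qubits, so it's a bit bigger than the compact 16-qubit version drawn on the board, but the commutation argument is identical.)

```python
import itertools

def edge(L, r, c, kind):
    # kind 0: horizontal edge leaving vertex (r, c); kind 1: vertical edge
    return (r % L) * L + (c % L) + kind * L * L

def vertex_check(L, r, c):      # X on the four edges meeting vertex (r, c)
    return {edge(L, r, c, 0), edge(L, r, c - 1, 0),
            edge(L, r, c, 1), edge(L, r - 1, c, 1)}

def face_check(L, r, c):        # Z on the four edges bounding face (r, c)
    return {edge(L, r, c, 0), edge(L, r + 1, c, 0),
            edge(L, r, c, 1), edge(L, r, c + 1, 1)}

L = 4
xs = [vertex_check(L, r, c) for r in range(L) for c in range(L)]
zs = [face_check(L, r, c) for r in range(L) for c in range(L)]

# Every X check meets every Z check on an even number of edges (0 or 2),
# the "each corner of a cycle has an incoming and an outgoing edge" argument.
assert all(len(x & z) % 2 == 0 for x, z in itertools.product(xs, zs))

n = 2 * L * L                           # one qubit per edge
k = n - (len(xs) - 1) - (len(zs) - 1)   # one X and one Z check are redundant
print(n, k, L)                          # -> 32 2 4, i.e. [[2*L**2, 2, L]]
```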
In order to create families of codes with what's called constant rate, where the number of logical qubits is in a constant ratio with the number of physical qubits, which is a lot better than having just a constant number of logicals, you can use manifolds that have hyperbolic characteristics. So they're sort of like lettuce, where they get very curly towards the edge, because most of their bulk is near the boundary. And, well, OK, lettuce is not closed up into a torus, but if it were, then you would observe that you need many holes in order to connect all of that boundary to itself. And those many holes produce many logical qubits. Let's see, what were we talking about? Symmetries and how they play a role in finding these different stabilizers. Yeah, let's see. Are hypergraph product codes really symmetric? I don't think so. Yeah, I don't know. But yeah, symmetries also play a huge role in finding logical gates. So if you want to show that there's some logical Clifford or non-Clifford, then more advanced symmetries, beyond just the Pauli group and the Clifford group, can come into it for sure. Thank you. Thank you. Other questions? OK. Oh, you make me run. I was just wondering, is there any proven, I guess, difference in efficiency between CSS codes and other kinds of codes, considering that CSS codes stratify their X stabilizers and their Z stabilizers, compared to something like the five-qubit cyclic code you mentioned? That's a good question. So, especially when you deal with mostly small codes, like we do at the office, you wind up finding that these codes are equivalent to each other under certain transformations. So for example, there's a, well, OK, there's a way to construct the Steane code from a cut-up version of a toric code. And there's a construction of the five-qubit code from the toric code, where, instead of, OK, some people are crazy, and instead of putting the boundaries here and here at 90 degrees to the lattice, they'll make a grid that's sort of like this, and then put boundaries like here. And if you do it right, I haven't done it right here, but you can arrange it so that these four corners each have an edge coming in, and then there's one square in the middle. So it looks kind of like this. And that gives you a code with five qubits that's still topological, and it turns out to have the same stabilizer group as the [[5,1,3]] code. Is this important? Nobody knows. But yeah, should we be using CSS codes or non-CSS codes? Hard to say. CSS codes have, in general, probably a little bit worse rate, but the decoding problem splits up into two smaller problems: you can say where the Z errors are and where the X errors are sort of independently. And the influence of that property on doing actual error correction in practice is not well understood. There are some pros and some cons to it. CSS codes tend to have more logical Cliffords. They have logical CNOTs from block to block that you can do transversally; non-CSS codes don't always have those, so you wind up having to use more awkward and clumsy constructions when you're trying to get a logical CNOT. Self-dual CSS codes, where each Z stabilizer has a partner X stabilizer, also have a transversal Hadamard: if I ran a Hadamard on every qubit, my ZZZZ would be replaced with XXXX and vice versa, but these operators would generate the same group afterwards, and the logicals would also switch, so it would do a logical Hadamard. For a non-CSS code, that's not guaranteed.
There are theorems that you can do the phase gate, the remaining Clifford generator, nobody's favorite Clifford, transversally, as long as your generators have weight 0 mod 4, for CSS codes. So you can get all the Cliffords transversally, and transversal gates are extremely reliable, especially when you're talking about transversal single-qubit gates. Single-qubit gates have huge fidelity, and if you need two or more of them to fail in order to produce a logical error, then when you're writing out a logical computation and you have transversal single-qubit Cliffords, you can more or less ignore them. You can act like their error rate is zero for the purpose of calculating actual failure rates in the device. So those are all arguments in favor of CSS codes. Yeah, and then I guess the argument in favor of non-CSS codes is that you can save two qubits. But you'll notice, all right, [[5,1,3]] beats [[7,1,3]], but [[16,4,4]] beats [[5,1,3]], because I'm on average using four physical qubits per logical, and my distance is one unit higher. So if two errors occur, I'm going to be able to tell. And it's like Donald Rumsfeld's known unknowns and unknown unknowns: a known unknown is a lot better than an unknown unknown. So if a weight-two error happens in a distance-three code, and you think a weight-one error happened, then you correct, and you put in some unknown logical operator, and you've sabotaged the rest of your computation. You might be doing a million more gates, but you've destroyed your state very early on, and it's all wasted effort, and you have no idea. With an even-distance code, say a distance-four code, if two errors happen, then in principle that gives you a distinctive signature. You can say, ah, I don't know what to do. It's like getting an error message, rather than having something you think is correctable that isn't. So you can stop the computation and start again, or at least post-select out those garbage runs from whatever subsequent computation you're trying to do on the output. And that can be used to increase logical fidelity a lot, in ways that are probably not yet appreciated. At least I hope so, because we're going to put out some papers on those. Got it. So to recap, CSS codes are just operationally and computationally a lot easier. Makes sense. Thanks. Yeah, there are also a lot of people who will use codes that are not quite CSS, but are locally Clifford-equivalent to a CSS code. So if you see "Clifford deformation", normally that's being applied to a CSS code, so that the stabilizers are something other than all X's and all Z's. But you still get the same divvying up of error-correction gadgets, and you still get the same logical gates; you just have to undo that weird Clifford at the beginning and then redo it at the end. OK, nice. So thank you. Any more questions? I think that's good now. Thank you very much, Ben, for your lecture.