of things. That was formerly Honeywell Quantum Solutions. So Ben has already set up a lot of the stabilizer machinery; hopefully it didn't scare you off with all the mathematics. I'll be talking about the basics of QEC. We reorganized the schedule a little, so it'll be him, then me, then him, then me again. I'll have a lot of pictures and a bit less mathematics, so hopefully this gives you a little break.

So, quantum error correction. On the classical side they also have to deal with error correction, but for the most part, for cell phones and laptops and processors doing several billion operations per second with an enormous number of transistors, the error rates of transistors are low enough that you mostly don't have to deal with it. To some degree you do for SSDs and DRAM: as the cells get squeezed smaller they become more sensitive, so those do use error correction. But generally you don't have to worry about it for everyday consumer hardware. For data centers, however, with thousands or tens of thousands of processors, it starts to become important, and it's common for servers to use ECC RAM (error-correcting code memory). The main sources of error in classical systems are cosmic rays and particle decay in the chip packaging. I think Los Alamos actually noticed that their high-performance computers had a weird amount of errors, and it turned out that being about a mile above sea level, under a thinner atmosphere, was increasing the error rate. And as we approach exascale high-performance computing (classical computers, not quantum ones), this becomes really important; they have to start worrying about how to deal with errors due to cosmic rays.

As we know, qubits are really fragile, and it's interesting to compare their error rates to their classical counterparts. Back in 1946, the ENIAC, one of the first actually programmable electronic computers, had, if you work it out from the sources below, an error rate of about one fault per 10^14 operations: effectively a vacuum tube burned out every day or every other day. By 2008, which is the most recent year I could find data for, transistors were at about one fault per 10^28 operations, so an error rate of 10^-28, as we typically call it when we put the operations in the denominator. That's pretty impressive, and it's why your cell phones and laptops mostly don't need error correction. Sometimes you get a blue screen of death, and that can actually be from a cosmic ray. Not always, especially on Windows. Quantum technology nowadays, though, is at roughly one fault per 10^3 operations, one fault per 1,000 operations. That's significantly worse than vacuum tubes, and that's kind of sad.
Although, luckily, these aren't directly comparable, because we believe certain quantum algorithms have an exponential speedup over their classical counterparts, so one machine is far more powerful for certain problems. But we're currently living in the NISQ era, as people like to say: noisy intermediate-scale quantum computing. Eventually we want to tackle problems that are currently intractable. For example, breaking RSA using Shor's algorithm might need roughly 10^12 CNOTs, which means you need an error rate below roughly 10^-12. Those are really small numbers, and they're hard to reach in hardware; there are real limits, both from the precision of the control you can have over the system and from the physics itself. So you need something else, and the main consensus is that we need quantum error correction.

As mentioned before, quantum error correction is all about spreading information across multiple qubits (or qudits, if you get fancy) and using that to suppress noise. You might think of future large-scale devices as being like the Saturn V rocket: the majority of the machine is just there to get you to the moon. The interesting science is over in the return vehicle or the lunar module, but most of the machine exists just to get you there. That's what quantum error correction will be for large-scale devices: a large fraction of the machine is just doing quantum error correction, doing fancy identities, as Ben mentioned. So imagine the user supplies some algorithm to our quantum device, which might be a hybrid system, maybe a co-processor for a high-performance computer. The algorithm gets sent down, and quantum error correction does its magic under the hood. There's a variety of machinery involved that you might not be familiar with yet: magic state factories, Clifford and Pauli operations at the logical level like Ben mentioned, logical memory, and so forth. But 99% of it is quantum error correction.

So quantum error correction, as I mentioned, is about encoding information over multiple qubits. Once you've encoded, you're using the physical qubits as a substrate for another system built on top of them. And because you're building one system on top of another, you can play a game where you try to detect faults and mitigate them faster than they can corrupt your higher-level, logical system. That's the main idea of quantum error correction; it's effectively fancy refrigeration where we fight errors. And now the screen went black, and that's fun. And now it's back. So we experienced an error; luckily the system recovered.

All right, some basic nomenclature. You've heard logical operations mentioned by Catherine and others, but in quantum error correction we tend to use that word for the encoded system; we call it the logical system. The actual qubits and the gates that act on them are the physical qubits and physical gates, and the logical qubits and logical gates are the things we care about in quantum error correction.
Also, technically, faults live at the lower abstraction level, the physical system, while errors are what the user sees at the logical level. Even in the quantum error correction community those words get used somewhat interchangeably, but strictly speaking a fault can cause an error, though it doesn't have to; it could be a benign fault. So in the example here, if your game is freaking out, the user is seeing bad behavior at the higher, logical level of the system.

All right, some of you might be familiar with the basics of quantum error correction and some of you might not, so I'm going to do the boring thing and go through the repetition code. It teaches you a lot about how quantum error correction works and it's very simple, so it's a good example. The repetition code is really a classical code that we can apply to our quantum systems. At the classical level you only have to worry about bit flips, changes between zero and one caused by, for example, cosmic rays. You often picture your bits going through some error channel that flips each bit with some probability p and corrupts your message. And as has been mentioned often (and redundantly) already, the name of the game in both classical and quantum error correction is redundancy. A very naive and simple way to protect your information is just to repeat your message. This is called encoding: we take our original message and encode it into a larger system. On the classical side, for example, we take our zeros and copy them three times, and our ones and copy them three times. It's fairly obvious why this helps prevent bit-flip faults from becoming errors. Take our zero and copy it three times: 000. That's the encoding process. Some fault happens, say a bit flip on the first bit. Then we decode, the opposite of encoding: decoding tries to extract the message we originally intended to send. For the repetition code we mostly use majority vote: what's the majority of this three-bit message? It's zero. So you're protected against any single bit-flip fault; a bit flip on any one of the bits still gives you back the original message, and that's great. However, if two bit flips happen, that's not so great: the majority vote then returns the opposite message. So in the first example, 000 goes to 110, and the majority vote says the outcome should be one, which is wrong. We're not completely protected; we're only protected up to a certain number of faults. But effectively we've taken a channel with error probability p and turned it into one with error probability of order p squared, so we're suppressing the noise with this bit-flip code. And if that's not enough and the error rates are high, we can just make the encoding longer, say five bits, which means we can suffer two faults and still recover the message, because the three other bits are the majority and their vote outweighs the two dissenting bit flips, and so on.
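None of this is from the talk, but here is a minimal Python sketch of the idea just described, assuming independent bit flips with probability p (the function names are mine). It checks by simulation that three-bit majority voting suppresses p = 0.1 to roughly p squared, and evaluates the exact failure probability for longer repetitions.

```python
import random
from math import comb

def encode(bit, n=3):
    """Repetition encoding: copy the bit n times."""
    return [bit] * n

def channel(bits, p):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def majority(bits):
    """Majority-vote decoding."""
    return int(sum(bits) > len(bits) / 2)

def logical_error_rate(p, n):
    """Exact probability that majority vote fails: more than half the bits flip."""
    t = (n - 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

p = 0.1
fails = sum(majority(channel(encode(0), p)) != 0 for _ in range(100_000))
print(fails / 1e5)                                   # Monte Carlo, n = 3: about 0.028
print([logical_error_rate(p, n) for n in (3, 5, 7)])
# approximately [0.028, 0.0086, 0.0027]: longer repetition keeps suppressing the noise
```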
In general, the number of bit flips needed to change one codeword into another is known as the distance, and that concept is used in quantum error correction as well; it's an important one we'll use later. The distance is just the number of single-bit (or single-qubit) operations that take you from one codeword, say 000...0, to the other, 111...1. The number of errors you can tolerate is t, which is just under half the distance: t = floor((d - 1)/2). Applying that formula to distances three, five, and seven gives t = 1, 2, and 3, matching the previous examples. For the repetition code the distance equals the number of bits, but that's not true of codes in general, even other classical codes. And if we kept scaling this scheme out toward infinity, which is not something you'd actually do, we could suffer up to (just under) half of our bits flipping and still recover: as long as slightly more than half didn't flip, the majority wins and we get the message back. This is basically what's known as a threshold: how does a family of codes perform as you increase the parameter d toward infinity? As long as the physical error rate is below that threshold, there's some large code in the family that can suppress the noise arbitrarily well.

Okay. In quantum computing, we often basically steal ideas from classical computing and classical information theory, and that's what we're going to do now. (I didn't do the "Q-bit" thing in the title, by the way; that was a colleague of mine, and they wanted me to keep it in there.) Quantum is a bit different from classical because of quantum mechanics, and that holds us back in what we can do; it's actually kind of amazing that we can still do quantum error correction given all the limitations quantum mechanics puts on us. For example, we don't just have bit-flip or burst errors; we have X, Y, and Z errors, and in fact any unitary could potentially be applied, the coherent errors that were mentioned previously. Your measurements can simply be bad, initialization can have errors, you can have crosstalk, where errors on multiple qubits are correlated, and you can have leakage, which takes you outside the qubit subspace, and so on. But for the most part we only have to deal with Pauli errors, and we'll talk later about why that is. There are many reasons quantum computing is so noisy: the environment, the finite precision of your control, and qubits talking to each other when they shouldn't. Quantum computing is just inherently noisy; if it weren't, we'd see quantum effects all around us. So we really need to isolate these systems in order to make use of the weird nature of quantum mechanics. We also have to deal with the annoying thing called the no-cloning theorem.
That means that for a general state you can't create a perfect copy. If you've taken quantum computing classes, you probably heard about this pretty early on. So we can't just copy the state like we did with the bits, where 0 became 000 and 1 became 111; we have to do something smarter and different. We also have to deal with measurement collapse: if we measure an observable, it projects us onto something, like Ben talked about, effectively collapsing us into a definite state for that observable. That's not good if we're doing a general computation; we want to keep the quantum evolution going, so we can't just collapse it. We need ways around that, for example measuring operators that don't collapse the state or learn anything about the logical information. And then, annoyingly, it turns out there are no-go theorems saying that for a given code you don't get a universal gate set. Quantum error correction is all about discretizing things, discretizing the noise, and because of that you have to discretize your gates, so you don't get a continuous set of gates that is universal. There are ways around it; it just means you have to find clever ways around these no-go theorems, and we'll maybe touch on those briefly. Also, because everything is so noisy, the environment, the qubits, the interactions, you have to do all of this in an encoded fashion. You can't repeatedly decode from the logical qubit back down to bare qubits, do your gates there, and encode back up, because then you have a giant hole in your whole scheme: that's where the noise happens, and it gets mapped straight back into your logical space. So you have to do everything in an encoded manner.

Ben already mentioned using Cliffords to go from one code to another, using these encoding circuits. This is the way around the no-cloning theorem: we're not going to copy the state, we're going to coherently evolve it so the information spreads across other qubits. For example, we start with our qubit in some complex combination of zero and one, and then we use this CNOT and this other CNOT to spread the information. For the CNOT we have a set of rules: you can think in terms of how it transforms the Paulis, which were written down previously, or you can think in terms of these basis states. In quantum error correction we tend to think more in the Heisenberg picture, with the Pauli operators and whatnot. But we can see that if the control qubit is in a one, it flips the target qubit. Given that these other qubits start out in zero, the CNOTs only flip them for the component of the initial state that's one, so you get out this entangled state. If instead you were to copy, you could do something bad: call the state psi and take the tensor product of three copies.
And that (squeaky chalk) would be sort of like doing the repetition code by copying. But, oh, I went backwards. We're not doing that; we're enlarging the basis states. So now, using the bars again like Ben did, our logical zero is |000⟩ and our logical one is |111⟩, and we're coherently entangled across these bigger states. The information is no longer stored in an individual qubit; it's spread across these three qubits.

All right, we'd also like to do something with these states, not just protect them, so we need to think about the logical operators. A simple set of logical operators for these stabilizer states is the logical X, Y, and Z. Logical X basically acts like a bit flip (okay, I did my animations in a different order) and logical Z adds a phase to our state. So how do we get an effective logical X for our logical qubit? We just apply X three times: it takes our logical zero, |000⟩, to our logical one, |111⟩. So that's our logical operator. Logical qubits are just like any other qubits; they're just qubits encoded on top of other qubits, so these behave exactly like the ordinary Pauli X, just at the logical level. And for Z: if we apply three Z's, like we applied three X's, we get a phase change, because each Z gives a minus one and minus one times itself three times is still minus one; or we can just apply Z once and it also changes the phase. So either the weight-three operator (where weight means the number of non-identity factors) or the weight-one operator is an equivalent logical Z. I'll share these slides, so you don't have to rush madly to write this down. Unless you want to. And just as Y = iXZ, logical Y equals i times logical X times logical Z, so you get something like this, for example using the weight-three versions of logical X and logical Z.

Okay, so how do we avoid measurement collapse? I already hinted at it: we measure observables that commute with the logical operators. If they commute, we can measure these things without disturbing each other. So if we measure the observable ZZI or IZZ, both of them commute with the logical operators. Obviously they commute with the logical Z operator, because they're made of Z's and identities; and with the logical X operator they anti-commute twice, which is the same thing as commuting. (People understand what I mean by commuting and whatnot? I see a lot of heads nodding. Good.) And we can also check that if we measure logical zero or logical one, we get a +1 for both of these observables, and the same for any superposition of the logical states. So whenever we run the encoding circuit, we're preparing a +1 eigenstate of these observables, the stabilizer generators, like Ben mentioned. They're sometimes called parity measurements because an operator like ZZ effectively measures the parity of the two bits it acts on.
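As a concrete check of the claims just made (not from the slides; the operator names are mine), here is a small numpy sketch verifying that ZZI and IZZ commute with the logical operators and stabilize any encoded state.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    """Tensor product of single-qubit operators, left to right."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

ZZI, IZZ = kron(Z, Z, I), kron(I, Z, Z)
logical_X, logical_Z = kron(X, X, X), kron(Z, Z, Z)

# The stabilizers commute with the logical operators...
for S in (ZZI, IZZ):
    for L in (logical_X, logical_Z):
        assert np.allclose(S @ L, L @ S)

# ...and any encoded state a|000> + b|111> is a +1 eigenstate of both.
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0], psi[7] = a, b            # amplitudes on |000> and |111>
for S in (ZZI, IZZ):
    assert np.allclose(S @ psi, psi)

# Logical X flips the encoded state: a|000> + b|111> -> b|000> + a|111>.
print(logical_X @ psi)
```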
So you can see that for this general encoded state we'll also end up measuring +1, so it won't project us; the state remains and the information in it is undisturbed. You can either think in terms of whether operators commute or anti-commute, or in terms of whether measuring these observables would project the state onto something else, and here it doesn't. So this state is fine.

Ben already showed a circuit kind of like this. I won't go too deeply into these circuits; in my second half I'll basically go through this in a different way. But just to give you an idea of how you might measure these observables: what you often do (and this is not necessarily the only way) is include an additional ancilla and entangle it using CNOTs or some other operation. So this is the circuitry you might use to measure the observable ZZ, and we'll explain in more detail later why that works. And here's more circuitry; let me quickly go to a more abstract version of it. Here we see the encoding circuit at the very beginning again, producing our entangled state α|000⟩ + β|111⟩, and then we use this circuitry to measure the observable ZZI, and this circuit to measure the other observable. We introduce these two ancillas, or we could reuse one ancilla if we can reset it. Now, if an X error happens: as mentioned, with a CNOT, if an X occurs on the control, it propagates forward and branches, so you get X on both qubits. If it hits the target instead, the bottom wire, then since this is a controlled-X (it either applies X or it doesn't), the X doesn't get disturbed and just commutes through. Okay, so using this, we can see that if this X error happens, it propagates through the circuitry like this and hits that bottom measurement. And whenever an X arrives at a measurement in the Z basis, it effectively flips the outcome. So you went from what should have been a (+, +) outcome to a (-, +) outcome. We don't measure the state directly, but we measure observables that help us infer what might have happened. There could be other error combinations that lead to the same change in these measurements. These measurement outcomes are sometimes called syndromes: it's like you have a disease, and the syndrome is the set of symptoms the patient is experiencing; they help us infer what's going on.

And this is exactly the same picture, but with everything abstracted away, ignoring the circuits. We see that if an X happens, it anti-commutes with this observable. Ben already went through how the stabilizer gets updated as USU†; applying that with X to this ZZ operator, it anti-commutes, so it just flips the sign of the measurement, from +1 to -1. And this is just showing that if you had two X's, it would anti-commute twice, so you'd still get +1. But anyway, if this X happens, it anti-commutes with this observable, flips the outcome, and we get (-1, +1). So I'm just showing two different ways you might reason about this sort of thing; it's probably easier to think in terms of the diagrams rather than worrying about how you might implement it.
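The propagation rule just described can be checked directly. Here is a quick numpy verification (my own sketch, not the talk's circuit) that an X on the control of a CNOT branches into X on both qubits, while an X on the target commutes straight through; since CNOT is real and self-inverse, its dagger is itself.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])   # control = first qubit, target = second

# X on the control "branches": CNOT (X ⊗ I) CNOT† = X ⊗ X
assert np.allclose(CNOT @ np.kron(X, I) @ CNOT.T, np.kron(X, X))

# X on the target passes straight through: CNOT (I ⊗ X) CNOT† = I ⊗ X
assert np.allclose(CNOT @ np.kron(I, X) @ CNOT.T, np.kron(I, X))
```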
Anyway, if an X happens in the middle, it anti-commutes with both observables, so they'll both be -1. That kind of pinpoints the most likely error: it's most likely that a single weight-one error happened, rather than some weight-two or weight-three thing that triggers the same pattern. And likewise, as the mirror image of the previous example, an X over here anti-commutes with the other one, so you get (+1, -1). And from this set of results, given that the most probable thing is that single errors happened (or single faults, see, I'm already switching the words around), you can build up a table: given the parity results, this is the most likely correction you should apply. You don't necessarily know what the environment actually threw at you; it could have been a weight-two error giving the same results, but this is the most likely thing that deals with the faults. That's another thing about quantum computing: everything is probabilistic, and that includes quantum error correction. You only ever have some probability of fixing things; you can never know for sure, because you don't know what the environment did.

This slide is just showing an example where two X's are applied: anti-commuting twice with that observable means it commutes overall, but it anti-commutes with the left one. Oh, and sorry, in the table I've switched to bits, with 0 for a +1 outcome and 1 for a -1 outcome; that's just a different way of encoding the same information. Anyway, if you look up this (-1, +1), equivalently (1, 0), in the table, it says to correct the qubit on the left-hand side. If you do that, oops, you've just applied a logical operator, so it flips the logical outcome. That's not great. But that's the problem: you don't know exactly what the environment does. Sometimes the environment does the less probable thing and you end up corrupting the system, and there's no way to tell the difference.

And then of course, if a single Z fault happens, it adds a phase. For this code it's a logical operator, so everything commutes with it and you don't even get a sign that something might have gone wrong. That's not great. This code is a classical code, so it can only deal with bit flips (or you could do a version that only deals with phase flips). For quantum codes you need to deal with both phase and bit flips, and this code does not; it's still just the same as its classical analog. And I have plenty of time.

Then, if we want to measure out the system: we've been doing everything in a way that's non-destructive to the qubits, because we want to avoid projecting to a classical state and keep computing with the information. But at the end, if you want to do the logical version of a measurement that's destructive, you can just measure the qubits in the Z basis, which effectively collapses you to the classical version of the repetition code. I mean, this was already a classical code, just living in a quantum system, but now it's actual classical zeros and ones. You can still use the same sort of lookup table for the classical version, this time by taking the parity of those bits.
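As a concrete version of the syndrome table described above, here is a hypothetical Python sketch for the three-qubit bit-flip code. The syndrome ordering (ZZI first, then IZZ) and the string representation of errors are my own conventions for illustration, not necessarily the slides'.

```python
# Syndrome bit = 1 if the error anti-commutes with that stabilizer, else 0.
stabilizers = ["ZZI", "IZZ"]

def syndrome(error):
    """error is a 3-character Pauli string like 'XII' (an X on the first qubit)."""
    bits = []
    for stab in stabilizers:
        # A single-qubit X anti-commutes with a stabilizer wherever that
        # stabilizer has a Z on the same qubit; count the overlaps mod 2.
        overlap = sum(e == "X" and s == "Z" for e, s in zip(error, stab))
        bits.append(overlap % 2)
    return tuple(bits)

# Lookup table: the most likely (lowest-weight) error for each syndrome.
table = {syndrome(err): err for err in ("XII", "IXI", "IIX")}
table[(0, 0)] = "III"          # trivial syndrome -> apply no correction

print(table)
# {(1, 0): 'XII', (1, 1): 'IXI', (0, 1): 'IIX', (0, 0): 'III'}
```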
Taking the parity of bits is the same thing as taking the XOR of bits. So with inputs A and B, the XOR output basically gives you the parity: if both inputs are the same, you get a zero; if the inputs are different, you get a one. Odd parity gives one, even parity gives zero. So you can take the parity of these pairs of bits, the ones associated with those qubits, and use the same lookup table, where X now just means flipping zero to one or one to zero, and you arrive at the outcome. Here's an example: we take the measurement, we deduce the parity bits by XORing the first two bits and then the last two, and then for the corrected measurement we look up in the table what to flip. Here nothing happens; here we get (1, 0), which says flip the first bit, and the measurement was 100, so flipping the first bit gives 000. Then you can do the majority vote if you want, or you can XOR the entire string and get zero, and so on. These rules turn out to be exactly equivalent to majority voting, just encoded in a different way, so you could have used this instead. Oh, and there's the table for the XOR. Great.

Okay, plenty of time, so now the Shor code. This will really be the last thing I cover; it's all been a pretty basic introduction to how quantum error correction works, as the title said. So we need to deal with X, Z, and Y noise. Well, Y is a combination of X and Z, so if we can deal with both X and Z, we can deal with Y: X and Z generate Y, if you think in terms of the Pauli stuff Ben was talking about; they're a generating set for the single-qubit Pauli group. If we only wanted to deal with Z faults, we could just use the phase-flip version, where we take the encoding circuit of the bit-flip repetition code and slap Hadamards on, because Hadamards take you from the Z basis to the X basis. The observables you'd measure would then be the X-type ones, XXI and IXX, and that allows us to deal with the Z phase flips. However, bit flips (X's) commute with those observables, so we can't detect them; we're vulnerable to those errors, so obviously that's not the solution either. Similarly, we could do the Y version where we measure Y observables, but then we'd be vulnerable to Y errors, though not to X and Z. I'm not going to go into this deeply; I think Ben might dabble with it in his next talk, and there's more explanation in both the QEC textbook edited by Lidar and Brun and in Nielsen and Chuang. But effectively, any unitary can be thought of as a complex linear combination of the Pauli operators: just as states are complex combinations of basis states, the Pauli operators are effectively a basis for operators. So, just as dealing with X and Z lets you deal with Y, if you can deal with X, Z, and Y, then, roughly speaking, you can handle all the different histories of possible errors and therefore arbitrary unitary errors. It's a bit hand-wavy, but there's more of a proof in those books.
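To make the "Paulis form a basis for operators" point concrete, here is a small numpy sketch (not from the talk; the particular rotation is an arbitrary choice of mine) that expands a single-qubit unitary in the Pauli basis using the coefficients c_P = tr(P† U)/2 and reconstructs it exactly.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

# An arbitrary single-qubit unitary: a rotation by theta about an arbitrary axis n.
theta = 0.7
n = np.array([0.3, 0.5, np.sqrt(1 - 0.3**2 - 0.5**2)])   # unit vector
U = (np.cos(theta / 2) * I
     - 1j * np.sin(theta / 2) * (n[0] * X + n[1] * Y + n[2] * Z))

# Expand U in the Pauli basis: c_P = tr(P† U) / 2, then rebuild U from the pieces.
coeffs = {name: np.trace(P.conj().T @ U) / 2 for name, P in paulis.items()}
reconstructed = sum(c * paulis[name] for name, c in coeffs.items())
assert np.allclose(reconstructed, U)
print({name: np.round(c, 3) for name, c in coeffs.items()})
```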
I don't really feel like going through all the math. Okay, so how do we deal with both X and Z? Shor, who's famous for Shor's algorithm, came up with the Shor code in 1995. I think Steane came up with the Steane code around the same time; I think it was accepted to a journal a little later, in 1996, but they were all dabbling in the same sort of idea. People had kind of poo-pooed quantum computing back in the day because of its inherent noisiness, saying it would never work, and then the field of quantum error correction was slowly worked out and found ways of actually doing it, which dealt with those naysayers. So, the Shor code: we've already dealt with the repetition code, so luckily I don't have to explain much, because it turns out the Shor code is just the combination of the bit-flip code and the phase-flip code. Ta-da! As I mentioned before, to get the phase-flip code, where you measure the X observables instead, you just slap Hadamards on the end; so we have the encoding circuit for the phase-flip code right here. And then, for each qubit of that original encoding circuit, we apply the bit-flip code: we take those individual qubits and encode each of them into the bit-flip code. To see that better: here are our original logical one and logical zero states, and the logical plus version is this state, because in general Hadamard on zero gives the plus state, (|0⟩ + |1⟩)/√2, which you've seen a bunch of times. So this is just taking that expression and substituting |000⟩ for zero and |111⟩ for one; that's the logical version of the plus state and the logical version of the minus state. So if we started out right here, this is just the normal bit-flip encoding circuit if we ignore the Hadamards: encoding zero takes it from zero to 000, and then applying the Hadamards gives plus, plus, plus. There's the definition, the red box there. And then applying these encoding circuits converts each of those pluses over to this state, so three copies of the state we see down here. So it's a combination of this thing here and this thing here. This is known as a concatenated code; we're concatenating two versions of the repetition code together, and I believe Ben will talk a bit more about concatenation. It's kind of funny: in the Quantum Error Correction book, when this is discussed, Dave Bacon, a famous QEC person, has this quote; I guess he found it really beautiful, which is funny to see in a textbook. Anyway. So here are the logical operators and the stabilizer generators. It's not really important, but you can see that it looks like the repetition code: we were measuring these sorts of observables for the bit-flip repetition code, so it kind of looks like a bunch of bit-flip codes here, with some X stuff down here. And then we can see what happens when errors hit individual qubits.
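Before walking through the table, here is a small sketch, not from the slides, using one standard choice of the eight Shor-code stabilizer generators (the particular generator set and qubit ordering are my assumptions). It checks that every single-qubit Pauli error anti-commutes with at least one generator, i.e. is detected.

```python
# Stabilizer generators of the nine-qubit Shor code (one standard convention).
stabilizers = [
    "ZZIIIIIII", "IZZIIIIII",     # bit-flip checks, block 1
    "IIIZZIIII", "IIIIZZIII",     # bit-flip checks, block 2
    "IIIIIIZZI", "IIIIIIIZZ",     # bit-flip checks, block 3
    "XXXXXXIII", "IIIXXXXXX",     # phase-flip checks between blocks
]

def anticommutes(p, q):
    """Two n-qubit Pauli strings anti-commute iff they differ (and are both
    non-identity) on an odd number of qubits."""
    clashes = sum(a != "I" and b != "I" and a != b for a, b in zip(p, q))
    return clashes % 2 == 1

def syndrome(error):
    return tuple(int(anticommutes(error, s)) for s in stabilizers)

# Every weight-one Pauli error produces a nonzero syndrome, so it is detected.
for q in range(9):
    for pauli in "XZY":
        err = "I" * q + pauli + "I" * (8 - q)
        assert any(syndrome(err)), err

print(syndrome("XIIIIIIII"))   # X on the first qubit trips the first Z-type check
print(syndrome("ZIIIIIIII"))   # Z on the first qubit trips the first X-type check
```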
So these columns represent qubits and the rows represent the individual stabilizer generators. If we apply an X to the first qubit, we can see that this stabilizer generator that we'd measure for the Shor code anti-commutes with it, so it's able to detect that an error has happened. Likewise for Z: it anti-commutes with this other stabilizer generator. A Y anti-commutes with both of them, and so on. You can go through all the combinations and see that each one sets off a different stabilizer generator or a different combination of them, so you're able to detect and uniquely identify the weight-one errors, and therefore correct them. So it's a distance-three code: it can correct any single fault. A distance-five code could correct two faults, and so on.

Anyway, luckily I had plenty of time for this talk; it came out shorter than the hour it took when I practiced it, so we can probably catch up to the schedule. Some other things you might consider: constructing the decoder for the Shor code, which isn't too difficult if you look over how to do it for the repetition code, since the same sort of ideas apply. We also need to not just measure these observables and come up with corrections, but do some encoded operations; you might think about how that happens, and we might discuss that in later talks. How do we deal with faulty measurements? If there's some probability of the measurement itself going bad, what does that look like, and do you need to change how you measure things and how you approach this? And so far we've only been talking about noise faults injected before the parity measurements we've been making, these observables we've been measuring. We can see the code deals with those, but what happens with noisy operations, when the faults are injected inside that circuit? We'll talk a bit about that as well. So stay tuned for Ben next hour. Are there any questions? Okay, let's take a break for a few minutes then.