We have a series of lectures about quantum error correction, and much like in quantum error correction, where the information is cleverly distributed amongst several qubits, the information in this lecture is distributed amongst two different lecturers. So we have Ben Krieger and Kiran Ryan Anderson, and they'll be tag-teaming, giving one lecture after the other. I think we'll kick off with Ben here. Thank you very much. The company also doesn't let us travel on the same plane. Well, okay, so whenever I'm lecturing, I'm reminded of being in kindergarten and talking to kids who are in kindergarten. No matter what you say to these people, they stick their hands up and go: why? Why are we learning this? And the job of the government education system is to beat that impulse out of the children, so they never ask why they're learning; they just learn whatever it is you tell them to learn and then regurgitate that information at the end of the semester. But luckily your experience, I hope, in government education institutions is now complete, so we can resume asking why. And we can do it right at the beginning of the lecture, so that you'll have an idea of where we're going and what we're trying to achieve. There's a double-edged sword to quantum computing. These devices have exponentially large Hilbert spaces inside of them, which makes them really powerful. There's all kinds of stuff you can do in that space, as long as you can design some clever measurement at the end or turn your problem into a wave and apply a Fourier transform to it. But that exponentially large Hilbert space also makes them really tough to analyze. Good luck trying even to find out what the effect of some unitary matrix is on a vector when you're dealing with 200 qubits or something. Even with arbitrarily large supercomputers, it's going to be really tough.
And if you can't figure out what one gate does, good luck designing an algorithm, good luck understanding data structures or any of the higher-level abstract things that we want to do to create a real branch of computer science. However, not everything that you want to do with a quantum computer requires its full power. In error correction, me and this guy, we are spending our entire lives trying to figure out how to get a quantum computer to do nothing. And it's very easy to describe that operation — you can do it with one symbol, right? And this thing is worth a lot of money, because quantum computers can't do it. Well, they also can't do this. So you want to prepare a zero state, you want to do nothing, you want to maybe measure in the X basis — none of these things are possible with fidelity one. You always get some operation that's kind of close to the X measurement or the zero prep or the identity. Given that the things we want to describe are all really trivial until we compose them into big circuits, and we're going to have to add a lot of overhead, how do we come up with some sub-theory of quantum mechanics that allows us to process non-trivial states — highly entangled states, for example — in polynomial time? Today we're going to go over one way to get that done. Some people call it Gottesman-Knill after some of its authors; people who know about stabilizers would say stabilizer operations. Most of the time I would just say Paulis and Cliffords. Although, given that this is professional rather than academic, we don't settle on terminology. We just sort of juggle different terms around, and if anybody gets confused, we leave them behind for the wolves. All right, so why Paulis and Cliffords? There are many different efficient sub-theories of quantum mechanics that you could focus on. Some people do tensor network states, where there's a limited amount of entanglement.
That's very good for condensed matter, where things can be at weird angles in different bases, but the amount of entanglement is area-law limited. There's also fermionic linear optics, or matchgates. If you want to get a postdoc, get good at one of these sub-theories. You can even design error-correcting codes in these sub-theories. But why would we do it with Paulis and Cliffords instead? There are a few reasons. The first is versatility. A lot of the problems you see in quantum computing, like finding the ground-state energy of some Hamiltonian — you look up the Hamiltonian and you see it's a bunch of Pauli matrices. Some people in fault-tolerant quantum chemistry focus on a technique called qubitization, where they take some problem and turn it into a bunch of Pauli matrices. Given that they're doing all that work, maybe we can live nice, easy, lazy lives by just focusing on Pauli matrices. And of course, anytime you see fermionic creation and annihilation operators, people say: oh, do the Jordan-Wigner transform. Now it's easy, right? It turns them into Pauli operators. And then there are quantum errors and error correction. We'll see in the next lecture that if you can correct Pauli errors, then you can correct a lot of stuff. And nobody believes that they're going to get their physical error rates down below 10^-5 without some kind of protection. Even the topological qubit people are edging toward error correction — they're doing everything but error correction — and I think they think they would get down to 10^-6-ish. But if you want to run an algorithm with billions of gates in it, then you can't tolerate thousands of failures distributed somewhere in the circuit, because garbage in, garbage out, and everything's going to become a gigantic analog mess. For this reason, we design codes using Paulis and Cliffords in order to do a bunch of active error correction in firmware, basically.
So it's a layer of the quantum stack — a term which I abhor — that sits just on top of the hardware, so that what you provide to the user or to the higher-level routine are virtual operations. Right. Pauli operators can be used to detect Pauli errors — we're going to see this in a minute. And there are these Clifford things that can be used to transform Paulis into other Paulis. And the machinery for moving errors onto ancillary apparatus that can detect them is all very neat and tidy. So yeah, ease of use is basically one of the main reasons for this. And all we're going to need to start is a little undergraduate quantum mechanics, or maybe first-year master's, and a little group theory, and then we can get going, really. Okay, so let's do quantum mechanics in like 15 minutes. Chapter two, Nielsen and Chuang. If you're not reading Nielsen and Chuang, you're doing it Nielsen and wrong. All right, so I should have started my lecture with a joke, right? Variational quantum eigensolvers. Okay, that was a good one. All right, pure beef. We can represent quantum states using vectors. This is a postulate of quantum mechanics, or an axiom, which is a fancy word for something I will say without knowing how to prove. And that looks like some normalized two-component vector, no big deal. There are operations as well, which are usually unitary — inshallah. You could write down some concrete two-by-two unitary and apply it to such a state, and it only really gets complicated when you start dealing with measurements. Okay, so let's deal with measurements now. These are going to be operators — observables, call one O — which we will decompose right there. I think everybody's familiar: these are Hermitian matrices, they have real eigenvalues, you can diagonalize them.
Anybody who's not familiar with this should pull the parachute right now and get sucked out of the lecture, because you might get lost later. Although it's okay — you're young, you can sit around and be lost for an hour, it's fine. You would decompose one of these into a sum over j of a real eigenvalue λ_j times a projector Π_j. And when you perform a measurement, here is what happens according to the postulates of quantum mechanics. You receive an outcome λ_j. Sometimes multiple λ_j's are identical, and then you won't know which j corresponds to which λ_j — so you can project into a subspace rather than onto a state. But you get out these λ_j's, and you get one with probability p_j = ⟨ψ|Π_j|ψ⟩, where |ψ⟩ is the initial state you're measuring. So basically this is like an inner product between the rank-one projector onto |ψ⟩ and this projector here. You can think of it as: how much of this state is in this space? Because it defines a norm. And then, under the action of this projective measurement, when you discover that the outcome was λ_j, |ψ⟩ gets mapped to Π_j|ψ⟩/√p_j — just the projection of the state into that space, divided by the square root of p_j, and all that denominator does is normalize the state again. So this is like a conditional state. If you were assured that the measurement had occurred but not told the outcome, you would get a mixture of these things — a convex combination weighted by the p_j's. All right. We are not going to use the Heisenberg picture, but I have to tell you about the Heisenberg picture anyway. All will be revealed, no worries. Let's imagine a really complicated experiment in which some state is prepared, it's then exposed to some unitary evolution, and then we measure some observable. Can everybody read my handwriting, by the way? Even people in the back are nodding, okay, we're good. Right, so what's going to happen?
The only things that we observe are these eigenvalues that come out at the end, these little λ_j's. And they each show up with a probability p_j, where I have to stick in the evolved state now: conjugating with the evolved state, p_j = ⟨ψ|U† Π_j U|ψ⟩. And then the projection takes U|ψ⟩ to Π_j U|ψ⟩/√p_j. And I can just rub this thing out and stick the dagger on there, and now we see what happens to the initial state: it goes to (U† Π_j U)|ψ⟩, up to normalization. So two things could have happened. It may be that |ψ⟩ evolved to U|ψ⟩ and then got measured, or it may be that the projectors — and therefore the observable itself — evolved backwards, under U† Π_j U, and then were measured against |ψ⟩. And because the outcomes of physical experiments that obey the laws of physics don't distinguish between these two cases, they're equivalent. So you can use operator evolution to say what has happened in some physical scenario, rather than having to evolve vectors. We're going to see an instance of this again, although it's not quite the same as the Heisenberg picture — although the paper this comes from is called "quantum computing in the Heisenberg picture" or something. At any rate, we're going to start evolving operators in a minute. Apologies to the master's in HPC program. You should all go get a master's in HPC, but I need blackboard space, so it's going down. HPC — criminally underrated topic, though. So now that we've covered quantum mechanics in 15 minutes, let's do like 10 minutes of group theory. This is what it's like being in quantum computing. You wake up and come to work not knowing whether it's going to be algebraic number theory, atomic physics, this stuff, and you just take whatever the day has to offer. So if you're kind of good at a lot of different types of math, apply for a job or something. Now, again, I'm not a mathematician, so I'm going to start with a punchline.
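The Schrödinger/Heisenberg equivalence above can be checked numerically. A minimal sketch — the particular state, unitary, and projector below are arbitrary choices of mine, not from the lecture:

```python
import numpy as np

# Check the equivalence from the lecture: evolving the state and then measuring
# Pi gives the same outcome probability as measuring the back-evolved observable
# U^dag Pi U on the original, unevolved state.

psi = np.array([1, 1]) / np.sqrt(2)   # |+> state
U = np.array([[1, 0], [0, 1j]])       # some unitary (the phase gate)
Pi = np.array([[1, 0], [0, 0]])       # projector onto |0>

# Schroedinger picture: evolve the state, then measure.
p_schrodinger = np.vdot(U @ psi, Pi @ (U @ psi)).real

# Heisenberg picture: evolve the projector backwards, measure on the original state.
Pi_evolved = U.conj().T @ Pi @ U
p_heisenberg = np.vdot(psi, Pi_evolved @ psi).real

assert np.isclose(p_schrodinger, p_heisenberg)
print(p_schrodinger)  # 0.5
```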
Group theory is a way to cheat at matrix multiplication. Using group theory, you don't have to write out all of the elements of the matrix in order to calculate matrix products — you can just look it up in a table. And for big enough matrices, that can mean you take polynomial time, or even linear time — a professional's polynomial, not like n to the 10 or something — rather than exponential time. But a mathematician would say: a group is a set endowed with an operation, et cetera, et cetera. Well, okay, we have to do that now. So a group is a set endowed with an operation. There's some set G with elements g, and there's an operation which takes two elements of G and gives you a third element. Typically, you write it with a little circle, or you don't write it at all. So I would write something like g · g′ = g″, right? And you can think about it like multiplication, even though for some groups it acts more like addition. The point of group theory is that you're abstracting away which mathematical operation it is — you're just talking about its properties. Right, so every group has an identity e, with e·g = g·e = g for all g in the group, and every element has to have an inverse g⁻¹ such that g⁻¹g = g·g⁻¹ = e. Okay, group theory has a ton of applications. There are a lot of different groups. You can do Rubik's cubes with group theory, you can do robot arms, you can do cryptography, you can do whatever you want. But we're going to look at a group with four-ish elements, which is just some Pauli matrices, okay? So our identity is going to be the two-by-two identity matrix, and we're going to have X, Y, and Z. We've already seen these matrices pop up a ton of times, so the fact that we will no longer have to multiply them by the end of the lecture is going to be very advantageous.
Anytime you see one, you'll be able to think about it in linear time rather than trying to remember where the minus i goes, for example. Don't worry, that one's correct. And we will see that these things form a group because they're closed-ish under multiplication. So, for example, XY is equal to — da, da, da, da — you'll see, look at how tedious this is, right? I already know the answer, but I'm writing so much. You can always define the group operation that you're going to use to be matrix multiplication where I delete the phase afterwards. That's fully mathematically legitimate: it provides a map from two elements of the group to a third. It's a group operation, deal with it. And then we can do a whole table — I forget what these tables are called. The little circle means I'm multiplying and deleting the phase, yeah? So I can take I, X, Y, Z across the top and I, X, Y, Z down the side, and down the diagonal I get identity, identity, identity, identity; and off the diagonal, X∘Y gives Z, Y∘Z gives X, Z∘X gives Y, and so on. So now I no longer have to multiply these matrices whenever I see them, right? If I've done the work once of showing that this set is closed under multiplication, then there's only a finite number of things the product can be, and I can look them all up in this table. Which — yeah, okay, I can see that you're not rolling in the aisles yelling hallelujah yet. It's not very impressive that I know how to get rid of two-by-two matrix multiplication. But don't worry, the notes go on to the next page: we will also be able to do this for tensor products of Pauli matrices. Oh, I left the cloth over here. I don't even need it, actually. Let's just make another weird ugly line and do tensor products. Did I already say that I would make these notes available? So if you want to write things down, you can, but everything I'm saying is roughly in here. So writing is optional if you're just not that kind of person.
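The table-lookup idea can be sketched in a few lines of code. Here's one way to do it — the dictionary layout and function name are my own choices, and I've kept the phases rather than deleting them:

```python
# "Cheat at matrix multiplication": single-qubit Pauli products looked up from
# a table instead of multiplying 2x2 matrices.
# PROD[a][b] = (phase, c) such that a * b = phase * c.
PROD = {
    'I': {'I': (1, 'I'),   'X': (1, 'X'),   'Y': (1, 'Y'),   'Z': (1, 'Z')},
    'X': {'I': (1, 'X'),   'X': (1, 'I'),   'Y': (1j, 'Z'),  'Z': (-1j, 'Y')},
    'Y': {'I': (1, 'Y'),   'X': (-1j, 'Z'), 'Y': (1, 'I'),   'Z': (1j, 'X')},
    'Z': {'I': (1, 'Z'),   'X': (1j, 'Y'),  'Y': (-1j, 'X'), 'Z': (1, 'I')},
}

def pauli_mul(a, b):
    """Multiply two single-qubit Paulis by table lookup: returns (phase, label)."""
    return PROD[a][b]

print(pauli_mul('X', 'Y'))  # (1j, 'Z'), since XY = iZ
```

Deleting the phase, as the lecture does, just means keeping only the second element of the returned pair.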
Okay, so a tensor product is how you mathematically describe operations that occur in parallel, and it's easiest to see the properties of a tensor product if you do it diagrammatically. So I'm going to use this funny symbol — who's seen the ⊗ before? The LaTeX \otimes? Most people, fantastic. Callum saw it, yeah, very useful that you've seen it. It's the target audience, guys I work with. Keep your hand down. So A ⊗ B is represented by two parallel wires, A and B: when I have a composite system with two subsystems in it, I do the operation A to one and B to the other. And you can see really easily one of the properties of the tensor product, which is how it composes with the series product. It's so important that I've got to write it high up — but then I'm leaving real estate, who cares? All right, so consider (A ⊗ B)(C ⊗ D). If I join these wires together, we can see, just by changing the bracketing — oh, like, if a tensor product wore pants, would it wear them like this or like this, right? — that this is equal to AC ⊗ BD. Where time runs in the opposite direction or whatever, but it doesn't matter, because the names are consistent from here to here. And this is a very valuable property, because anytime I can multiply two-by-two Pauli matrices and get the answer just from table lookup, I can do that factor by factor, and if I had a longer tensor product, I could do the same thing, and it would be a linear number of steps rather than some exponentially big matrix multiplication. Super convenient. I'm going to go all the way over here to introduce a very tightly related property, because I'm out of board — but that's also okay, one of the pro tips. If you have eigenvectors, and you take a tensor product, right?
So if A|a⟩ = a|a⟩ and B|b⟩ = b|b⟩, and I feed |a⟩ ⊗ |b⟩ into A ⊗ B, then I get the product of the eigenvalues: (A ⊗ B)(|a⟩ ⊗ |b⟩) = ab (|a⟩ ⊗ |b⟩). Oh yeah, and by the way, people often write tensor products just inside the ket, as a big list of symbols — that way you can do stuff like putting a single subscript on everything, right? Your theory of quantum mechanics includes a set of states, a set of operations, and a set of measurements. So we're going to take a subset of the states, a subset of the operations, and a subset of the measurements. Well, okay — any old time you do that, that's a sub-theory, but is it efficient? If you can describe all this stuff that I'm erasing right now using a polynomial amount of space and time, then it's efficient, and we will see how to do all these things. You can go read about this if you want to, so let me recommend some reading over here: Daniel Gottesman's PhD thesis. It's from 1997 — quant-ph/9705052 — but it's still like the most up-to-date reference on the basic theory. And if any of you have to write a PhD thesis soon, that's more or less how it's done. You can also go read Aaronson and Gottesman '04. Actually, if you just go to Daniel Gottesman's website at Perimeter, there's a big list of everything you should read instead of listening to me. But I won't tell you where that is right now — you can learn later. Focus for the time being. All right. So if a Pauli is a matrix, how do I describe a state using that matrix? Aren't states vectors? Sure, but I'll just say that the states we're going to use are going to be +1 eigenstates of some Pauli operator. And if I do this for one qubit, then X times the plus state that we saw earlier is equal to |+⟩. And for Y, there's the state |+i⟩ = (|0⟩ + i|1⟩)/√2 — one over root two times (1, i) — which Y fixes.
And then Z is maybe the easiest to describe, because its +1 eigenstate is the zero state, (1, 0). And likewise, I could put a minus sign on any of these. If I put the minus sign on the Z — asking for the −1 eigenstate — I get the one state, which picks up that 180-degree phase when you multiply by Z. So this works perfectly for single Pauli matrices. Let's do it for tensor products. What we notice is: if you take X ⊗ X on |+⟩ ⊗ |+⟩, then you get |++⟩, no problem. But XX acting on |−−⟩ — the notation is getting continually more compact — is going to be equal to (−1)² times |−−⟩, which is just equal to |−−⟩. So if I have a single Pauli that acts on multiple qubits, it doesn't specify a state on its own; it specifies a subspace. How do we solve that problem? First, we should talk about how big this subspace is. If you want to keep specifying states in that subspace — hopefully getting it down to one state — then you've got to know how big these subspaces are. All right, here's where it gets a little bit tricky. I'm going to denote the set of eigenvalues of an n-qubit Pauli by Λ_n for a minute. Every one of these Pauli matrices has +1 eigenvalues and −1 eigenvalues — except the identity, which has only +1 eigenvalues, and minus the identity, which has only −1 eigenvalues. But if I picture an n-qubit Pauli as a giant tensor product, and I know that its eigenvalues are products of the factors' eigenvalues, I can pop the first factor off the tensor product and say that Λ_n is the set {+λ : λ ∈ Λ_{n−1}} union {−λ : λ ∈ Λ_{n−1}}. And if you try a bunch of examples, you will quickly become satisfied that in each of these sets, for every +1 there's a −1 and vice versa. And so each of these sets is just going to be half +1's and half −1's — unless you do all identities or minus the all-identity, right?
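That half-and-half claim is easy to test numerically. A quick sketch — the particular Pauli string XZY is an arbitrary choice of mine:

```python
import numpy as np
from functools import reduce

# Check: an n-qubit tensor product of Paulis (other than +/- identity) has
# exactly half +1 and half -1 eigenvalues.

PAULI = {
    'I': np.eye(2),
    'X': np.array([[0, 1], [1, 0]]),
    'Y': np.array([[0, -1j], [1j, 0]]),
    'Z': np.array([[1, 0], [0, -1]]),
}

def pauli_string(s):
    """Build the matrix for a tensor product of Paulis, e.g. 'XZY'."""
    return reduce(np.kron, (PAULI[c] for c in s))

eigs = np.linalg.eigvalsh(pauli_string('XZY'))  # Hermitian, so real eigenvalues
n_plus = int(np.sum(np.isclose(eigs, 1)))
n_minus = int(np.sum(np.isclose(eigs, -1)))
print(n_plus, n_minus)  # 4 4, out of the 2^3 = 8 eigenvalues
```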
The all-identity operator will have all +1 eigenvalues, and minus the identity will have all −1 eigenvalues. So each one of these operators cuts the space in half, right? So now, with your little CS brains fully active, filled with potato salad and caffeinated, you're thinking: aha, I'm going to use a logarithmic number of Paulis. Each one cuts the space in half, so in order to go from 2^n down to some constant, I only need something like n of them — rather than the 2^n coefficients I would need in order to write out the ket. And you would be right, because you are a fictional version of yourself that lives only in my lecture notes. All right. So when a Pauli acts on the state like so — S|ψ⟩ = |ψ⟩ — we say it stabilizes the state. Let me erase some more stuff. Look at me checking the time, like I'm not just going to go until I run out of notes. Anytime you've got two operators that are both stabilizers — call them S_j and S_k, acting on some |ψ⟩ — then S_j S_k |ψ⟩ is equal to S_j |ψ⟩ if |ψ⟩ is stabilized by S_k, and if the vector is also stabilized by S_j, this is equal to |ψ⟩. So any state that's stabilized by these operators individually is also stabilized by their product, right? So we don't have to keep track of every possible Pauli that could be stabilizing our vector — we just keep a generating set. These stabilizer groups are abelian, which means all the elements commute. And the reason that no anti-commuting Paulis are allowed is because of this product relation: if we had two Paulis in there that anti-commute, S_j S_k = −S_k S_j, I could start taking products like S_j S_k S_j S_k, switch the middle two, and get −S_j² S_k². And the product of every tensor product of Pauli matrices with itself is the all-identity — you can see it on the diagonal of our table.
So this is equal to minus the identity, which, as we said, does not have any +1 eigenstates, so it can't stabilize anything. Right, so anti-commuting Paulis in your stabilizer group reduce your stabilizer space to zero dimensions — go home. So no anti-commuting Paulis allowed, and if you're going to write down a generating set, the generators should all be independent under multiplication. We will see quantum error-correcting codes in which they are not, and there are reasons why you would measure the redundant ones: in case there's a measurement error, you might want to say, oh, I have multiple different ways to reconstruct my generating set, so I can check whether they're all consistent. But for a minimal description, you don't include S_j S_k when S_j and S_k already stabilize the state in question. Perfect, so that's how you describe states. It takes about n² bits of memory, depending on how many bits per letter you use. And let's see, it's the 18th of April — very useful information — and it's about 2:06. I think we can afford one example before we move on. Who can tell me a legit non-trivial Pauli that stabilizes the Bell state (|00⟩ + |11⟩)/√2? I get to joke around; you don't. My bag is secure, your career hinges on this. ZZ, okay. So we apply ZZ: on |00⟩ we get (+1)², and on |11⟩ we get (−1)² — sorry, both +1 — so that stabilizes the state, very good. Do we need any more, or is ZZ the only stabilizer of this state? YY — someone came to play today. But I'm not going to put YY, I'll put XX. Why am I allowed to do XX instead of YY? Yeah — because when you multiply them together, you just generate whatever you want. And you'll notice that I've ignored the sign: with YY, there would be two i's that make a minus one, right? And if you apply XX, you flip: you get |11⟩ here and |00⟩ here, and the addition is commutative, so that gets you back your original state. All right, good. People are roughly getting it, fantastic. So now we're done describing the states of our efficient sub-theory.
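The worked example above can be checked with a statevector in a few lines — a sketch of the Bell-state stabilizers just discussed:

```python
import numpy as np

# Verify: ZZ and XX both stabilize the Bell state (|00> + |11>)/sqrt(2),
# and their product is -YY (the two i's in YY make a minus one).

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

assert np.allclose(np.kron(Z, Z) @ bell, bell)    # ZZ stabilizes
assert np.allclose(np.kron(X, X) @ bell, bell)    # XX stabilizes
assert np.allclose(-np.kron(Y, Y) @ bell, bell)   # (ZZ)(XX) = -YY stabilizes too
print("ZZ, XX, and -YY all stabilize the Bell state")
```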
Let's start going from state to state with unitary operations. So we consider U acting on |ψ⟩. We might pray that there will be some other stabilizer that stabilizes this new state — and I'm going to guess that there is, and then there is, right? So let's guess that there's some S′_j that stabilizes U|ψ⟩: S′_j U|ψ⟩ = U|ψ⟩, by hypothesis. Now, we know |ψ⟩ is stabilized by something, so we can stick a stabilizer in there — a common proof technique: U|ψ⟩ = U S_j|ψ⟩. We can also stick in an identity, U†U, like when I was doing the Heisenberg picture: U|ψ⟩ = (U S_j U†) U|ψ⟩. So S′_j = U S_j U† works — these operators are evolving forward in time, so it's a Heisenberg-ish picture, right? Also: do these new stabilizers commute? Do they generate a group? Let's prove that they do. Generation first: U S_j U† times U S_k U†. Everyone can already see where I'm going with this — the U†U in the middle is eliminated, and this is equal to U S_j S_k U†. So anytime I take the product of conjugated stabilizers, that's the same as if I conjugated the product of the stabilizers. So this product is a stabilizer of the new state because the product was a stabilizer of the old state — products of stabilizers are still stabilizers. My new operators form a group, just as my old operators did. And because products are preserved, group commutators are preserved — those are just fancy products — so it's still abelian, fantastic. But an extra condition, maybe not a necessary one, that I will nevertheless use today, is that these operators U S_j U† also have to be Pauli operators. We've got the group homomorphism property — mathematicians, I said homomorphism — a word you will want to know about in your life. At least if you're here; I mean, you don't need to tell your family over the holidays or whatever, unless they're also involved. But let's get into it.
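The conjugation argument above can be illustrated numerically. A minimal sketch — the choice of U = Hadamard and S = Z stabilizing |0⟩ is mine:

```python
import numpy as np

# If S stabilizes |psi>, then U S U^dag stabilizes U|psi>.
# Here Z stabilizes |0>, and conjugating by the Hadamard gives X,
# which stabilizes H|0> = |+>. Crucially, the result is again a Pauli.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

zero = np.array([1.0, 0.0])
assert np.allclose(Z @ zero, zero)              # Z stabilizes |0>

S_new = H @ Z @ H.conj().T                      # conjugate the stabilizer
psi_new = H @ zero                              # evolve the state
assert np.allclose(S_new @ psi_new, psi_new)    # new stabilizer fixes new state
assert np.allclose(S_new, X)                    # H Z H^dag = X, still a Pauli
print("H Z H^dag = X stabilizes H|0> = |+>")
```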
So: name, diagram, unitary matrix — just one last time I'm going to write out unitary matrices, and then we're done with unitaries — and then the Pauli maps. One of the things you'd want to say about Cliffords — okay, maybe I shouldn't give you these notes, I should update them first — is that because of this homomorphism property, whenever I want to calculate the effect of conjugation on a product, I can just calculate the effect on individual generators and then take the product afterwards. That means that in order to specify a Clifford fully, I don't need to know all 4^n possible things that may happen if I stick in some arbitrary Pauli. I only need to know the effect of the Clifford on a set of Paulis that generate the Pauli group under multiplication. And that generating property gets us down from an exponential number of Paulis to a linear number of generators. So when I complete these tables, now you will understand why they are complete, okay. Pauli maps. So: CNOT is a Clifford. Some people draw the target as an X; I usually put the bullseye thing. Notation is not consistent, and you know, if you don't like it, tough. Unitary matrices are consistent, though: in the zero sector of the control we have the identity, and in the one sector we have an X — that's the controlled-X operation. But we quit writing unitary matrices and begin writing Pauli maps: X ⊗ I maps to X ⊗ X, and I ⊗ Z maps to Z ⊗ Z, while I ⊗ X and Z ⊗ I map to themselves. The X propagating from control to target is the classical "if this, then do that", while the Z propagates information in the opposite direction, from target up to control. And this is the reason why, if you surround a CNOT with Hadamards, you get a CNOT running in the opposite direction — it shows up a little more visibly in these Pauli tables. Now, speaking of Hadamard: Hadamard is also a Clifford. Luckily, notation for this one is fairly consistent. Last time I'm going to write that unitary matrix, fantastic. And this one takes X to Z and Z to X, and it takes Y to −Y. You may or may not need to use that minus sign.
Watch yourself. And then finally, there's a gate which has no fixed name. Some people say P, you can say R; in this set of notes I've called it S. You can think of it as the square root of Pauli Z — although phase conventions sometimes come with a factor of two, so you always have to watch yourself. Write it as an S or a P or an R; the unitary matrix is diag(1, i). And this one takes X to Y and Z to itself. These three gates, as you compose them, can generate any Clifford. Will I prove this? No, it would take too long. But let's do a pretty involved example. I have 15 minutes left and we still have to do measurements, so I'm going to run long right now. But that's okay, everyone's having fun; I've stayed focused, I haven't goofed around. So let's prepare an entangled state, with XXXX among its stabilizer generators. I'm just going to write a big table and try to find a Clifford that does this. There is an algorithm for doing this. Will I tell you what the algorithm is? No — I'm going to do what you're going to do when you start working with Paulis and Cliffords, and just try some stuff. So I know, for example, that there's entanglement in here that I'd like to break, right? And you can tell when one of these states is entangled, because it has stabilizers of weight greater than one that intersect each other. If ever you see one of these tables that's block diagonal — some stabilizers up here and some stabilizers down here, but identities in the off-diagonal blocks — then you're dealing with two unentangled blocks. Maybe you can prove this to yourself; we don't have a ton of time. All right, but let's take a look at what would happen if I ran a CNOT between qubits one and two of this state. In order to figure out what happens to the XX on those qubits, I multiply the outputs of the two generators: XX times IX is XI, right? So my XXXX goes to XIXX. Let me say what's going on here: this is CNOT one-two. And ZZII? Well, let me be real fancy: CNOT is self-inverse.
CNOT times CNOT is the identity, which means if I read this table backwards, it's legit. ZZ goes to IZ, so ZZII goes to IZII. And nothing happens to this other Z — or does it? It picks up a Z on the control. But we're allowed to multiply stabilizers in order to generate a new generating set, so I'm going to multiply this one by that one in order to put an identity there. And now I can see that I have a one-by-one block — identities in that row and that column — so this qubit is separated out. So I can derive, and you can run this circuit forwards for yourself, that if I do this, what I get out is the stabilizer state that I want. I encourage you all to try this as an exercise, in order to keep up. I mean, not like right now, but you know, snap a picture, take a note, whatever it is you do. And that also turns out to be a pretty important state in quantum error correction, called a cat state, because it's a superposition of two macroscopically different bit strings. Right — so now that we've seen a Clifford circuit, I'm immediately going to start using them as an abstract proof technique, because we're going from zero to 100 today. Oh, we started a few minutes late — I get extra time, no one can stop me, right. Let's imagine, won't you? Let's imagine a cloth that I can use to wipe down the board. It's just like dealing with my kitchen: I have a five-meter-long table, and the thing I need is always at the other end. Let's imagine that, just like that last circuit, which is all the way over there, we've got some abstract Clifford C that we're going to use to prepare a stabilizer state. And at the input, we're going to put a bunch of weight-one Z stabilizers — these are like little zero states — and we're going to prepare a big cat state over here. What if I was to leave out a few of those input stabilizers? Well, then there's a whole set of stabilizer states that we could be describing.
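As a check on the cat-state example a few paragraphs back, here's a statevector sketch. The choice of generating set for the 4-qubit cat state (|0000⟩ + |1111⟩)/√2 is a standard one: XXXX plus nearest-neighbor ZZ's:

```python
import numpy as np
from functools import reduce

# Verify that the 4-qubit cat state is stabilized by XXXX, ZZII, IZZI, IIZZ.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(s):
    """Matrix for a tensor product written as a string like 'ZZII'."""
    return reduce(np.kron, ({'I': I2, 'X': X, 'Z': Z}[c] for c in s))

cat = np.zeros(16)
cat[0] = cat[15] = 1 / np.sqrt(2)   # (|0000> + |1111>)/sqrt(2)

for gen in ['XXXX', 'ZZII', 'IZZI', 'IIZZ']:
    assert np.allclose(op(gen) @ cat, cat)
print("all four generators stabilize the cat state")
```

Note the table structure the lecture describes: the ZZ generators all intersect the weight-four XXXX, so there's no block-diagonal split, i.e. the state is entangled.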
If we put a Z in here, that would pick out one particular state from that set. So this map produces logical qubits at the output: the operators you get by pushing a Z or an X in here through the Clifford become your logical operators, and they have the same commutation relations as the Paulis because they're unitarily equivalent to the Paulis. And so they act just like Pauli operators, but on logical qubits. I'm looking at the wrong set of lecture notes. All right, fantastic. So when you wanna do a table, or tableau, a lot of people say tableau. I mean, it's not enough to say table; that would be too English and low-class. You gotta have a tableau. So let's talk about one of my favorite codes. We will have two stabilizers, called S_X and S_Z. And then we'll have some logical operators, X bar one and X bar two, the bar denoting a logical operator, and Z bar one and Z bar two. These are gonna be Pauli strings, XXII, ZXIX and so on, for example, and each of these logical Xs or logical Zs has the appropriate commutation relations with its partners. Measurement is the hardest part of dealing with the stabilizer sub-theory. Luckily, we already detailed that any tensor product of Pauli matrices has plus-one eigenvalues, minus-one eigenvalues, and that's it. And that means you can describe the measurement projectors as (I + P)/2 and (I − P)/2. These two things are gonna be our Pi plus and Pi minus; those are the only two outcomes you can get from a measurement. And when you do a measurement on a stabilizer state, what happens depends on the commutation relations between the new Pauli that you're measuring and the existing stabilizers. Let's do the easiest case first. If the Pauli is in the stabilizer group, then the probability of getting plus, right, is just ⟨ψ|(I + P)/2|ψ⟩. But we just said, okay, P is a stabilizer, right? So P|ψ⟩ = |ψ⟩, and identity on |ψ⟩ is |ψ⟩. So this is just equal to one.
So this is just equal to one half of (⟨ψ|ψ⟩ + ⟨ψ|P|ψ⟩), right? One plus one is two, divided by two is one. So the probability of getting the plus-one outcome is one. And it should come as no surprise to you that when you project a stabilizer state onto one of its stabilizers, you just get that state back, right? It acts as the identity, both as a measurement and when you apply the operator. I think even in software that I've written, I would just skip such measurements sometimes if I can. All right, next case: the Pauli commutes with every stabilizer but isn't in the group. So we don't know what the probability is a priori, right? Because P here is some new operator, and psi can be some arbitrary superposition of eigenstates of these operators. It doesn't have to be plus one or minus one with respect to any of them. But we can talk about the output state. So our output: |ψ⟩ is gonna map to (I ± P)/2 times |ψ⟩, divided by the square root of p±, so it's normalized. Now take a stabilizer S times (I ± P). Here we get lucky: S commutes with I, S commutes with P, and so it commutes with I ± P. I can move it through here, S stabilizes |ψ⟩, and so |ψ⟩ maps to a state that is still stabilized by every S in the stabilizer, right? However, there's a new operator in town. Well, okay, there's almost a new operator in town. If I put P here, right, then: P times (I + P) is equal to P + I, yeah, which is equal to I + P. And P times (I − P) is minus (I − P), because P squared is equal to the identity, right? So this ± version of that Pauli is going to join the stabilizer group, right? You're projecting the state space down into a space that's half the size by adding a new constraint onto the state, right? And now, the final boss of the lecture. It looks like there's two pages, but there's just concluding remarks on the last page. If the Pauli we're measuring anti-commutes with a stabilizer, then we're gonna get to see some more of these commutation relations.
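Both behaviors can be checked directly with the projectors (I ± P)/2 on a small example. A numpy sketch on a Bell state; the function name is mine, and the last line previews the anti-commuting case:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

# Bell state (|00> + |11>)/sqrt(2); its stabilizer group contains XX and ZZ.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

def prob_plus(P, psi):
    """p_+ = <psi| (I + P)/2 |psi> for a Pauli (Hermitian involution) P."""
    proj = (np.eye(len(psi)) + P) / 2
    return float(np.real(psi.conj() @ proj @ psi))

XX = np.kron(X, X)
ZZ = np.kron(Z, Z)
ZI = np.kron(Z, I2)

assert abs(prob_plus(XX, psi) - 1.0) < 1e-9  # stabilizer: outcome certain
assert abs(prob_plus(ZZ, psi) - 1.0) < 1e-9  # same for ZZ
assert abs(prob_plus(ZI, psi) - 0.5) < 1e-9  # anti-commutes with XX: coin flip
```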
All right, so first off, let's say there's some old stabilizer S_j that anti-commutes with the measured operator. Then S_j times (I ± P)/(2√p±) |ψ⟩ is equal to (I ∓ P)/(2√p±) S_j |ψ⟩, which is (I ∓ P)/(2√p±) |ψ⟩. The plus-minus has become a minus-plus, so the new state is not stabilized by S_j. (Each outcome, by the way, comes out with probability one half in this case.) So if you're anti-commuting with at least one stabilizer, then that stabilizer does not stabilize the state anymore. It's removed from the table. But let's take the product S_j S_k of two stabilizers that we will assume both anti-commute with P: the fresh state, (I ± P) acting on |ψ⟩, is stabilized by the product of any two stabilizers that anti-commute with P, because the two sign flips cancel. So in order to figure out the new stabilizer generators, you make a list of everything that anti-commutes, and you pick one, your choice, because multiplying stabilizers just generates the same group. You blame that stabilizer for all of the anti-commutation, multiply the remaining anti-commuting stabilizers by it until you get, you know, the biggest possible set of commuting generators. That set of commuting generators and the measured operator become your new stabilizer, and the single, you know, whipping-boy stabilizer leaves the table. Has to go to the kids' table. Now with that, we can, all right, do we have just enough time? I think we have just enough time to do one last circuit simulation. So we're gonna put it all together and do a five-qubit algorithm in just a few minutes, where, I mean, no offense to Catherine, but I think if we had tried to do a five-qubit circuit in the first lecture, we would not have had a good time. 32-by-32 unitary matrices, we would have run out of board. All right, so let's do a circuit that Kiran and I both care deeply about. Muscle memory at this point, how often I write this thing down. I put a zero state in. I do a Z measurement at the end.
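That update rule fits in a few lines of Python if you ignore signs and phases, which a real simulator would track. A sketch; all names are mine, and Pauli strings stand in for a proper tableau:

```python
# Single-qubit Pauli products with phases dropped (sign-free sketch).
MUL = {
    ("I", "I"): "I", ("I", "X"): "X", ("I", "Y"): "Y", ("I", "Z"): "Z",
    ("X", "I"): "X", ("X", "X"): "I", ("X", "Y"): "Z", ("X", "Z"): "Y",
    ("Y", "I"): "Y", ("Y", "X"): "Z", ("Y", "Y"): "I", ("Y", "Z"): "X",
    ("Z", "I"): "Z", ("Z", "X"): "Y", ("Z", "Y"): "X", ("Z", "Z"): "I",
}

def mul(p, q):
    return "".join(MUL[a, b] for a, b in zip(p, q))

def anticommute(p, q):
    """Pauli strings anti-commute iff they differ (both non-I) on an odd # of sites."""
    clashes = sum(a != b and a != "I" and b != "I" for a, b in zip(p, q))
    return clashes % 2 == 1

def measure_random_case(stabilizers, P):
    """Update rule when P anti-commutes with >= 1 generator (outcome is 50/50)."""
    bad = [s for s in stabilizers if anticommute(s, P)]
    assert bad, "P commutes with everything: that's the deterministic case instead"
    scapegoat = bad[0]  # the 'whipping boy': blamed for all the anti-commutation
    new = [mul(s, scapegoat) if s in bad[1:] else s
           for s in stabilizers if s != scapegoat]
    return new + [P]    # the measured Pauli joins the table

# Two qubits in |00>, stabilizers ZI and IZ. Measure XI (anti-commutes with ZI):
assert measure_random_case(["ZI", "IZ"], "XI") == ["IZ", "XI"]
# Bell state, stabilizers XX and ZZ. Measure ZI: XX is sacrificed.
assert measure_random_case(["XX", "ZZ"], "ZI") == ["ZZ", "ZI"]
```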
So my initial stabilizer tableau is gonna have one stabilizer and then four logical qubits. So I'm gonna have IIIIZ as the stabilizer, and the logical operators are single-qubit Xs and Zs on the first four qubits: XIIII, ZIIII, and so on. Now push it all through the CNOTs. This X is gonna map to X tensor X, right? Likewise, this X is gonna map to XX, this X is gonna map to XX, this one is gonna map to XX. And the Zs, let's see: nothing is going to happen to them, because they commute past the dot no problem. The only Z that gets changed is the initial Z here, right? This Z stabilizer is going to become a product of Z stabilizers on these four qubits and this guy at the output. So I'll put, you know, maps under CNOTs to, I've got the corner of the board here, so I'll leave a little awkward space: ZZZZZ as my one stabilizer, and each logical X picks up an X on the last qubit, XIIIX and so on, while the logical Zs are unchanged. And that's it, everything else is an identity, yep. These columns line up, yeah, do they make sense? Yeah, great. Now I'm gonna measure that Z at the end. That operator commutes with the one stabilizer that I have, so it's just gonna join the table as a stabilizer. I'm also going to assume that I got the plus-one outcome. It is going to anti-commute with these logical operators. Oh no, I didn't cover that case. What happens if it anti-commutes with a logical operator? What you should remember is that, from all the way back over here with this big Clifford that I'm using to prepare states, a logical operator is just sort of a stabilizer in waiting. So I do the same thing, right? I multiply within the logical group in order to find stuff that commutes with the measured operator, and I turn the logical Xs into pairwise products: XXIII, XIXII, XIIXI. This generates the same logical group. So we'll notice that there are now no logical operators affecting the fifth qubit, right? There's one stabilizer affecting it, right? And I can remultiply to fix that too. So this state is now separable.
And what this means is: I have now carried out, through this circuit, a projective measurement of this weight-four operator. And I've added and removed one qubit in so doing. And these are just the remaining, you know, three logical qubits that satisfy that primitive stabilizer code. Still distance one, right? You'd still have a bad time if you tried to use this in real life, because, you know, an error only has to do one Z in order to mess your qubit up. All right. And that is it. Now you know how to do everything that we do on a daily basis: commuting Paulis through Cliffords, looking at the effects of circuits. Let us know if you're interested in more. And with that, I think Kiran can take it from there and introduce the kind of things we're actually interested in.
All right, welcome back. We're ready to kick off lecture two. Testing, testing. That works, okay, cool. All right. So, I'm Kiran Ryan Anderson. I'm over on the U.S. side of things, at what was formerly Honeywell Quantum Solutions. So now I'll be talking about, and hopefully Ben didn't scare you off with all the mathematics, the basics of QEC. We kind of reorganized the schedule a little bit, so it'll be him, he just gave a talk, and then it'll be me, then him, then me again. I'll have a lot of pictures. So, think of transistors: classical hardware is not at the scale where, for the most part, you have to deal with error correction.
To some degree, you do have to deal with it in your SSDs as well as DRAM. As everything gets squashed down, the cells get sensitive, so they have to deal with error correction. But generally, you don't have to worry about it for everyday consumer stuff. However, in data centers you start to have thousands or tens of thousands of processors, so that's where it starts to become important. It's common for servers to use ECC RAM, error-correcting code memory. So even common servers have error correction in them. And the main sources of error for classical systems are cosmic rays and decay from particles in the packaging. Actually, I think Los Alamos noticed an issue where they had these high-performance computers with a weird amount of errors, and it turned out that because they were a mile above sea level, the thinner atmosphere was actually increasing the error rate. And as we approach exascale computing for high-performance computers, this becomes really important: they have to start worrying about how to deal with errors due to cosmic rays. Qubits, as we know, are really fragile; just compare their error rates to their classical counterparts. Back in the day with the ENIAC machine in 1946, one of the first actually programmable electronic computers, they experienced faults constantly; do the comparable calculation for modern transistors and the rates are astonishingly low. So that's pretty impressive. That's why your cell phones and laptops, for the most part, don't have to deal with error correction. Sometimes you get a blue screen of death, and that's actually from a cosmic ray. Not always, especially on Windows. But quantum technology nowadays is at one fault per 10^3 operations, or 1,000 operations.
That's significantly worse than vacuum tubes, and that's kind of sad. Although luckily these aren't directly comparable, because we know that certain algorithms, we hope, have an exponential speed-up compared to classical algorithms. So they're not completely comparable, because one's far more powerful for certain algorithms than the other. But yeah, we're kind of living in the NISQ era, as people like to say: noisy intermediate-scale quantum computing. Eventually, we want to tackle these currently intractable, difficult problems. So for example, breaking RSA using Shor's algorithm: you might need roughly 10^12 CNOTs in order to do that. So that means you need roughly an error rate of less than 10^-12. Those are really small numbers, and that's kind of hard to get in hardware; there's physics in the way. So you need something else. And the main consensus is that we need quantum error correction. So as mentioned before, quantum error correction is all about spreading the information across multiple qubits, or qudits if you get fancy. The logical action of a quantum error correction circuit is just the identity: it's doing fancy identities, as Ben mentioned. Yeah, so you imagine that the user will supply some algorithm to our quantum device, which might be a hybrid system; maybe it's a coprocessor for some high-performance computer. But in general, this gets sent down, and quantum error correction will do its magic under the hood. And there's a variety of things that it needs to do that you might not be familiar with: potentially magic state factories, these Clifford and Pauli operations on the logical level, like Ben mentioned, logical memory, and so forth. But 99% of it is quantum error correction. So quantum error correction, as I mentioned, is about encoding information over multiple qubits.
And basically the name of the game is this: once you have your encoded system, you're using the physical qubits as kind of a substrate for this other system that's built on top of them. And because you're building a system on top of another system, what you can do then is play a game where you try to detect these faults and mitigate them faster than they can corrupt your higher-level system, your logical system. And that's the main idea of quantum error correction. It's effective. And now the screen went black. And that's fun. And now it's back. So we experienced an error. Luckily the system recovered. Also, technically, faults are about the lower abstraction level, the lower system, while errors are about what the user sees, so on the logical level. Often, though, even in the quantum error correction community, those words are used interchangeably. But yeah, technically a fault can cause an error, but it doesn't necessarily need to. It could be a benign fault. So for example, over here, if your game's freaking out, the user's seeing something going wrong, some bad behavior on the higher, logical level of the system. All right. Some of you might be familiar with the basics of quantum error correction, some of you might not. So I'm going to do the boring thing and go through the repetition code. But it teaches you a lot about how quantum error correction works, and it's very simple, so it's kind of a good example. The repetition code is really a classical code; we can apply it to our quantum systems. But on the classical level, you really just have to worry about bit flips, changing between zero and one by, for example, cosmic rays. Often you think of your bits going through some error channel that might cause a problem. The whole name of the game, both in classical and in quantum error correction, is redundancy.
So a very naive and simple way to protect your information is just to repeat your message. This is called encoding, where we take our original message and embed it into a larger system. Then, for example, on the classical side, a bit-flip fault happens on the first bit and flips it. But then we use a scheme for our decoding process, the opposite of encoding. Decoding is trying to extract the message that we originally intended to send. For the repetition code we use, for the most part, majority vote. So what's the majority of this codeword? It's zero. And so you'll be protected against any one bit-flip fault. And you can see here, once again: a single bit flip happens on any of the bits, you get back the original message, and that's great. However, if two bit flips happen, that's not so great: then the majority-voting scheme gives the opposite message. So we'll get back one, for example, with this first example. Zero goes to one, one, zero, and the majority vote says that the outcome should be one, and that's not right. So we're not completely protected; we're only protected up to a certain number of faults. But effectively what we're doing is taking a channel that had a probability p of errors and changing it to order p squared. So we're suppressing the noise with this bit-flip code. And then of course, if that's not enough and the error rates are high, then we can just make the message longer. With five bits we can suffer two faults and still recover the message, because the three other bits are the majority and their vote is stronger than the dissenting two bit flips, and so on. And in general, this length, the number of bit flips needed to change one codeword into another, is known as the distance, and that's used in quantum error correction as well. It's an important concept. The number of faults t that you can correct is just a little bit lower than half the distance: t = ⌊(d − 1)/2⌋. So you can apply this equation: for d of three, five, and seven, you'd get t of one, two, and three. Yeah.
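The encode/majority-vote scheme in code, as a small Python sketch (function names are mine):

```python
from collections import Counter

def encode(bit, n=3):
    """Classical repetition code: repeat the message bit n times."""
    return [bit] * n

def decode(bits):
    """Majority vote over the received bits."""
    return Counter(bits).most_common(1)[0][0]

word = encode(0)
word[0] ^= 1                         # one bit flip: still decodes correctly
assert decode(word) == 0
word[1] ^= 1                         # a second flip defeats the 3-bit code
assert decode(word) == 1             # majority now (wrongly) says 1
assert decode([1, 1, 0, 1, 0]) == 1  # the 5-bit code survives two flips
```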
And then, for the repetition code, the distance is the number of bits you have, but that's not true of general codes, even other classical codes. And then we can see that, if we were to keep scaling this scheme out to infinity, which is not something you'd actually do, then we could suffer up to (not exactly, but up to) half of our bits flipping and we'd still be able to recover: as long as a little more than half didn't flip, the majority wins and we get the message back. So this is basically what's known as a threshold: how does the family of codes perform as you increase the parameter d out to infinity? And therefore you know that, as long as the physical error rate is lower than that threshold, there's some large code in the family that can arbitrarily suppress the noise. Okay. So in quantum computing, often what we do is basically steal ideas from classical computing or classical information theory. And that's what we're going to do now. Quantum is a bit different from classical because of quantum mechanics. It kind of holds us back in what we can do, and it's actually amazing that we can still do quantum error correction given all the limitations of quantum computing. Errors can be correlated; you can also have leakage, where noise takes you outside of the qubit subspace; and so on. But for the most part we only have to deal with Pauli errors, and we'll talk later about why that is. Yeah. Just mentioning that there are many factors in why quantum computing is so noisy: the environment, the finite precision in your ability to control stuff, as well as qubits potentially talking to each other, which is not great. Quantum computing is just inherently noisy. If it wasn't that way, then we'd see quantum effects all around us.
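You can see both the suppression and the threshold behavior with a quick Monte Carlo sketch. Assumptions are mine: independent bit flips at rate p, and majority-vote decoding as above:

```python
import random

def logical_error_rate(p, d, trials=200_000):
    """Monte Carlo estimate: the d-bit repetition code fails when more
    than d//2 of the bits flip (independent flips at rate p)."""
    fails = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(d))
        fails += flips > d // 2
    return fails / trials

p = 0.05
for d in (1, 3, 5, 7):
    print(d, logical_error_rate(p, d))
# The rates fall roughly like p^((d+1)/2): below threshold (p < 1/2 for
# this family), growing d suppresses the logical error arbitrarily.
```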
So we really need to isolate these systems in order to actually make use of the weird nature of quantum mechanics. We also have to deal with the annoying thing called the no-cloning theorem. That means that for a general state you can't create a perfect copy. If you've taken quantum computing classes, then you would have heard about this pretty early on, probably. But yeah, so we can't just copy, like we did with the bits, where you copied to make 0, 0, 0 and then 1, 1, 1. We have to do something smarter and different. We also, of course, have to deal with measurement collapse. If we measure an observable, it projects us, something that we've talked about, and it effectively collapses us to a definite classical state for that observable. And that's not good if we're doing a general computation. You want to keep having this quantum evolution, so you can't just collapse it; that would be bad. So you need to figure out ways around that, for example, measuring operators that don't learn information about the encoded state, so they don't collapse it. And then, annoyingly, it also turns out there are no-go theorems, and we need clever ways around these no-go theorems. We'll maybe talk briefly about those. Let's see. Yeah. Oh, and also, because everything's so noisy, the environment, the qubits, the interactions, you have to make sure you do all of this in a fault-tolerant fashion. You can't just go up to a logical qubit, sit around in that, then go back down to bare qubits, do your gates, and go back up. Because once you do that, you have a giant hole in your whole scheme, and that's a place where noise will happen and just get mapped back into your logical space. So you have to do everything in a coded manner.
Ben already mentioned using Cliffords to go from one sort of code to another via these encoding circuits, and this is the way around the no-cloning theorem. We're not going to copy the state; instead, we're going to coherently evolve it so the information spreads across other qubits. So for example, we start out with our qubit in some complex combination of 0 and 1, and then we use this CNOT and this other CNOT to spread the information. With the CNOT, we can either think in terms of how it changes the Paulis, or write it in terms of how it acts on these basis states. In quantum error correction we tend to think more in the Heisenberg picture, with the Pauli operators and whatnot. But we can see that if the control qubit is a 1, then it'll flip the target qubit. So given that these other states start out in 0, that means the input only flips them along with itself; we do not get a tensor product of copies. That would be like doing the repetition code with copying, but (oh, I went backwards) we're not doing that. We're enlarging the basis states. So, using the bars again like Ben did: now our logical 0 is 0, 0, 0 and our logical 1 is 1, 1, 1, but we're coherently entangled between these bigger states. The information is no longer stored in an individual qubit; it's spread across these three qubits. Let's see. All right, so we also would like to do something with these states, not just protect them. So we also need to think about the logical operators. A simple set of logical operators for these stabilizer states is the logical X, Y, and Z. Logical X basically acts like a bit flip. So what's the logical... oh, okay, I did my animations in a different order. And logical Z adds a phase to our state. So how do we get an effective logical X for our logical qubit? We just apply X three times.
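That spreading step can be checked with a tiny statevector simulation. A sketch assuming numpy; the helper and the qubit-0-is-the-leftmost-bit convention are mine:

```python
import numpy as np

n = 3
# |psi> = alpha|0> + beta|1> on qubit 0, two ancillas in |0>.
alpha, beta = 0.6, 0.8
psi = np.zeros(2**n)
psi[0b000] = alpha
psi[0b100] = beta

def cnot(c, t):
    """2^n x 2^n CNOT with control c and target t (qubit 0 = leftmost bit)."""
    U = np.zeros((2**n, 2**n))
    for b in range(2**n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1
    return U

psi = cnot(0, 2) @ cnot(0, 1) @ psi   # first CNOT 0->1, then CNOT 0->2

# One entangled state alpha|000> + beta|111>, not three copies,
# so no-cloning is never violated.
assert abs(psi[0b000] - alpha) < 1e-9
assert abs(psi[0b111] - beta) < 1e-9
```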
We see that it goes from our logical 0, which is 0, 0, 0, to our logical 1, which is 1, 1, 1. So this is our logical bit-flip operator. Logical qubits are just like any other qubits; they're just encoded qubits living on top of other qubits. And the weight-one Z operators, a Z on any single one of the three qubits, are all equivalent logical Z operators. Also, I'll share these slides out, so you don't have to rush madly to write all this down. Unless you want to. And just like Y is equal to iXZ, logical Y is equal to i times logical X times logical Z. So you get something like this, for example, if you use the weight-three versions of the logical X and logical Z. Okay. So how do we avoid measurement collapse? I already kind of hinted at it previously: we measure observables that commute with the logical operators. If they commute, that means we can measure these things without disturbing the logical information. So we see that if we measure the observable ZZI or IZZ, both of these commute with the logical operators. I mean, obviously each commutes with the logical Z operator, because it's made of Zs and identities. But with the logical X operator, we see that it anti-commutes twice, which is the same thing as commuting. I assume people understand what I mean by commuting and whatnot; I see a lot of heads nodding. Good. Okay, cool. And we can also see that if we measure on the logical zero or logical one, we get a plus one out for both of these observables. Also, for any superposition of these logical states, you'll get a plus one out. So whenever we do the encoding circuit, we're preparing a state that's a plus-one eigenstate of these observables, these stabilizer generators, like Ben mentioned; they're sometimes also called checks. If we measure these observables, will they project us to something else? They won't. So this state is good. So Ben already showed kind of a circuit like this.
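Those logical-operator and commutation claims are quick to verify numerically. A numpy sketch; the names are mine:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def kron(*ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

zero_L = np.zeros(8); zero_L[0b000] = 1   # |000>
one_L  = np.zeros(8); one_L[0b111] = 1    # |111>

XL = kron(X, X, X)    # logical X (weight three)
ZL = kron(Z, I2, I2)  # a weight-one logical Z

assert np.allclose(XL @ zero_L, one_L)    # X_L |0_L> = |1_L>
assert np.allclose(ZL @ one_L, -one_L)    # Z_L |1_L> = -|1_L>

# The checks ZZI and IZZ commute with both logicals
# (anti-commuting twice is the same as commuting):
ZZI = kron(Z, Z, I2)
IZZ = kron(I2, Z, Z)
assert np.allclose(ZZI @ XL, XL @ ZZI)
assert np.allclose(IZZ @ XL, XL @ IZZ)
assert np.allclose(ZZI @ zero_L, zero_L)  # +1 eigenstate of the checks
assert np.allclose(IZZ @ one_L, one_L)
```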
Here is, and I won't go too deeply into these circuits, in my second half I'll basically go through this in a different way, but just to give you an idea of how you might measure these observables: what you often do, and this is not necessarily the only way, is include an additional ancilla and then entangle it using CNOTs or some other operation. So this is the circuitry that you might use in order to measure this observable ZZ. We'll later explain in more detail why that works. And here's more circuitry, and I'll quickly go to a more abstract version of this, but here we see the encoding circuit at the very beginning again, which produces our entangled alpha times 0, 0, 0 plus beta times 1, 1, 1 state. And then we use this circuitry to measure the observable ZZI, and then we use this circuit to measure this other observable here. So we introduce these two ancillas, or we could reuse one ancilla if we can reset it. We can see that if an X error happens: Ben mentioned that with a CNOT, if an X occurs on the control, then it'll propagate forward and branch, so you get an X, X, and the measurements go from what should have been a plus-plus outcome to a minus-plus outcome. So we don't measure the state directly, but we measure observables that can help us infer what might have happened. There could be other error combinations that could lead to the same change in these measurements. These measurement results are sometimes called syndromes, kind of like you have a disease and you want to know the symptoms the patient is experiencing; but yeah, these help us infer what's going on.
And this is exactly the same picture, so let's put the circuits away and just reason with operators. We see that if an X happens, it will anti-commute with this observable. Ben already went through how a stabilizer gets updated by conjugation, USU†; here, commuting the X past this ZZ operator flips the sign of the measurement, because they anti-commute. And if you had two Xs, it would anti-commute twice, and so you'd still be plus one. So if this X happens, it anti-commutes with this observable, so it flips that outcome, and we get minus one, plus one. I'm just showing two different ways that you might reason about this sort of thing. It's probably easier to think in terms of the diagrams rather than worrying about how you might implement it. If an X happens in the middle, it'll anti-commute with both of the observables and they'll both be minus one, so that kind of pinpoints it: okay, this is the most likely error. It's most likely that a single weight-one error happened, rather than some weight-two or weight-three thing that triggers the same syndrome. And likewise, kind of the reflection of the previous example: if you apply an X over here, it anti-commutes, so you get plus one, minus one, and given this set of results, the most probable cause is that single fault. So in quantum computing, that's another thing: everything's probabilistic, and that includes quantum error correction. You only ever have a probability of fixing things; you never can know for sure whether you did it or not, because you don't know what the environment did.
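That inference step is just a lookup from syndrome bits to the most likely single-qubit error. A pure-Python sketch (sign bookkeeping only, no state vector; the names are mine):

```python
# Syndromes of the 3-qubit bit-flip code: the outcomes of measuring ZZI
# and IZZ, written as bits (0 for +1, 1 for -1). An X on qubit k flips
# the sign of every check it anti-commutes with.

CHECKS = ["ZZI", "IZZ"]

def syndrome(x_error):
    """x_error is a string like 'XII'; returns one bit per check."""
    bits = []
    for check in CHECKS:
        clashes = sum(e == "X" and c == "Z" for e, c in zip(x_error, check))
        bits.append(clashes % 2)
    return tuple(bits)

# The lookup table a decoder uses: syndrome -> most likely single error.
table = {syndrome(e): e for e in ["III", "XII", "IXI", "IIX"]}

assert table[(0, 0)] == "III"   # nothing fired: most likely no error
assert table[(1, 0)] == "XII"   # only the left check fires: blame qubit 0
assert table[(1, 1)] == "IXI"   # both fire: the middle qubit
assert table[(0, 1)] == "IIX"
```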
This is showing just an example where, if two X's are applied, anticommuting twice with that observable is overall a commutation — it commutes — but it still anticommutes with the left-hand check. And if you were to look at this table — oh, sorry, I've switched notation: in the table I'm using zeros and ones, zero for a plus-one and one for a minus-one observable. It's just a different way of encoding this stuff: if you're using bits, you usually use zero to mean plus one and one to mean minus one. So if you look up in this table this minus one, plus one — which is equivalent to one, zero — it says to correct the qubit over on the left-hand side. If you do that — oh, you just applied a logical operator, so it flips the outcome. That's not great. So yeah, that's the problem: you don't know exactly what the environment does. Sometimes the environment does the less probable thing and you end up screwing up the system, and there's no way to tell the difference. And then, of course, if a single Z fault happens, it adds a phase. It's a logical operator, so everything commutes with it — you don't even get a sign that something might have gone wrong, and that's not great. This code is essentially a classical code, so it can only deal with bit flips. We measure the checks in a way that is non-destructive to the qubits, because we want to make sure we don't project into a classical state while we keep doing computations. But in the end, if you want to do the logical version of a destructive measurement, we can just measure the qubits in the Z basis, and that effectively collapses you to the classical version of the repetition code — I mean, this was already a classical code living in a quantum system, but this makes it more classical: zeros and ones. You can still use the same sort of lookup table for the classical version, this time by taking the parity of those bits. Taking the parity of bits is the same thing as taking the XOR of the inputs: if we have input A and input B, the XOR output basically gives you the parity. If both inputs are the same, you get a zero; if the inputs are different, you get a one. That's the same thing as the parity — odd parity gives you a one, even parity gives you a zero. So you can take the parity of the bits here and here, associated with those qubits, and use the same table: here we get one, zero, so that says flip the first bit; here you get zero, zero, zero. And then you can do the majority vote if you want, or you can XOR the string, and so on. These rules turn out to be effectively the same thing as majority voting, just encoded in a different way — you could have used that instead, but they're equivalent. Oh, and there's the table for the XOR stuff. Great, plenty of time. So now the Shor code, and this will really be the last thing I'll cover — it's all been a pretty basic introduction to how quantum error correction works, as the title said. We need to deal with X, Z, and Y noise. Well, Y is a combination of X and Z, so if we can deal with both X and Z, then we can deal with Y — we only really have to worry about X and Z. X and Z generate Y; if you think in terms of the Pauli stuff Ben was talking about, they're a generating set for the single-qubit Pauli group. If we wanted to deal with Z faults, we could have just done the thing where we take the encoding circuit of the bit-flip repetition code and slap on Hadamards, because Hadamards change you from the Z basis to the X basis. Then the observables you would measure would be XX on neighboring pairs, and that would allow us to deal with the phase flips. However, X's commute with those observables, so then we couldn't detect them. To go into this — I think Ben might dabble with it in his next talk, but there's more explanation in the books.
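The lookup-table decoding just described is equivalent to majority voting on the three bits. A minimal sketch (plain Python, hypothetical helper names — not from the lecture notes) showing both decoders agree on every single bit flip:

```python
# Minimal sketch of three-bit repetition-code decoding two ways: via the
# parity/XOR lookup table from the lecture, and via plain majority voting.
LOOKUP = {(0, 0): None,   # trivial syndrome: no correction
          (1, 0): 0,      # flip the first bit
          (1, 1): 1,      # flip the middle bit
          (0, 1): 2}      # flip the last bit

def decode_lookup(bits):
    bits = list(bits)
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])   # the two parity checks
    if LOOKUP[s] is not None:
        bits[LOOKUP[s]] ^= 1                     # apply the correction
    return bits[0]                               # all bits now agree

def decode_majority(bits):
    return int(sum(bits) >= 2)
```

Running both on every codeword with at most one flipped bit returns the encoded bit in every case, which is the "equivalent, just encoded differently" point above.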
Both the QEC textbook edited by Lidar and Brun and Nielsen & Chuang cover it. But effectively, for any unitary, you can think of it as a complex linear combination of the Pauli operators — just like states are complex combinations of the basis states, the Pauli operators are effectively a basis for unitary operators. And so, once again: just like if you can deal with X and Z errors then you can deal with Y, well, if you can deal with X, Z, and Y — it's kind of like summing over the different histories of the possible errors that can happen — then you can deal with arbitrary unitary errors. It's kind of hand-wavy, but there's more proof in those books; I don't really feel like going through all the math. Okay, so how do we deal with both X and Z? Shor — who's famous for Shor's algorithm — came up with the Shor code in 1995. I think Steane came up with the Steane code around the same time; maybe it was accepted to a journal a little later, in 1996. But they were all dabbling in the same sort of idea. People had kind of poo-pooed quantum computing back in the day because of its inherent noisiness, and said this will never work — and then the field of quantum error correction was worked out slowly, and they were able to find ways of actually doing it, which kind of dealt with those naysayers. So for the Shor code: we have the encoding circuit right here for the phase-flip code — you just slap on Hadamards at the end — and then, for each qubit of that original encoding circuit, we apply the bit-flip code. We take those individual states and encode each one into the bit-flip code. To see that better: here are our original logical one and logical zero states, and then, to do the logical plus version, you'd have this state, because in general the Hadamard on zero gives the plus state.
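The claim above that the Paulis are a basis for operators can be checked numerically: since distinct Paulis are trace-orthogonal, the coefficient of P in a 2x2 operator U is Tr(P U)/2. An illustrative NumPy sketch (not from the lecture materials):

```python
import numpy as np

# Numerical sketch of the Pauli-basis claim: any 2x2 operator is a complex
# combination of I, X, Y, Z, with coefficients Tr(P @ U) / 2.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = dict(zip("IXYZ", [I, X, Y, Z]))

def pauli_coeffs(U):
    # Tr(P P') = 2 * delta, so this projects out each coefficient.
    return {name: np.trace(P @ U) / 2 for name, P in PAULIS.items()}

theta = 0.3
U = np.cos(theta) * I + 1j * np.sin(theta) * X   # exp(i * theta * X)
c = pauli_coeffs(U)
```

Here the decomposition recovers exactly the cos/sin pair from the Taylor expansion, and summing `c[P] * P` over all four Paulis reconstructs U.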
The plus state is (|0> + |1>)/sqrt(2) — which you've seen a bunch of times; maybe I even wrote it down earlier. Anyway, this is just taking that and substituting zero-zero-zero for zero and one-one-one for one, so that's the logical version of the plus state and the logical version of the minus state. So if we started out right here — this is just the normal bit-flip encoding circuit if we ignore the Hadamards — and then we apply the Hadamards, we get plus, plus, plus. So if we're trying to encode zero, we go from zero to zero-zero-zero, and then applying the Hadamards you get plus-plus-plus; that's the red box there. And then if we apply these encoding circuits here, it converts those pluses over to this state — so, three copies of this state. Now, it's kind of funny: in the quantum error correction book where this is discussed, Dave Bacon — a famous QEC person — had a quote about it; I guess he found it really beautiful, which is funny to have in a textbook. Anyway, here are the logical operators and the stabilizer generators. It's not super important, but you can see that it kind of looks like the repetition code: we were measuring these sorts of observables for the bit-flip repetition code, and this looks like a bunch of bit-flip codes, with some X stuff down here. The columns represent qubits and the rows represent the individual stabilizer generators. And if we were to apply, say, an X to the first qubit, we can see that this stabilizer generator that we measure for the Shor code anticommutes with it, so we're able to detect that an error has happened. Likewise, a Z anticommutes with this stabilizer generator, and a Y anticommutes with both of those.
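The table being described — which Shor-code generators each single-qubit error trips — can be reproduced mechanically from the commutation rule. An illustrative sketch (helper names hypothetical, stabilizer list is the standard nine-qubit Shor code):

```python
# Syndrome computation for the nine-qubit Shor code: a bit is 1 exactly
# when the error anticommutes with that stabilizer generator.
SHOR_STABS = [
    "ZZIIIIIII", "IZZIIIIII",   # Z-type checks, first block of three
    "IIIZZIIII", "IIIIZZIII",   # second block
    "IIIIIIZZI", "IIIIIIIZZ",   # third block
    "XXXXXXIII", "IIIXXXXXX",   # X-type checks across blocks
]

def commutes(p, q):
    anti = sum(a != 'I' and b != 'I' and a != b for a, b in zip(p, q))
    return anti % 2 == 0

def syndrome(err):
    return tuple(int(not commutes(s, err)) for s in SHOR_STABS)
```

An X on qubit 1 trips only the first Z check, a Z on qubit 1 trips only the first X check, and a Y trips both — exactly the pattern read off the table in the lecture.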
And so on — you can do all the combinations, and you can see that different combinations set off different stabilizer generators, so you can detect these things, uniquely identify the weight-one errors, and therefore correct them. So it's a distance-three code: it can deal with one fault. If it were a distance-five code, it could deal with two faults, and so on. Anyway, the same sorts of ideas apply. We need to not just measure these observables and come up with corrections; we also need to do some encoded operations — you might think about how that happens, and we might discuss it in later talks. How do we deal with faulty measurements? If the measurement itself has some probability of going bad, what does that look like — do you need to change how you measure things and how you approach this? And so far we've only been talking about faults injected before the parity measurements — these observables we've been measuring — and we've seen the code can deal with those. But what happens if we think about noisy operations, where the faults are injected inside that circuit? What does that look like? We'll talk a bit about that as well, so stay tuned for Ben next hour. Are there any questions? Let's take a break for a few minutes, then. — I found it really interesting, and kind of funny, that you mentioned the book. — Oh nice, yeah, it's a good book; I wrote part of that. It can be a little chaotic, though, since it's by a bunch of different authors, but it's a good resource. There are probably other internships you might want to consider, too. We do have someone who got a master's doing quantum error correction, so a Ph.D. is not necessarily a requirement.
It's maybe not the usual path, but it's still possible. — So the stabilizers start out, by definition, at plus one? — Yeah: if you were to measure the Z checks on the encoded state, both would give plus one, and after a fault you'd see that minus-one, plus-one pattern — as long as it's the state you want, both checks come out plus one. — My own simulators are really fast; you can look up my stuff. But the basic thing a lot of people use is CHP, and I have a more complicated version based on it. It uses that tableau to represent things, and usually you only have a few ones in either the column direction or the row direction, so effectively I have a version that represents things both row-wise and column-wise, exploiting sparsity. Then you figure out how to update those two representations against each other in an optimal way, and it gets even faster. I
have two different lists of stabilizers: one maps each qubit to the stabilizers that touch it, and the other maps each stabilizer to the data qubits it acts on. There's both a C++ and a Python version; this will link you to the GitHub that has the code. Effectively, I was trying to implement CHP in Python and it was super slow for large codes. I kept messing with it and, by accident — I didn't realize I was doing something smart — whenever I analyzed the complexity, it was actually much faster. Sometimes, writing things in Python, you stumble upon something better. — Do I use it personally? I don't, but it does seem to be a useful thing, at least for quantum error correction; it's probably useful for optimizing lattice surgery and such. Someday I want to learn it — just too many things to learn. — There's both the dodo book and the new one they put out, but I haven't really read that one. So there's the giant textbook with the dodo on it, and then a newer one — by, what's his name, Bob, the guy at our company; both of them — that's supposed to be aimed at high schoolers and such. Okay, a little messing with the mic... alright, and we're ready to go. Welcome back for lecture three, and we have Ben Krieger again. Alright, so: is it enough to have a code that can just detect and correct Pauli errors? It is, and I will show that a coherent rotation by a Pauli is itself correctable. So maybe some of you know already — probably most of you — that I can have U_P(theta) equal to some exponential of i times a Pauli times theta, some angle. This is going to be some unitary operator, and you can do a bunch of Taylor expansion, knowing that the square of a Pauli is the identity, to get that this is cos(theta) times the identity plus i sin(theta) times P. So it's just some linear combination, with complex
coefficients, of operators — and one of them is a Pauli. What would happen if we applied this unitary to a state |psi> stabilized by some stabilizer S, and then measured S? Assume S anticommutes with the Pauli — because, spoiler alert, if S commutes with the Pauli, nothing happens. So let's set up S times (cos(theta) I + i sin(theta) P) acting on |psi> — do we even need this? Ah, no, sorry: we're going to project into one of the two eigenspaces, with projectors (I +/- S)/2. So we expand out: the projected state is cos(theta) (I +/- S)/2 |psi> plus i sin(theta) P (I -/+ S)/2 |psi> — the sign swaps on the second term because S and P anticommute. For a plus-one outcome, the second term cancels out completely, and the resulting state, once you normalize — the square root of the outcome probability and so on — is just |psi>. For a minus-one outcome, which happens with probability sin^2(theta), you get P acting on |psi>. So whenever you measure a stabilizer that anticommutes, even though there was some coherent rotation — the error was not just some Pauli you applied — the projective measurement collapses you into a space spanned by Paulis acting on the original state, and you can just continue with your regular error-correction formalism from there. The full proof that you can correct arbitrary dynamics is more involved, and it even involves non-stabilizer codes; if you want to know more about that, you should look at chapter 2 of Brun and Lidar — the book they edited. I would say they wrote the book on quantum error correction, but they didn't write it, they edited it — and it is also called Quantum Error Correction. There's a teaser there for the more general applicability of these stabilizer codes. Okay, so all this is to say that it is enough to correct Paulis. The remaining question should then be: how many Paulis can you correct with a certain number of qubits?
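The collapse argument above can be checked numerically for the simplest case: apply the coherent rotation exp(i theta X) to |0> (stabilized by Z) and measure Z. An illustrative NumPy sketch (variable names are my own):

```python
import numpy as np

# Numerical sketch of the collapse argument: a coherent X rotation on |0>,
# followed by a Z measurement, leaves either |0> or X|0> -- a Pauli error.
theta = 0.4
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

psi = np.array([1., 0.])                          # |0>, stabilized by Z
U = np.cos(theta) * I2 + 1j * np.sin(theta) * X   # exp(i * theta * X)
phi = U @ psi

P_plus, P_minus = (I2 + Z) / 2, (I2 - Z) / 2
p_plus = np.linalg.norm(P_plus @ phi) ** 2        # cos^2(theta): back to |0>
p_minus = np.linalg.norm(P_minus @ phi) ** 2      # sin^2(theta): left with X|0>
```

The outcome probabilities are cos^2(theta) and sin^2(theta), and the two post-measurement states are proportional to |0> and X|0> respectively, so ordinary Pauli error correction takes over from there.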
Kieran showed in the last lecture that we can do it with 9 qubits — it says "nince qubits" in this PDF, but that's a typo. Can we do better? One way to answer this question is by imagining that each possible syndrome — every list of plus-or-minus ones, or zeros and ones, that you could get out of measuring your stabilizers — corresponds to some unique Pauli error. Recall that the distance is the minimum weight of a logical operator: the number of things that have to go wrong before random dynamics have flipped your qubit completely and you have no chance of detecting it. So you want high d, high k, low n. For a code with k equal to one — an [[n, 1, d]] code — there are n minus one stabilizers. And the number of things that can go wrong, the number of Pauli operators, is three times n — X, Y, and Z for each qubit — plus one for the identity. The identity, when nothing goes wrong, has to give you a unique signal that's not the same as some error: you have to be able to successfully learn that nothing happened. The number of bit strings we can measure is two to the n minus one. So the number of syndromes has to be greater than or equal to the number of errors that can occur, which is 3n + 1. Whenever we have a discrete inequality like this, I'm not going to try to do algebra — I'm just going to start plugging in values. It doesn't make sense to have n equals zero. What about n equals one? One is not greater than four. What about two? Two is not greater than seven. If you keep going, you'll notice that the smallest value for which this inequality can be satisfied is five: n has to be five or more. And as it turns out, there is a [[5, 1, 3]] code that just barely satisfies it — it's got 16 possible stabilizer syndromes, and there are 16 things that can go wrong, because three times five plus one is equal to two to the four.
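The plugging-in-values exercise above is a one-liner to automate. A sketch of the same counting argument (hypothetical helper name):

```python
# Counting bound for [[n, 1, d]] codes: n - 1 stabilizers give 2**(n-1)
# syndromes, which must cover the 3n single-qubit Paulis plus the identity.
def smallest_n():
    n = 1
    while 2 ** (n - 1) < 3 * n + 1:
        n += 1
    return n
```

The loop fails for n = 1 through 4 and first succeeds at n = 5, where the inequality is saturated: 2^4 = 16 = 3*5 + 1.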
And here are the stabilizers — XZZXI and its cyclic shifts — and its logical operators. Well, there are weight-three logical operators, but they look ugly, so I'm going to write some things using ket notation first. Before the stabilizer formalism was invented, there really were people out there messing with nine-qubit kets; luckily, Daniel Gottesman saved us from all this and we don't have to do that anymore. Now we can add all kinds of overhead and still say that we know what we're talking about — again, double-edged swords. Oh yeah, and we mentioned that the number of correctable errors is roughly half the distance: if, in this distance-three code, we were to have one error, we could assign a unique syndrome to it, but if you multiply that error by a logical, you may get a weight-two error that has the exact same syndrome. You say it's less likely — but less likely is not impossible, so there's still going to be some finite probability of failure no matter what code distance you use. The hope is that we decrease the failure probability exponentially in the amount of overhead. Alright, so that's one code; Kieran has used it. Did we do anything with the Shor code? Not yet — but the rest of the talk is basically going to be about what kinds of codes you can construct, and how easily. So, should I talk about classical coding theory now? No — first we're going to do the code with a single check of each type. It's got n minus 2 logical qubits, which is a very large number of logical qubits — basically the maximum you can have — but it's only distance two, and you'll see why. Its stabilizers are X's across all the qubits and Z's across all the qubits, so you can tell that something has gone wrong, but not where — and that's why it's distance two. So there's one construction. These codes get used all the time, and they get used to construct bigger and better codes, so it behooves us to know about them.
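The saturation claim about the [[5, 1, 3]] code can be verified directly: its four stabilizers commute pairwise, and the identity plus the 15 weight-one Paulis produce all 16 syndromes with no collisions. An illustrative sketch (helper names are my own):

```python
# Check on the [[5,1,3]] code: XZZXI and its cyclic shifts commute, and the
# 16 "things that can go wrong" map one-to-one onto the 16 syndromes.
def commutes(p, q):
    anti = sum(a != 'I' and b != 'I' and a != b for a, b in zip(p, q))
    return anti % 2 == 0

STABS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def syndrome(err):
    return tuple(int(not commutes(s, err)) for s in STABS)

# Identity plus every weight-one Pauli on five qubits: 1 + 3*5 = 16 errors.
errors = ["IIIII"] + ["I" * i + p + "I" * (4 - i)
                      for i in range(5) for p in "XYZ"]
```

Every error gets a distinct four-bit syndrome, which is exactly what "just barely satisfies the bound" means.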
Okay, now we're going to get into what Kieran was talking about: concatenation. Concatenation has got to be my favorite code construction — it generalizes, it does all kinds of stuff, and there are rules that, once you learn them, you can break. What we saw Kieran do was take the encoding Clifford for a single code: it's got one input wire, and what comes out is some three-qubit state. Then we take each of the qubits that come out and encode it into some new code, and with this nice tree-like diagram we can start to synthesize bigger and better codes. When we do this — okay, let's imagine doing it with two arbitrary codes. Where's that... here we go. Concatenation takes a low-level code, with n_L, k_L, and d_L — L for low-level — and a high-level code with, predictably, n_H, k_H, d_H. For every physical qubit of the high-level code, you need an entire block of the low-level code. And if your low-level code encodes, say, six qubits, and you want to encode into a [[5, 1, 3]] code, it's not enough to take five qubits out of the six-qubit block and put them into the encoding circuit — your error probabilities will get messed up and it will not work; we can get into it later in the Q&A if you like. What you have to do is get six blocks of the [[5, 1, 3]] code, like this guy, and take one logical qubit out of each of your six-logical-qubit blocks and feed those in independently. That's basically to ensure that, in order for something to go wrong at the high level, it has to go wrong on separate blocks at the low level. Okay — I can see people are getting confused, so I'll give you the nucleus of this. You have X-bar 1 equal to XIXI, and X-bar 2 equal to XXII. If I multiply these operators, I get IXXI. So if I want to flip one of the logical qubits, I need to flip two physical qubits.
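The "nucleus" above — products of logical Paulis staying low weight, and the parameter counting for concatenation — can be sketched in a few lines. The function names and the product lower bound on distance are my own framing, not the lecture's:

```python
# Pauli string products (ignoring global phase), weights, and the naive
# concatenated-code parameter count.
def mul(p, q):
    out = []
    for a, b in zip(p, q):
        if a == b:
            out.append('I')
        elif a == 'I':
            out.append(b)
        elif b == 'I':
            out.append(a)
        else:                # two distinct non-identity Paulis give the third
            out.append(({'X', 'Y', 'Z'} - {a, b}).pop())
    return ''.join(out)

def weight(p):
    return sum(c != 'I' for c in p)

def concat_params(low, high):
    # [[nH*nL, kH*kL, >= dH*dL]] for (interleaved) concatenation.
    (nL, kL, dL), (nH, kH, dH) = low, high
    return (nH * nL, kH * kL, dH * dL)
```

Multiplying XIXI by XXII gives IXXI — still weight two, which is the point about logical errors having different relative probabilities. And `concat_params((4, 2, 2), (4, 2, 2))` gives the (16, 4, 4) parameters discussed next.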
If I want to flip the other logical, I also need to flip two qubits — and if I want to flip them both, I still need to flip only two qubits; I don't have to flip four. So things go wrong at the logical level with different relative probabilities than they go wrong at the physical level, and that's why, at least in this naive construction, you use a full block for every physical qubit of the top-level code. And of course those physical qubits are replaced by k_H logical qubits in every block, so you wind up with k_H times k_L logical qubits. The benefit of putting up with all that rigmarole is that you get distance d_H times d_L. So when you want to construct higher-distance codes — something with distance bigger than 2 or 3 — you can start taking products. Now, I wonder if I should draw the circuits... I will leave figure 1 for people looking ahead in the notes, as soon as I post them, but I will write out the stabilizers of the [[4, 2, 2]] code concatenated with itself. If I take this [[4, 2, 2]] and concatenate it with itself, I wind up with a code that was in a paper in 2022, I think — a distance-4 codes paper that's now state of the art. It may seem confusing, but you're now doing legitimate, up-to-date research. Alright, the stabilizers of this code: let me start at the top. It's going to be XXXX and ZZZZ on each of the four blocks. You can see that, even though it's not exponential, a quadratic amount of work can still be a lot; also, a lot of your Paulis wind up being sparse, affecting relatively few qubits. Now there are 8 logical operators to worry about — oh, actually, sorry, we're not even done with stabilizers yet, because this is just 4 independent blocks of the [[4, 2, 2]] code. You can see that because these columns have identities below them — these are completely disentangled blocks, and this is distance 2: if I apply two errors here, I flip one of these logical qubits. I don't have 4 logical qubits yet — I have 8 logical qubits right now. What I have to do in order to finish the construction of the concatenated code is write the
stabilizers of the top-level code in terms of the logical Paulis of the bottom-level code. You can do whatever you want with these logical Paulis — they act just like the physical Paulis — and one thing you can do is take tensor products of them and make stabilizers. Luckily, I have prepared this earlier, so here are the top-level stabilizers written out as logical Z's and X's of the blocks — I can sense the boredom. Alright, have I done all of this for absolutely no reason? No: it's all to demonstrate that, even though we got fully up to date with state-of-the-art techniques for constructing these tableaus in the first lecture, it gets cumbersome pretty quickly. You want to get a computer to do it if you can — this is a hackathon; maybe you can try that. This is also why you often see quantum error correction people work on little esoteric diagrams: low-level operations like the ones I do with my clumsy human brain and hands stop being practical. So: we can draw individual blocks of this [[4, 2, 2]] code. Let me put qubits on these little circles — I have 16 of them in total — and if you stick them on a square, they divide naturally into four groups of four. I'll say that there's an S_X and an S_Z hosted on each of these small squares, and you can see, obviously, that each is four qubits with two stabilizers, and therefore has two logicals. You can even interpret the logicals geometrically: you can put an X-bar here, and you know that the anticommuting Z-bar has to go over here. Then you can stick the other X-bar up at the top, or you can multiply it by this stabilizer so that it overlaps the Z-bar on both qubits — and because it overlaps on both qubits, it commutes. You can stick a Z-bar over here, and you begin to say: my stabilizers are face-like, and my logicals are on the edges, and you can start reasoning geometrically. For example, when I want to produce a weight-four stabilizer on these tiles out of logical operators, I can use X-bar, X-bar, X-bar, and X-bar — so I can have
vertical X-bars and create this sort of octagon in the middle, and I can have Z-bar, Z-bar, Z-bar, Z-bar and create a little tile that crosses it. Because these things overlap on four qubits, they also commute. And I can also do — because there's an X that I can bring down here — an X on this big octagonal tile and a Z on that big octagonal tile, and that completes the stabilizer group, which is a lot easier than writing out a bunch of identities. Z-bar one and Z-bar two commute because where they overlap, it's Z's on both sides; the other ones intersect on points. And I will also have logicals going down the sides of this lattice — Z-bar three and X-bar four, and X-bar three and Z-bar four. Much more succinct, much more compact notation; you can see everything is length four. If you wanted to, you could check out — this is something we're in the middle of at the office at the moment — that if you permute these qubits, there are some permutations that preserve the stabilizer group. If I were to pick this entire lattice up off the board, flip it, and put it back down, you wouldn't be able to tell: the stabilizers would look exactly the same, but some of my logical operators would move around — this thing would move down here, this would move up to the top. So you've got to wonder about what you can accomplish just by permuting the qubits, which you can do with really high fidelity in an ion trap. There are more pros and cons to concatenation. One of the pros is that, in the scenario where you measure stabilizers perfectly and learn some syndrome, there's a decoder where you basically decode the low-level code first — by brute force, adding up the probability of every Pauli and assigning them into groups — and feed that up to the next level. You iteratively decode this gigantic code without ever having to consider all the probabilities of all the sixteen-qubit Paulis; it sort of divides the decoding labor for you, and you can get an optimal
statistical decoder that gives you soft decisions. There's a paper from 2006 on this — I'm in charge of the projects, and one of our projects is replicating the results of that paper. I mean, the research is from 2006, but you know, what can you do in a week. Cons of concatenation — why don't people use concatenated codes all the time? Maybe you want to use a distance-3 code because you don't have that many ions in your trap, so you can't afford lots of overhead, and you want to see how well you can do at low distance — which is still pretty well. Also, as you add more layers, your distance grows exponentially, but so too does the number of qubits, and it's not very long before you're dealing with tableaus you can't write down and diagrams you can't put on the board; things get really cumbersome. So we're going to go to even more formulaic code constructions — we're going to the next section now. The CSS construction for quantum codes: this is another one of the pre-stabilizer constructions that I'm going to do in the stabilizer formalism, just because it's easier. If you read the original work from the 90s, the guy takes a classical error-correcting code and, you know, puts a ket on it — it's nuts. Speaking of classical codes, I now need to do classical coding theory in 10 minutes. There's a lot to classical coding theory; we're going to do a very small but very popular subset, which is linear codes over F2. A classical code typically has a parity-check matrix called H and a generator matrix called G, such that — let's see, in this lecture — H times G transpose will be equal to 0. The rows of G will span your code, and the codewords are the kernel of H. If I took H with rows 110 and 011 and the vector 111, I would have 1 plus 1, which mod 2 — because we're in F2 — is 0, and 1 plus 1 is again 0; so this vector is in the kernel, and vectors like it are the codewords. So this would be H, and then my G would be 111 — where I think you're allowed to leave the all-zeros string implied; at least, that's what I've done in the rest of
these notes. There are more efficient constructions for distance-3 classical codes, and those in the know already know which one I'm going to pick: the [7, 4, 3] Hamming code. Okay, there's a clever trick being used here. If I want to know where an error is, I'm going to multiply this parity-check matrix onto the error string. Why? I should mention that if I take some error e, add some codeword c, and then multiply by the parity-check matrix — these are each vectors, this is a matrix, it's all over bits — well, this is going to be H e plus H c, and H times the codeword is zero, since codewords are in the kernel of the matrix. So I'm directly getting the syndrome of the error, no matter what codeword I was trying to transmit — a property that will turn out to be very convenient later. Now let's imagine that I want H e to be distinct for every weight-1 e that I could put in. Well, if I take some matrix and multiply it by a vector that's all zeros except a one at position j, I just get out column j. So if I want to make a distance-3 code classically, I need all the columns to be distinct and nonzero — take the seven nonzero length-3 bit strings as the columns — and there we have [7, 4, 3]. That's another n-k-d tuple, but with no extra bracket to denote a quantum code — this one's classical, and it's from Richard Hamming. Great. Why am I using this code? Because it has an obscure property. Also, let me bring this in accordance with the notes by flipping the matrix front to back — that's the H of the Hamming code.
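The syndrome trick described above — H(e + c) = He, and distinct nonzero columns pinpointing weight-1 errors — can be checked directly. An illustrative NumPy sketch using the standard column ordering (binary 1 through 7), which may differ from the exact matrix on the board:

```python
import numpy as np

# [7,4,3] Hamming code syndromes: column j of H is the binary expansion of
# j+1, so the syndrome of a weight-1 error literally spells out its position.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def syndrome(v):
    # H(c + e) = Hc + He = He over F2: the codeword drops out.
    return tuple((H @ v) % 2)

c = np.array([1, 1, 1, 0, 0, 0, 0])   # a codeword: columns 1,2,3 of H sum to 0
weight1 = [np.eye(7, dtype=int)[j] for j in range(7)]
```

All seven weight-1 errors get distinct nonzero syndromes, codewords get the zero syndrome, and adding a codeword to an error leaves the syndrome untouched.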
And the G of the Hamming code: it's got 4 orthogonal nonzero codewords as rows, which means you can fit a total of 16 messages in there, and you can determine, given any 7-bit noisy string, whether it's equivalent to a codeword up to some weight-1 error, just by multiplying on this matrix. Cool — but these are classical codes; how do we promote them into quantum codes? Kieran showed us how to do it with repetition codes, and it turns out that generalizes to any classical code. We start with these parity checks, and when you multiply them onto some error, the question is: where does the error overlap with the parity check? If you start multiplying, you see we can now detect X errors that occur in arbitrary locations. For example, for I I X I I I I, you're going to get a syndrome, because it anticommutes with this check and this check, but not this check — there's an identity there. The syndrome you get is equal to the classical syndrome you'd get from a classical bit flip on that column. And you can do the same thing in the X basis. These stabilizers commute — but is it guaranteed that they commute? If I were just to take some random matrix of 0's and 1's, turn it into both Z's and X's, and stack them on top of each other, would the stabilizers necessarily commute? Sleeping guy in the fourth row, do you know? Sleeping guy does not know — but the answer is no. In order to get these stabilizers to commute, the code has to be what's called dual-containing. You'll notice that the parity checks of this code also appear as codewords: these vectors are not only rows of the parity-check matrix, they are also in the kernel of that parity-check matrix. Dual-containing is a very obscure property of classical codes, but we can see that it's related to making the stabilizers commute. There are lots of papers where people construct CSS codes — and this, by the way, is how you construct a CSS code.
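The commutation question posed above reduces to a matrix condition: an X-type check and a Z-type check commute exactly when they overlap on an even number of qubits, i.e. Hx Hz^T = 0 mod 2. A sketch (function name is my own) showing the Hamming code passes and a small counterexample fails:

```python
import numpy as np

# CSS commutation condition: every X-type row must overlap every Z-type row
# on an even number of qubits, i.e. Hx @ Hz.T = 0 (mod 2).
def css_commutes(Hx, Hz):
    return not ((Hx @ Hz.T) % 2).any()

# Hamming [7,4,3] parity checks (columns are binary 1..7). The code is
# dual-containing, so using it for both sides gives commuting stabilizers.
H_hamming = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

# A generic check matrix fails: rows 110 and 011 overlap on one position.
H_bad = np.array([[1, 1, 0],
                  [0, 1, 1]])
```

Using the Hamming matrix on both sides is exactly the dual-containing situation discussed above; the random-looking matrix shows why an arbitrary classical code doesn't work.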
You take two classical codes, each contained in the dual of the other, translate one of them into Z stabilizers, translate the other into X stabilizers, and stack them on top of each other. There are plenty of papers where people construct CSS codes, and they always do some weird, obscure stuff where you can never tell why it's happening. Normally there's one codeword of this code that does not appear in the parity-check matrix, and sure enough, it becomes a logical. So your logical X-bar becomes X's across all seven qubits, and your Z-bar becomes Z's across all seven. That was not as much of a mess as the concatenated thing. These are also not minimum-weight representatives, but they're very symmetric: you can see that if I multiply by the stabilizers, I can wind up with some weight-three X, and likewise there's some weight-three Z I can cook up, and they always anticommute, as they must. This winds up being a [[7, 1, 3]] code — so you're not using as many qubits as the Shor code, which is nice. And it gives you access to — well, sort of difficult access to — a decades-long literature of classical coding theory. There are lots of tricks you can do, like product constructions, for making the stabilizers commute. We're only going to do one trick — because it's the one I understand — for how to make the stabilizers commute, and that is homological and topological codes. Any time you're using a lot of words that end in "-logical," that's how you know you're doing real science. What is the difference between homology and topology? I don't know; all I know is you get good codes. Actually, according to the orthodox definition of what good codes are, these are not even good codes unless you use hyperbolic space — but we're going to do everything Euclidean. Let's consider an arbitrary graph. I don't mean a graph like a plot of a function; I mean like a diagram you would make of a network. So it's got vertices
It's got edges: here's e1, there's another one, it's called e2, and there are some other edges coming out from every vertex. And if you stop at one of the corners of a cycle, it has an incoming edge and an outgoing edge. So if I were to put qubits on these edges, and X stabilizers on these vertices, and Z stabilizers on some of the cycles, oh boy, they would commute. Pretty weird, pretty weird, but we will do any weird thing in order to make the stabilizers commute. Perfect. What is the n of this code? We don't know. What is the k? No clue. What's the distance? Oh boy. But I've made the stabilizers commute. You have to add a little bit more mathematical sanity in order to be able to say what the n, k, and d of such a code are, and here's where I run out of material early and we go into questions. But I'm sure there are going to be questions.

Oh, let's erase this garbage. We're going to do another code that has, like, just as many qubits, but we will not have nearly as bad a time writing out stabilizers, because we're using topology. Let us imagine that the graph we select is going to be a square tiling of a torus. There's a bunch of algebra that I should put here, but this diagram, the torus code, is a special type of homological code; homological codes are a special type of CSS code, so you can see that we're drilling down from the general to the specific, and the torus code is more or less state of the art. There are lots of groups working on the torus code, well, okay, on relatives of the torus code that you can cut and stretch out onto one sheet so you can put everything on a single chip, but the torus code is close enough. Okay, here's how you draw a torus. We go with X stabilizers of weight four on these edges, X X X X, and we're going to wind up with, where have I got a nice square, here's a nice square, so sometimes you see these called plaquettes or faces or tiles; there I will have Z Z Z Z. And sure enough, where a square intersects one of the vertices, it always hits two edges.
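That even-overlap fact is the whole game, and it holds on any graph, not just the torus: a cycle enters and leaves each vertex, so it shares an even number of edges with every vertex star. A toy check on a hypothetical four-vertex graph (plain Python; the graph and the cycle are made up for illustration):

```python
# Sketch: qubits on the edges of an arbitrary graph, X stabilizers on
# vertex stars, Z stabilizers on cycles. They commute because any closed
# loop meets any vertex star in an even number of edges.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a square plus one diagonal

def star(v):
    """Edge indices touching vertex v: the support of its X stabilizer."""
    return {i for i, (a, b) in enumerate(edges) if v in (a, b)}

triangle = {0, 1, 4}  # the cycle 0 -> 1 -> 2 -> 0, a Z stabilizer

print([len(star(v) & triangle) for v in range(4)])  # [2, 2, 2, 0]: all even
print(all(len(star(v) & triangle) % 2 == 0 for v in range(4)))  # True
```

Each overlap is 0 or 2 edges, so the X-type and Z-type operators commute pairwise.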
Let's see, I think the homological/topological part of it is where you say not all cycles in my graph are going to be Z stabilizers, only the topologically trivial cycles, the ones that don't loop around the torus. You can imagine if you were trying to pick up a kettlebell and you tried to, like, palm it: it wouldn't work, but if you grab it by the handle, that works. So if you grab around this, let me do a dotted line through here: loops around here are going to be logical operators, and there are partner loops that I can't draw in 3D but I do know how to draw in 2D. So I'm going to do one more diagram and then we're going to call it a day, well, up to questions.

Let's draw another torus; it's actually my favorite way to draw a torus. So again we will have X stabilizers. By the way, if Pac-Man walked off this edge here, it would come back on here, and if it went over here, it would come back on over here. That's what makes it a torus: you can imagine these different cut edges are wrapped around so they fuse together. I again have Z stabilizers here, and you can read about this in chapter 19 of Lidar and Brun. But Pac-Man is walking back over, right: if I put another X here, the thing moves again, and if I put another X here, this moves over here and cancels with this. So my logical operator is given by sort of, like, walking between sites of the torus, and I have this length-four, topologically non-trivial loop that goes all the way around one of the handles. Likewise, if I put Zs, it activates its vertex, unless I put another Z, Z, Z that wraps all the way around, and I can do the same thing on horizontal edges, Z Z Z Z and X X X X, to wind up with, in this case, a [[16,2,4]] torus code, but in general [[d², 2, d]], because you can stretch the dimensions of the torus: just tile a bigger lattice and get whatever distance you want, 3, 4, 5, 6, 7, 8, 9, very granular. And the stabilizers are always low weight, so they're easy to measure with circuits.
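This bookkeeping can be checked mechanically. A sketch in plain Python, with one caveat: it builds the standard unrotated torus with 2L² edge qubits rather than the rotated d²-qubit layout, and `gf2_rank` is a helper added here just for the count.

```python
# Sketch: torus code on an L x L periodic square lattice, one qubit per
# edge (n = 2 L^2), X stabilizers on vertex stars, Z stabilizers on
# plaquettes. We verify commutation and count logical qubits as
# k = n - rank(stabilizer matrix over GF(2)).

L = 3
n = 2 * L * L

def h_edge(x, y):  # horizontal edge from vertex (x, y) to (x+1, y)
    return (x % L) * L + (y % L)

def v_edge(x, y):  # vertical edge from vertex (x, y) to (x, y+1)
    return L * L + (x % L) * L + (y % L)

def vertex(x, y):  # support of the X stabilizer on vertex (x, y)
    return {h_edge(x, y), h_edge(x - 1, y), v_edge(x, y), v_edge(x, y - 1)}

def plaquette(x, y):  # support of the Z stabilizer on a square face
    return {h_edge(x, y), h_edge(x, y + 1), v_edge(x, y), v_edge(x + 1, y)}

verts = [vertex(x, y) for x in range(L) for y in range(L)]
plaqs = [plaquette(x, y) for x in range(L) for y in range(L)]

# Every vertex star meets every plaquette in 0 or 2 edges: all commute.
print(all(len(v & p) % 2 == 0 for v in verts for p in plaqs))  # True

def gf2_rank(supports):
    """Rank over GF(2) of the 0/1 matrix whose rows are the supports."""
    rows = [sum(1 << i for i in s) for s in supports]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# One dependency among the vertex checks and one among the plaquettes
# (each set multiplies to the identity), so k = n - rank = 2 for any L.
rank = gf2_rank(verts) + gf2_rank(plaqs)
print(n, rank, n - rank)  # 18 16 2
```

The k = 2 that falls out is the "two ways around the holes of a torus" from the Q&A, and it stays 2 no matter how big you tile the lattice.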
That's what makes this code, you know, a nice high-threshold option for near-term memory experiments and state-of-the-art stuff like that. So given that, I will end a little bit early and we can take 10 minutes for questions, or whatever other topics you are interested in in quantum error correction. The crowd is bummed now; these people are ready for dinner. Oh my god, they're gonna have to wait a little longer. But before we have dinner, let's have some questions.

Long story short, with CSS codes there's a transversal CNOT. So you run a CNOT, that means between qubit k in block one and qubit k in block two, and you do that for all k. Then, as long as you've got two blocks of the same CSS code, the logical qubits in block one become entangled with the logical qubits in block two. And that's, that's for block-to-block entanglement. If you want to generate entanglement within a block, you can do that by hook or by crook. So there are some codes where, actually, okay, I should have got this paper done before I came here, but we have a paper coming out in a little bit where we have a code on eight qubits that sit on the vertices of a cube, and if you flip the top face it does a logical CNOT between two of the logical qubits. So that kind of thing happens sometimes. There are also gates that you can do in, well, okay, some people call this measurement-based quantum computing, some people call this lattice surgery, although it doesn't always involve a lattice. If you can projectively measure a joint logical operator, then you can project the system into a subspace. Well, okay, and if you begin with an extra stabilizer that you don't need, you can measure, always measuring something that anti-commutes with your current stabilizers, and gradually your logicals become different. So you can do a logical Clifford by doing these repeated anti-commuting measurements, and you can do that within a block as well, provided that your code has reasonably low distance compared to its stabilizer weight; otherwise you're going to have to use really ugly circuits.
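Going back to the transversal CNOT story: it can be sketched in the binary (x, z) picture. Conjugating by a CNOT sends X on the control to X⊗X and Z on the target to Z⊗Z, qubit by qubit. This toy tracks a single-logical block like the [[7,1,3]]; the weight-4 row used as a stabilizer is just an illustrative example.

```python
# Sketch: transversal CNOT between two CSS blocks, tracked in the binary
# symplectic picture. Per qubit pair, X copies control -> target and
# Z copies target -> control; everything else is unchanged.

n = 7
I = [0] * n

def transversal_cnot(block1, block2):
    """Each block is a Pauli as (x_bits, z_bits); CNOTs run from qubit k
    of block1 (controls) to qubit k of block2 (targets)."""
    (x1, z1), (x2, z2) = block1, block2
    new_x2 = [a ^ b for a, b in zip(x2, x1)]  # X copies control -> target
    new_z1 = [a ^ b for a, b in zip(z1, z2)]  # Z copies target -> control
    return (x1, new_z1), (new_x2, z2)

# An X-type stabilizer of block 1 becomes the same stabilizer on both
# blocks, i.e. a product of stabilizers: the stabilizer group survives.
S = [1, 0, 1, 0, 1, 0, 1]
print(transversal_cnot((S, I), (I, I)) == ((S, I), (S, I)))  # True

# Logical X-bar on block 1 spreads forward, logical Z-bar on block 2
# spreads backward: exactly the conjugation action of a logical CNOT.
Xbar, Zbar = [1] * n, [1] * n
print(transversal_cnot((Xbar, I), (I, I)) == ((Xbar, I), (Xbar, I)))  # True
print(transversal_cnot((I, I), (I, Zbar)) == ((I, Zbar), (I, Zbar)))  # True
```

Because every stabilizer maps to a product of stabilizers and the logicals transform like CNOT conjugation, the physical transversal CNOT is a logical CNOT.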
Circuits like that propagate errors in bad ways and it won't work, but for a distance-4 code with weight-8 stabilizers it ought to be fine. Thank you. And sorry, one more question. So, the fact that the number of logical qubits here is constant, right: there are always two logical qubits, because there are only two ways you can go around the holes of a torus. In order to create families of codes with what's called constant rate, where the number of logical qubits is in a constant ratio with the number of physical qubits, which is a lot better than having just a constant number of logicals, you can use manifolds that have hyperbolic characteristics. So they're sort of like lettuce, where it gets very curly towards the edge because most of its bulk is near the boundary. And, well, okay, lettuce is not closed up into a torus, but if it was, then you would observe that you need many holes in order to connect all of that boundary to itself, and those many holes produce many logical qubits.

Let's see, we're talking about symmetries and how they play a role in finding these different stabilizers. Yeah, are hypergraph product codes, like, really symmetric? I don't think... yeah, I don't know. But symmetries also play a huge role in finding logical gates. So if you want to show that there's some logical Clifford or non-Clifford, then, like, fancier symmetries, other than just, you know, the Pauli group and the Clifford group, can come into it for sure. Thank you. Other questions? Oh, you make me run. I was just wondering, is there any proven... I guess, like we do at the office, you wind up finding that these codes are, like, equivalent to each other under certain transformations. So for example, there's a way to construct the Steane code from a, well, okay, there's a construction of the five-qubit code from a cut-up toric code, where instead of putting the boundaries here and here at 90 degrees to the lattice, some people are crazy and they'll, like, they'll make a grid that's
sort of like this, and then put boundaries, like, here, and if you do it right, I haven't done it right, but you can arrange it so that these four corners each have an edge coming in, and then there's one square in the middle, right? So it looks kind of like this, and that gives you a code with five qubits that's still topological, and it turns out to have the same stabilizer group as the [[5,1,3]] code. Is this important? Nobody knows.

But yeah, should we be using CSS codes or non-CSS codes? Hard to say. CSS codes have, in general, probably a little bit worse rate, and the influence of that property on doing actual error correction in practice is not well understood. But CSS codes tend to have more logical Cliffords: they have logical CNOTs from block to block that you can do transversally. Non-CSS codes don't always have those, so you wind up having to use more awkward and clumsy constructions when you're trying to get a logical CNOT. You can do phase, the remaining Clifford, nobody's favorite Clifford, as long as your generators have weight 0 mod 4, with CSS codes, so you can get all Cliffords transversal. And transversal gates are extremely reliable, especially when you're talking about transversal single-qubit gates. Single-qubit gates have huge fidelity, and if you need two or more of them to fail in order to produce a logical error, then when you're writing out a logical computation and you have transversal single-qubit Cliffords, you can more or less ignore them; you can act like their error rate is zero for the purpose of calculating actual failure rates in the device. So those are all arguments in favor of CSS codes. And then I guess the argument in favor of non-CSS codes is you can save two qubits. But I mean, you'll notice, all right, [[5,1,3]] beats [[7,1,3]], but [[16,2,4]] beats [[5,1,3]]: I'm using more physicals per logical, sure, but my distance is one unit higher, right? So if two errors occur, I'm going to be able to tell. And it's like, you know, Donald Rumsfeld, known unknowns and unknown unknowns: a known unknown is a lot better than an unknown unknown.
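That known-unknown distinction can be sketched with classical codes. A toy in plain Python, under one simplifying assumption: only the seven data bits can flip, so the extended code's parity bit is used purely as a flag.

```python
# Classical analogue of the known-unknown story. The distance-3 Hamming
# code "corrects" weight-2 errors into the wrong bit; adding an overall
# parity bit (the extended [8,4] code, distance 4) flags them instead.

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(error):
    return tuple(sum(h * e for h, e in zip(row, error)) % 2 for row in H)

# Each weight-1 error has its own nonzero syndrome (a column of H)...
weight1 = {syndrome([int(j == i) for j in range(7)]): i for i in range(7)}

# ...but this weight-2 error reproduces the syndrome of flipping bit 2,
# so a distance-3 decoder confidently corrects the wrong bit.
e2 = [1, 1, 0, 0, 0, 0, 0]
print(weight1[syndrome(e2)])  # 2: an unknown unknown, a silent logical fault

# With the extra parity check, weight-1 errors have odd parity while
# weight-2 errors have even parity plus a nonzero syndrome: detected,
# not miscorrected.
flagged = sum(e2) % 2 == 0 and syndrome(e2) != (0, 0, 0)
print(flagged)  # True: a known unknown, an error message you can act on
```

The `flagged` branch is exactly the "start again, or post-select out those garbage runs" option from the lecture.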
So if a weight-2 error happens in a distance-3 code and you think a weight-1 error happened, then you correct, and you put in some unknown logical operator: you've sabotaged the rest of your computation. You might be doing a million more gates, but you've destroyed your state very early on, it's all wasted effort, and you have no idea. With an even-distance code, if two errors happen, then in principle, or with a distance-4 code, if two errors happen, in principle that gives you a distinct signature. You can say, ah, I don't know what to do. It's like getting an error message, rather than having some thing that is not a real error but that you think is correctable. Let's start again, or at least post-select out those garbage runs from whatever subsequent computation you're trying to do on the output, and that can be used to increase, like... Makes sense? Thanks.

There are also a lot of people that will use codes that are not quite CSS but are locally Clifford-equivalent to a CSS code. So if you see, like, "Clifford deformation," normally that's being applied to a CSS code to make the stabilizers something other than all Xs and all Zs, but you still get the same divvying up of error-correction gadgets and you still get the same logical gates; you just have to undo that weird Clifford at the beginning and then redo it at the end. Okay, nice. So, thank you. Any more questions? I think that's good now. Thank you very much, Ben. I respect that this lecture had some more obscure content in it, but it is state-of-the-art stuff, so you're now ready to begin correcting errors, officially baptized, I guess. So we have a 10-minute break; please come back at 5 sharp, because the last session has to start on time due to IT support. Okay, so see you in 10 minutes.