Like in quantum error correction, where the information is cleverly distributed amongst several qubits, the information in this lecture is distributed amongst two different lecturers. So we have Ben Krieger and Kiran Ryan Anderson, and they'll be tag-teaming, giving one lecture after the other. So I think we'll kick off with Ben here. Thank you very much. The company also doesn't let us travel on the same plane. Well, okay. Whenever I'm lecturing I'm reminded of being in kindergarten, talking to kids who are in kindergarten: no matter what you say to these people, they stick their hands up and go "why?" Why are we learning this? And the job of the education system, and the government, is to beat that impulse out of the children, so they never ask why they're learning; they just learn whatever it is you tell them to learn, and then they regurgitate that information at the end of the semester. But luckily your experience, I hope, in government education institutions is now complete, so we can resume asking why, and we can do it right at the beginning of the lecture, so that you'll have an idea of where we're going and what we're trying to achieve. There's a double-edged sword to quantum computing. These devices have exponentially large Hilbert spaces inside of them, which makes them really powerful, right?
There's all kinds of stuff that you can do in that space, as long as you can design some clever measurement at the end, or turn your problem into a wave and apply a Fourier transform to it. But that exponentially large Hilbert space also makes them really tough to analyze. Good luck trying even to find out what the effect of some unitary matrix is on a vector when you're dealing with 200 qubits or something; even with arbitrarily large supercomputers, it's going to be really tough. And if you can't figure out what one gate does, good luck designing an algorithm. Good luck understanding data structures or any of the higher-level abstract things that we want to do to create a real branch of computer science. However, not everything that you want to do with a quantum computer requires its full power. In error correction, me and this guy, we are spending our entire lives trying to figure out how to get a quantum computer to do nothing. And it's very easy to describe that operation: you can do it with one symbol. And this thing is worth a lot of money, because quantum computers can't do it well. They also can't do this: you want to prepare a zero state, you want to do nothing, you want to maybe measure in the X basis? None of these things are possible with fidelity one; you always get some operation that's kind of close to the X measurement, or the zero prep, or the identity. And so, given that the things we want to describe are already trivial until we compose them into big circuits, and we're going to have to add a lot of overhead: how do we come up with some theory of quantum mechanics that allows us to process non-trivial states, highly entangled states for example, in polynomial time?
And today we're going to go over one way to get that done. That way, people call it Gottesman-Knill, after some of its authors. People would say stabilizer operations if they know about stabilizers. Most of the time I would just say Paulis and Cliffords, although, given that this is professional rather than academic, we don't settle on terminology; we just juggle different terms around, and if anybody gets confused we leave them behind for the wolves. All right, so why Paulis and Cliffords? There are many different efficient subtheories of quantum mechanics that you could focus on. Some people do tensor network states, where there's a limited amount of entanglement. That's very good for condensed matter, where things can be at weird angles in different bases, but the amount of entanglement is area-law limited. There's also fermionic linear optics, or matchgates. If you want to get a postdoc, get good at one of these subtheories; you can even design error correcting codes in these subtheories. But why would we do it with Paulis and Cliffords? There are a few reasons. The first is versatility. A lot of the problems that you see in quantum computing, like finding the ground state energy of some Hamiltonian: you look up the Hamiltonian and you see it's a bunch of Pauli matrices. Some people in fault-tolerant quantum chemistry focus on a technique called qubitization, where they take some problem and turn it into a bunch of Pauli matrices. Given that they're doing all that work, maybe we can live nice, easy, lazy lives by just focusing on Pauli matrices. And of course, any time you see fermionic creation and annihilation operators, people would say: ah, do the Jordan-Wigner transform, now it's easy, right?
It turns it into Pauli operators. And then there's quantum error correction. We'll see later, in the next lecture, that if you can correct Pauli errors, then you can correct a lot of stuff. And nobody believes that they're going to get their physical error rates down below ten to the minus five without some kind of protection. Even the topological qubit people are edging toward error correction; they're doing everything but error correction, and I think they think they would get down to ten to the minus six-ish. But if you want to run an algorithm with billions of gates in it, then you can't tolerate thousands of failures distributed somewhere in the circuit, because garbage in, garbage out, and everything's going to become a gigantic analog mess. And for this reason we design codes using Paulis and Cliffords, in order to do a bunch of active error correction in firmware, basically. So it's a layer of the quantum stack (a term which I abhor) that goes between the hardware and the higher-level routines; it sits just on top, so that what you provide to the user, or to the higher-level routine, are virtual operations. Pauli operators can be used to detect Pauli errors (we're going to see this in a minute), and there are these Clifford things that can be used to transform Paulis into other Paulis, and the machinery for moving errors onto ancillary apparatus that can detect them is already neat and tidy. So yeah, basically ease of use is one of the main reasons for this. And all we're going to need to know to start is a little undergraduate quantum mechanics, or maybe first-year masters, and a little group theory, and then we can get going. Really. Okay, so let's do quantum mechanics in fifteen minutes: chapter two of Nielsen and Chuang. If you're not reading Nielsen and Chuang, you're doing it Nielsen and wrong. All right. Oh, I should have started my lecture with a joke, right?
Variational quantum eigensolvers. Okay, that was a good one. All right. Here's the beef: we can represent quantum states using vectors. This is a postulate of quantum mechanics, or an axiom, which is a fancy word for something I will say without knowing how to prove. And that looks kind of like some normalized two-vector, no big deal. There are operations as well, which are usually unitary, inshallah. You could have, for example, something like one half of the matrix (1, root three; minus root three, 1), and you could apply such a unitary to such a state. It only really gets complicated when you start dealing with measurements, so let's deal with measurements now. These are going to be operators, observables called O, that we will decompose. I think everybody's familiar: these are Hermitian matrices, they have real eigenvalues, you can diagonalize them. Anybody who's not familiar with this should pull the parachute right now and get sucked out of the lecture, because you might get lost later. Although, it's okay, you're young, you can afford to be lost for an hour. It's fine. You would decompose one of these as O equals the sum over j of a real eigenvalue lambda_j times a projector Pi_j. And when you perform a measurement, here is what happens according to the postulates of quantum mechanics. You receive an outcome lambda_j. Sometimes multiple lambda_j's are identical, and then you won't know which j corresponds to which lambda_j, so you project into a subspace rather than onto a state. But you get out these lambda_j's, and you get one out with probability p_j equal to psi-dagger Pi_j psi, whatever your initial state psi was that you're doing the measurement on. Basically, this is an inner product between a rank-one projector onto psi and this projector here.
So you can think of it as how much of the state is in that subspace, basically, because it defines a norm. And then, under the action of this projective measurement, when you discover that the outcome was lambda_j, psi gets mapped to Pi_j psi, which is just the projection of the state into that subspace, divided by the square root of p_j; all that denominator does is normalize the state again. So this is conditional: if you were assured that the measurement had occurred but not told the outcome, you would get a mixture of these things, a convex combination with scalars p_j. All right. We are not going to use the Heisenberg picture, but I have to tell you about the Heisenberg picture anyway. All will be revealed, no worries. Let's imagine a really complicated experiment in which some state is prepared, it's then exposed to some unitary evolution, and then we measure out some observable. Can everybody read my handwriting, by the way? Even people in the back are nodding; okay, we're good. Right, so what's going to happen? The only things that we observe are the eigenvalues that come out at the end, these little lambda_j's, and each shows up with a probability p_j where I have to stick in the evolved state now. So I conjugate with the evolved state: p_j equals psi-dagger U-dagger Pi_j U psi. And then the projection takes U psi to Pi_j U psi over the square root of p_j. And I could just rub this out and stick the dagger on there, and now we see what happens to the initial state: it gets measured against U-dagger Pi_j U. Okay, so two things could have happened. It may be that psi evolved to U psi and then got measured, or it may be that the projectors, and therefore the operator itself, evolved backwards, under this U-dagger Pi_j U, and then were measured against psi. And the outcomes of physical experiments that obey the laws of physics don't distinguish between these two cases.
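The two pictures can be checked numerically. Here is a minimal NumPy sketch; the specific choices (|0⟩ initial state, Hadamard evolution, Z observable) are mine, not from the lecture:

```python
import numpy as np

# The experiment above: prepare |psi>, evolve by U, measure observable O.
# Here |psi> = |0>, U = Hadamard, O = Z with projectors onto |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Pi = {+1: np.outer(ket0, ket0), -1: np.outer(ket1, ket1)}  # eigenvalue -> projector

def prob(psi, P):
    """p_j = <psi| P |psi> for a projector P."""
    return float(np.real(psi.conj() @ P @ psi))

psi = ket0
# Schroedinger picture: evolve the state forward, then measure it.
p_schrodinger = {lam: prob(H @ psi, P) for lam, P in Pi.items()}
# Heisenberg picture: evolve the projectors backwards, U^dagger Pi_j U,
# and measure them against the *initial* state.
p_heisenberg = {lam: prob(psi, H.conj().T @ P @ H) for lam, P in Pi.items()}
```

Both dictionaries come out identical, which is exactly the point: operator evolution and state evolution predict the same outcome statistics.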
They're equivalent. And so you can use operator evolution to say what has happened in some physical scenario, rather than having to evolve vectors. We're going to see an instance of this again, although it's not quite the same as the Heisenberg picture, although the paper this comes from is called "Quantum computing in the Heisenberg picture" or something. At any rate, we're going to start evolving operators in a minute. Apologies to the masters in HPC program. You should all go get a masters in HPC, but I need blackboard space, so it's going down. HPC: criminally underrated topic, though. So now that we've covered quantum mechanics in fifteen minutes, let's do ten minutes of group theory. This is what it's like being in quantum computing: you wake up and you come to work not knowing whether it's going to be algebraic number theory, atomic physics, or this stuff, and you take whatever the day has to offer. So if you're kind of good at a lot of different types of math, apply for a job or something. All right. Now again, I'm not a mathematician, so I'm going to start with the punchline: group theory is a way to cheat at matrix multiplication. Using group theory, you don't have to write out all of the elements of the matrices in order to calculate matrix products; you can just look them up in a table, and for big enough matrices that can mean you take polynomial time, or even linear time (a professional's polynomial, not n to the ten or something), rather than exponential time. But a mathematician would say, you know, a group is a set endowed with an operation, et cetera, et cetera. And, well, okay, we have to do that now.
So: a group is a set endowed with an operation. There's some set G with elements g, and there's an operation which takes two elements of G and gives you a third element. Typically you write it with a little circle, or you don't write it at all, so I would say something like g' g'' = g''', and you can think about it like multiplication, even though for some groups it acts more like addition. The point of group theory is that you're abstracting away which mathematical operation it is; you're just talking about its properties. Every group, by definition, has to have some identity element I such that I g = g I = g for all g in the group, and every element has to have an inverse g⁻¹ such that g⁻¹ g = g g⁻¹ = I. Okay. Group theory has a ton of applications; there are a lot of different groups. You can do Rubik's cubes with group theory, you can do robot arms, you can do cryptography, you can do whatever you want, but we're going to look at a group with four-ish elements, which is just some Pauli matrices. So our identity is going to be the two-by-two identity matrix, and we're going to have X, Y and Z. We've already seen these matrices pop up a ton of times, so the fact that we will no longer have to multiply them, by the end of the lecture, is going to be very advantageous: any time you see one, you'll be able to start thinking about it in linear time, rather than trying to remember where the minus i goes, for example. Don't worry, that one's correct. And we will see that these things form a group, because they're closed-ish under multiplication. So for example, X times Y: you'll see how tedious this is; I already know the answer, but look how much I'm writing. Okay, it's iZ. That's a bit of a problem.
I would rather not have the i there. But luckily, you can always define the group operation that you're going to use to be matrix multiplication where I delete the phase afterwards. That's fully mathematically legitimate: it provides a map from two elements of a group to a third. It's a group operation; deal with it. And then we can do a whole table (I forget what these tables are called, but you can make a whole table), where the little circle means I'm deleting the phase. So I take I, X, Y, Z across the top and I, X, Y, Z down the side. The identity row and column just give back I, X, Y, Z. And for the rest: every Pauli times itself is the identity, X times Y is Z, X times Z is Y, and Y times Z is X. So now I no longer have to multiply these matrices whenever I see them: if I've done the work once of showing that this set is closed under multiplication, then there's only a finite number of things the product can be, and I can look them all up in this table. Okay, I can see that you're not rolling in the aisles yelling hallelujah; it's not very impressive that I know how to get rid of two-by-two matrix multiplication. But don't worry, the notes go on: we will also be able to do this for tensor products of Pauli matrices. Oh, I left the cloth over here; I don't even need it, actually. Let's just make another weird ugly line and do tensor products. Did I already say that I would make these notes available? So if you want to write things down you can, but everything I'm saying is roughly in here, so writing is optional, if you're just not that kind of person. Okay. So a tensor product is how you describe, mathematically, operations that occur in parallel. And it's easiest to see what the properties of the tensor product are if you do it diagrammatically. I'm going to use this funny symbol: who's seen the ⊗ before? LaTeX \otimes? Most people, fantastic. Callum saw it; very useful that you've seen it. It's the target audience, guys I work with.
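The phase-deleting "little circle" product can be built once by brute force and then used as a pure lookup table. A small NumPy sketch (names like `circ` are mine):

```python
import numpy as np

PAULIS = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def circ(a, b):
    """The 'little circle' product: multiply two Paulis and delete the
    scalar phase, returning the label of the resulting Pauli."""
    prod = PAULIS[a] @ PAULIS[b]
    for name, P in PAULIS.items():
        # The product equals phase * P for some phase in {1, -1, i, -i}.
        if any(np.allclose(prod, ph * P) for ph in (1, -1, 1j, -1j)):
            return name
    raise ValueError("product is not a Pauli")

# Build the whole table once; after that, no more matrix multiplication.
table = {(a, b): circ(a, b) for a in PAULIS for b in PAULIS}
```

For instance `table[('X', 'Y')]` is `'Z'`, since the true product XY is iZ and the phase is deleted.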
I work with Keep your hand down So a tensor b is represented by right a b So it's when I have A composite system. It's got two subsystems in it. I do the operation a to one and b to another And uh, you can see really easily one of the properties of the tensor product, which is how it composes with the series product Uh, and it's so important that I got to write it high up, but then I'm like leaving real estate Who cares? All right, so a o times b c o times d If I write this as a diagram, right, that's a b And then the next uh, have I done this the wrong way around? Whatever doesn't matter what these things are called It'll be correct in your notes Uh, so if I join these wires together We can see right just by you know sort of changing the bracketing Right, this is a o times b c o times d, but if I instead Oh, like if a if a tensor product war pants, would it wear them like this or like this, right? If I instead do this You can see that this is equal to ac times bd Where time runs in the opposite direction or whatever, uh, but it doesn't matter because the Names are consistent from here to here Uh, and this is a very valuable property because it allows us to Right anytime I I can multiply two by two poly matrices and get the answer just from table lookup I can do that here and here And if I had a longer tensor product, I could do the same thing And it would just be a linear number of steps rather than some exponentially big matrix Super convenient I'm going to go all the way over here to introduce a very tightly related property Because I'm out of board But that's also okay Most of this lecture will take the form of pro tips If you have eigenvectors And you take a tensor product, right? 
so it's A ⊗ B, and I feed a product of eigenvectors into it, then what comes out is the original vector times the product of the eigenvalues, ab. And by the way, people often write tensor products just inside the ket, as a big list of symbols, so you can do stuff like |1001⟩: little compact kets that denote whole bit strings. So any time I take the tensor product of two operators, I get the scalar product of the eigenvalues. Fantastic. All right, so with those pro tips out of the way, let's make our efficient subtheory of quantum mechanics. Did I define what a subtheory is? No. So: it's just a sub-everything, right? Your theory of quantum mechanics includes a set of states, a set of operations, and a set of measurements. So we're going to take a subset of the states, a subset of the operations, and a subset of the measurements. And, well, okay, any old time you do that, that's a subtheory. But is it efficient? Well, if you can describe all this stuff that I'm erasing right now using a polynomial amount of space and time, then it's efficient, and we will see how to do all these things. You can go read about this if you want to, so let me recommend some reading over here. Daniel Gottesman's PhD thesis: it's from 1997, but it's still the most up-to-date reference on the basic theory, quant-ph/9705052. And if any of you have to write a PhD thesis soon, that's more or less how it's done. You can also go read Aaronson and Gottesman. Actually, if you just go to Daniel Gottesman's website at Perimeter, there's a big list of everything that you should read instead of listening to me, but I won't tell you where that is right now; you can learn that later, focus for the time being. All right. So if a Pauli is a matrix, how do I describe a state using that matrix? Aren't states vectors?
Sure. But I'll just say that the states that we're going to use are the ones that are, as we call it, stabilized by some Pauli. That's to say: when you multiply by the Pauli, you get the plus-one eigenvalue; it's a +1 eigenstate of some Pauli operator. If I do this for one qubit: X times the plus state that we saw earlier is equal to plus. For Y there's something like |+i⟩ (or |i+⟩), which is equal to one over root two times (1, i); let me make sure the dot is right. And Z is maybe the easiest to describe, because its +1 eigenstate is the zero state, (1, 0). And if I want to, I can also describe the minus-one eigenstates of these operators, which I'm going to do in a tricky way: by putting the minus sign on the operator. So I'll say that −X stabilizes the minus state, which is one over root two times (1, −1). And likewise I could put a minus sign on the Y or on the Z; if I put the minus sign on the Z, I get the one state, which, when you multiply by Z, just picks up that 180-degree phase. So this works perfectly for single-qubit Pauli matrices. Let's do it for tensor products. What we notice is: if you take X ⊗ X on |+⟩ ⊗ |+⟩, then you get |++⟩ back, no problem. But XX acting on |−−⟩ (the notation is getting continually more compact) is going to be equal to (−1)² times |−−⟩, which is just |−−⟩. So if I have a single Pauli that acts on multiple qubits, it doesn't specify a state on its own: it specifies a subspace. How do we solve that problem?
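All the single-qubit stabilizer relations just listed, and the two-qubit XX example, can be verified directly. A NumPy sketch (the helper `stabilizes` is my own name):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

plus   = np.array([1, 1], dtype=complex) / np.sqrt(2)    # stabilized by +X
plus_i = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # stabilized by +Y
zero   = np.array([1, 0], dtype=complex)                 # stabilized by +Z
minus  = np.array([1, -1], dtype=complex) / np.sqrt(2)   # stabilized by -X
one    = np.array([0, 1], dtype=complex)                 # stabilized by -Z

def stabilizes(P, psi):
    """True if P|psi> = +|psi>, i.e. psi is a +1 eigenstate of P."""
    return np.allclose(P @ psi, psi)

# Tensor products: XX stabilizes both |++> and |-->, since (-1)^2 = +1,
# so a single multi-qubit Pauli pins down a subspace, not a state.
XX = np.kron(X, X)
pp = np.kron(plus, plus)
mm = np.kron(minus, minus)
```

Both `stabilizes(XX, pp)` and `stabilizes(XX, mm)` come back `True`, which is exactly the "specifies a subspace" problem.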
Ah, first we should talk about how big this subspace is. If you want to keep specifying states that are in that subspace, maybe get it down to one state, then you've got to know how big these subspaces are. All right, here's where it gets a little bit tricky. I'm going to denote the set of eigenvalues of an n-qubit Pauli by Λₙ for a minute. Every one of these Pauli matrices has a plus-one eigenvalue and a minus-one eigenvalue, except for the identity, which has two plus-one eigenvalues, and minus the identity, which has two minus-one eigenvalues. But I can divide this up. If I picture the Pauli as being a giant tensor product, and I know that the eigenvalues are scalar products, I can pop the first term off the tensor product and say that this set is going to be equal to {−λ for all λ in Λₙ₋₁} (the eigenvalues of the remaining piece of the tensor product of the Pauli), union {λ for every λ in Λₙ₋₁}. Is everybody still following? Yeah. So I have some giant Pauli, like X Z Z X I, and I'm going to say the eigenvalues of this thing are the eigenvalues of the first factor times the eigenvalues of the rest. And if you try a bunch of examples, you will quickly become satisfied that in each one of these sets, every time there's a plus one there's a minus one, and vice versa. So each one of these sets is just going to be half plus-ones and half minus-ones, unless you do all identities or minus all identities, right?
Right: the all-identity operator will have all plus-one eigenvalues, and minus the identity will have all minus-one eigenvalues. So each one of these operators cuts the space in half. So now, with your little CS brains fully active, filled with potato salad and caffeinated, you're thinking: aha, I'm going to use a logarithmic number of Paulis. Each one cuts the space in half, so in order to go from 2ⁿ down to some constant, I only need something like n of them, rather than the 2ⁿ coefficients I would need in order to write out the ket. And you would be right, because you are a fictional version of yourself that lives only in my lecture notes. All right, so when a Pauli acts on the state like that, we say it stabilizes the state. Let me erase some more stuff. Look at me checking the time, like I'm not just going to go until I run out of notes. Any time you've got two operators that are both stabilizers, we'll call them S_j and S_k, their product acts on some psi; if psi is stabilized by S_k, this is equal to S_j psi, and if this vector is also stabilized by S_j, this is equal to psi. So any state that's stabilized by these operators individually is also stabilized by the product. So we don't have to keep track of every possible Pauli that could be stabilizing our vector: most of the time we only have to keep track of a generating set, and then we can reproduce every element of the group à la minute, whenever we need them. Right.
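The space-halving claim can be checked by counting dimensions: the joint +1 eigenspace of commuting stabilizers S₁,…,S_k is the image of the projector ∏ᵢ(I + Sᵢ)/2, and its dimension is the trace of that projector. A sketch under my own choice of three-qubit stabilizers:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def stabilized_dim(stabilizers, n):
    """Dimension of the joint +1 eigenspace: the trace of the
    projector prod_i (I + S_i)/2."""
    P = np.eye(2 ** n, dtype=complex)
    for S in stabilizers:
        P = P @ (np.eye(2 ** n) + S) / 2
    return round(np.real(np.trace(P)))

# Three qubits: each independent stabilizer cuts the 2^3 = 8-dim space in half.
S1 = kron_all([Z, Z, I])
S2 = kron_all([I, Z, Z])
S3 = kron_all([X, X, X])
```

One generator leaves a 4-dimensional space, two leave 2, three pin down a single state; and adding the redundant product S₁S₂ does not cut any further, which is the independence caveat coming up next.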
I've already given the game away in the next section of the notes: stabilizers form groups, and if you're a group theory head, those groups are abelian, which means all the elements commute. And the reason that no anticommuting Paulis are allowed is that, with this product relation, if we had two Paulis in here that anticommute, S_j S_k = −S_k S_j, I could start taking products: S_j S_k S_j S_k; switch the middle two and I get −S_j² S_k². The product of every tensor product of Pauli matrices with itself is the identity (you can see it on the diagonal of the table), so this is equal to minus the identity, which, as we said, does not have any plus-one eigenstates, so it can't stabilize anything. Right: anticommuting Paulis in your set means your stabilizer space is zero-dimensional, go home. So no anticommuting Paulis allowed. And if you're going to write down a generating set, they should all be independent under multiplication. We will see quantum error correcting codes in which they are not, and there are reasons why you would measure redundant ones: just in case there's a measurement error, you might want to say, I have multiple different ways to reconstruct my generating set, so I can check whether they're all consistent. But if you want to accurately describe the size of a state space by counting the number of operators, then you should make sure that each one really does divide the space in half, and is not just one of these products that doesn't divide anything further, given that S_j and S_k already stabilize the state in question. Perfect. So that's how you describe states. It takes about n² bits of memory, depending on how many bits per letter you use. And let's see: it's the 18th of April (very useful information), and it's also about 2:06. I think we can afford one example before we move on. Actually, okay.
Yeah, let me torture the students. All right. If you're really sharp, then you'll already know this one, but here's a state that we've seen already: (|00⟩ + |11⟩)/√2. Who can tell me a Pauli that stabilizes this state? I will not continue until I see some hands. Who can tell me a legit, non-trivial Pauli that stabilizes the state? I joke around; you don't get to joke around. My bag is secure; your career hinges on this. ZZ? ZZ, okay. So we multiply ZZ on |00⟩ + |11⟩: on the |11⟩ part you get, sorry, (−1)², so that stabilizes the state. Very good. Do we need any more, or is ZZ the only stabilizer of this state? YY? Someone came to play today. But I'm not going to put YY; I'll put XX. Why am I allowed to do XX instead of YY? Yeah: because when you multiply them, you just generate whatever you want. And you'll notice that I've ignored the i's, because there would be two i's that make a minus one. Right, so if you flip, you get |11⟩ here and |00⟩ here, and then, the addition being commutative, that gets you back your original state. All right, good, people are roughly getting it. Fantastic.
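The exchange above checks out numerically: ZZ and XX both stabilize (|00⟩ + |11⟩)/√2, while YY picks up the two i's the lecturer mentions and gives eigenvalue −1, so it is −YY that sits in the stabilizer group. A quick NumPy verification:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The state from the board: (|00> + |11>)/sqrt(2).
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

ZZ = np.kron(Z, Z)
XX = np.kron(X, X)
YY = np.kron(Y, Y)
# XX times ZZ is (XZ) tensor (XZ) = (-iY) tensor (-iY) = -(Y tensor Y),
# which is why multiplying the two generators gives -YY, not YY.
```

So ZZ and XX generate the full stabilizer group of this state, and the student's YY answer was right up to that sign.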
So now we're done describing the states of our efficient subtheory; let's start going from state to state with unitary operations. So we will consider U acting on psi. We might pray that there will be some other stabilizer that stabilizes this new state; I'm going to guess that there is, and then derive that there is. So let's guess that there's some S_j' that stabilizes U psi. That means that S_j' U psi, by hypothesis, is equal to U psi. However, we know psi is stabilized by something, so we can stick a stabilizer in there (common proof technique). We can also stick in an identity, and that identity can look like U-dagger U: so we get U S_j U-dagger U psi. And then, oh look, this is our original state. So here is the stabilizer that stabilizes U psi: U S_j U-dagger. And you'll notice my dagger is in a different place than it was when I was doing the Heisenberg picture, because these operators are evolving forward in time; so it's a Heisenberg-ish picture. Right, also: do these stabilizers commute? Do they generate a group? Let's prove that they do. Generation first: U S_j U-dagger times U S_k U-dagger. Everyone can already see where I'm going with this: the U-dagger U in the middle is eliminated, and this is equal to U S_j S_k U-dagger. So any time I take the product of conjugated stabilizers, that's the same as if I conjugated the product of the stabilizers. So this is a stabilizer of the new state, because the product was a stabilizer of the old state; take products of stabilizers, and
I've still got stabilizers. My new operators form a group, just as my old operators did. And because products are preserved, group commutators are preserved, because those are just fancy products; so it's still abelian. Fantastic. In order to do real physics, this is often enough. But if you want everything to remain efficiently computable, a sufficient (but maybe not necessary) condition, which I will nevertheless use today, is that these operators U S U-dagger also have to be Pauli operators. So, all right: we've got the group homomorphism property. Mathematicians: I said homomorphism, you're welcome. Right, so the maps from Paulis to Paulis that are also unitaries: we will take this to define the Clifford group. There are a lot of things to say about the Clifford group. I think it's officially the normalizer of the Pauli group inside the unitary group, but nobody who works with these things every day talks like that, so I often lose track of the terminology. There are, however, three especially important Cliffords that you will want to know about in your life. At least if you're here; I mean, you don't need to tell your family over the holidays or whatever, unless they're also involved. But let's get in. So: name, diagram, unitary matrix. Just one last time I'm going to write out unitary matrices, and then we're done with unitaries. And Pauli maps.
Oh yeah, okay, one of the things that you need to say about Cliffords (maybe I shouldn't give you these notes; I should update them first): because of this homomorphism property, whenever I want to calculate the effect on a product, I can just calculate the effect on individual generators and then take the product afterwards. That means that in order to specify a Clifford fully, I don't need to know all 4ⁿ possible things that may happen if I stick in some arbitrary Pauli; I only need to know the effect of the Clifford on a set of Paulis that generate the Pauli group under multiplication. And that generating property gets us down from an exponential number of Paulis to a linear number of generators. So when I complete these tables now, you will understand why they are complete. Okay: Pauli maps. CNOT is a Clifford. Some people put an X on the target; I usually put the bullseye thing. Notation is not consistent, and if you don't like it, tough. The unitary matrix is consistent, though: in the zero sector of the control we have the identity, and in the one sector we have this X, so it's a controlled-X operation. But we quit writing those things and we begin writing these: X ⊗ I maps to X ⊗ X; I ⊗ X goes to I ⊗ X; Z ⊗ I goes to Z ⊗ I; and I ⊗ Z, well, goes to Z ⊗ Z. So this is interesting: the thing we think of classically as "if this, then do that" propagates information, in this basis, from target up to control. And this is the reason why, if you surround a CNOT with Hadamards, you get a CNOT running in the opposite direction, which shows up a little more visibly in these Pauli tables. Now, speaking of Hadamard: Hadamard is also a Clifford. Luckily, notation for this one is fairly consistent. Last time I'm going to write that unitary matrix, fantastic. And this one takes X to Z and Z to X, and it takes Y to −Y. You may or may not need to use that minus sign; watch yourself. And then finally there's a
gate which has no fixed name. Some people will say P, you can say R; in this set of notes I've called it S. You can think of it as the square root of Pauli Z (sometimes it also comes with a factor of two, so you always have to watch yourself). Whether you write it as S or P or R, its unitary matrix is diag(1, i), and it takes X to Y and Z to itself. These three gates, as you compose them, can generate any Clifford. Will I prove this? No, it would take too long. But let's do a pretty involved example. I have 15 minutes left and we still have to do measurements; I'm going to run long, but that's okay, everyone's having fun, I've stayed focused, I haven't goofed around. Let's prepare an entangled state. I'm just going to write its stabilizers in a big table: XXXX, ZZII, IZZI, and IIZZ. Let's see; I don't want to play "guess what the state is," but if you're trying to generate this state from a bunch of zeros, here's what I want to do: map ZIII, IZII, IIZI, and IIIZ to those stabilizers. I'd like to find a Clifford that does this. There is an algorithm for doing this; will I tell you what the algorithm is? No. I'm going to do what you're going to do when you start working with Paulis and Cliffords, and just try some stuff. I know, for example, that there's entanglement in here that I'd like to break. You can tell when one of these states is entangled because it has stabilizers of weight greater than one that intersect each other. If you ever see one of these tables that's block diagonal (some stabilizers up here, some stabilizers down there, but identities in the off-diagonal blocks), then you're dealing with two unentangled blocks. Maybe you can prove this to yourself; we don't have a ton of time.
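All three Pauli tables, and the Hadamard-sandwich trick, can be verified by brute force. Here is a minimal sketch in Python with numpy; none of this code is from the lecture, and the convention that the CNOT's control is the first tensor factor is my own choice:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)          # a square root of Z
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def conj(U, P):
    """Propagate the Pauli P through the unitary U: P -> U P U^dagger."""
    return U @ P @ U.conj().T

# Hadamard: X <-> Z, and Y picks up a minus sign.
assert np.allclose(conj(H, X), Z) and np.allclose(conj(H, Z), X)
assert np.allclose(conj(H, Y), -Y)
# S gate: X -> Y, Z -> Z.
assert np.allclose(conj(S, X), Y) and np.allclose(conj(S, Z), Z)
# CNOT (control first): X spreads down to the target, Z spreads up to the control.
assert np.allclose(conj(CNOT, np.kron(X, I2)), np.kron(X, X))
assert np.allclose(conj(CNOT, np.kron(I2, X)), np.kron(I2, X))
assert np.allclose(conj(CNOT, np.kron(Z, I2)), np.kron(Z, I2))
assert np.allclose(conj(CNOT, np.kron(I2, Z)), np.kron(Z, Z))
# Hadamards on both qubits reverse the CNOT's direction.
HH = np.kron(H, H)
CNOT_rev = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                     [0, 0, 1, 0], [0, 1, 0, 0]], dtype=complex)
assert np.allclose(HH @ CNOT @ HH, CNOT_rev)
print("H, S, and CNOT Pauli tables verified")
```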
All right, but let's take a look at what would happen if I ran a CNOT between qubits one and two of this state. In order to figure out what happens to the X ⊗ X on those two qubits, I multiply the outputs of the two generators: XX times IX is XI, so my XXXX goes to XIXX. Let me label what's going on here: CNOT from one to two. (Let me be real fancy: CNOT is self-inverse, CNOT times CNOT is the identity, which means that if I read this table backwards, it's still legit.) ZZ goes to IZ, so ZZII goes to IZII. Does nothing happen to the next Z? No: IZZI goes to ZZZI, and IIZZ stays IIZZ. Now ZZZI looks like it's entangled, but I'm allowed to multiply stabilizers in order to generate a new generating set. So I'm going to multiply ZZZI by IZII in order to put an identity in the second slot, and now I can see that I have a one-by-one block: identities everywhere in that row and that column. So that qubit is separated out. This is one state on qubits one, three, and four, times the stabilizer state of Z, i.e. a zero, on qubit two. And we can see that the remaining table looks the same as the one we started with, except on three qubits instead of four. So if I were to apply a CNOT from one to three, the same thing would happen: another zero popping off. This continues until I get down to one qubit, where only that weight-one X stabilizer is left. So I can derive (and you can run this circuit forwards for yourself) that if I do this, what I get out is the stabilizer state I wanted. I encourage you all to try this as an exercise in order to keep up. I mean, not like right now, but snap a picture, take a note, whatever it is you do. That state also turns out to be pretty important in quantum error correction. It's called a cat state, because it's a superposition of two macroscopically different bit strings. Now that we've seen a Clifford circuit, I'm immediately going to start using them as an abstract proof technique, because we're going from zero to a hundred today.
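That tableau update can be reproduced by brute-force conjugation on 16-by-16 matrices, just to confirm the hand calculation. A sketch with my own helper names, using 0-indexed qubits:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli(s):
    """Tensor product of a Pauli string like 'XXXX' (only I, X, Z needed here)."""
    return reduce(np.kron, [{'I': I2, 'X': X, 'Z': Z}[c] for c in s])

def cnot(n, c, t):
    """CNOT on n qubits with control c and target t (0-indexed)."""
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    def embed(ops):
        facs = [I2] * n
        for q, op in ops:
            facs[q] = op
        return reduce(np.kron, facs)
    return embed([(c, P0)]) + embed([(c, P1), (t, X)])

U = cnot(4, 0, 1)                     # CNOT from qubit one to qubit two
conj = lambda P: U @ P @ U.conj().T   # propagate a Pauli through the gate

# The four stabilizer generators, before and after the CNOT:
assert np.allclose(conj(pauli('XXXX')), pauli('XIXX'))
assert np.allclose(conj(pauli('ZZII')), pauli('IZII'))
assert np.allclose(conj(pauli('IZZI')), pauli('ZZZI'))
assert np.allclose(conj(pauli('IIZZ')), pauli('IIZZ'))
# Multiplying the new generators ZZZI and IZII puts an identity in slot two:
assert np.allclose(pauli('ZZZI') @ pauli('IZII'), pauli('ZIZI'))
print("tableau update matches the hand calculation")
```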
Oh, we started a few minutes late, so I get extra time; no one can stop me. Let's imagine. Won't you? Let's imagine. (I could use a cloth to wipe down the board. It's just like dealing with my kitchen: I have a five-meter-long table and the thing I need is always at the other end.) Let's imagine that, just like that last circuit, which is all the way over there, we've got some abstract Clifford C that we're going to use to prepare a stabilizer state. At the input we're going to put a bunch of weight-one Z stabilizers (these are like little zero states), and we're going to prepare a big ket over here. What if I were to leave out a few of those input stabilizers? Well, then there's a whole set of stabilizer states that we could be describing: if we put a Z in here, that would be one state; if we put an X in here, that would be another state; if we put a Y in here, that would be a different state. So every stabilizer that we remove doubles the size of the space that we can describe. This is how we do codes, because with a code you have to leave some Hilbert space in which to do computation; it's not enough to use a single state. So we are always leaving some qubits free at the end of this abstract encoding map to produce logical qubits at the output, whose operators are the stabilizers that would appear if I did put a Z or an X in here. Those become your logical operators, and they have the same commutation relations as the Paulis, because they're unitarily equivalent to the Paulis. So they act just like Pauli operators, but on logical qubits. (I'm looking at the wrong set of lecture notes. Fantastic.) So when you want to do a table, or tableau (a lot of people say tableau; if we said "table" it would be too English and low-class, you've got to have a tableau).
So let's talk about one of my favorite codes. We will have two stabilizers, called Sx = XXXX and Sz = ZZZZ, and then we'll have some logical operators: X̄₁ (the bar denotes a logical operator), Z̄₁, X̄₂, and Z̄₂. These are going to be X̄₁ = XXII, Z̄₁ = IZIZ, X̄₂ = XIXI, and Z̄₂ = IIZZ. We can see that X̄₁ anticommutes with Z̄₁ but commutes with the other two, for example, and every one of these logical X's or logical Z's has the appropriate commutation relations with the other Paulis in the same table. So this code sits on four qubits and describes two logical qubits. And we can see there's a hint of fault tolerance here: in order to flip the state of one of these logical qubits, it's not enough that I flip one of the bits; I have to do two. So two (hopefully independent) things have to go wrong in order for something to go really wrong when you're using this code. And you can read all of that information off of the tableau. That's not always that easy, but we will make it easier in the third lecture.
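To double-check the commutation claims, here is a small sketch using the binary (symplectic) representation of Pauli strings rather than matrices. The function names are mine, and I'm assuming the standard choice Sx = XXXX and Sz = ZZZZ for the two stabilizers:

```python
def to_symplectic(s):
    """Map a Pauli string over {I, X, Y, Z} to its (x-bits, z-bits) pair."""
    xs = [1 if c in 'XY' else 0 for c in s]
    zs = [1 if c in 'ZY' else 0 for c in s]
    return xs, zs

def commute(a, b):
    """True iff the two Pauli strings commute (symplectic inner product is 0)."""
    ax, az = to_symplectic(a)
    bx, bz = to_symplectic(b)
    return sum(x * w + y * v for x, y, v, w in zip(ax, az, bx, bz)) % 2 == 0

Sx, Sz = 'XXXX', 'ZZZZ'
X1, Z1, X2, Z2 = 'XXII', 'IZIZ', 'XIXI', 'IIZZ'

# The stabilizers commute with everything in the table.
for op in (Sz, X1, Z1, X2, Z2):
    assert commute(Sx, op)
for op in (X1, Z1, X2, Z2):
    assert commute(Sz, op)

# Each logical X anticommutes with its own logical Z and commutes with the rest.
assert not commute(X1, Z1) and not commute(X2, Z2)
assert commute(X1, Z2) and commute(X2, Z1)
assert commute(X1, X2) and commute(Z1, Z2)
print("commutation table checks out")
```

The symplectic trick is the same one stabilizer simulators use internally: commutation reduces to a parity of bitwise ANDs, with no 2^n-dimensional matrices in sight.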
All right, let's do the last topic for this lecture: measurements, the hardest part of dealing with the stabilizer subtheory. Luckily, we already established that any tensor product of Pauli matrices has +1 eigenvalues, −1 eigenvalues, and that's it, which means you can describe the measurement with the pair of projectors (I + P)/2 and (I − P)/2. Each of these is a Π±, and those are the only two outcomes you can get from the measurement. And when you do a measurement on a stabilizer state, what happens depends on the commutation relations between the new Pauli you're measuring and the existing stabilizers. Let's do the easiest case first. If the Pauli is in the stabilizer group, then the probability of getting +1 is just ⟨ψ|(I + P)/2|ψ⟩. But P is a stabilizer, so P|ψ⟩ = |ψ⟩, and the identity applied to |ψ⟩ is |ψ⟩, so this is just (⟨ψ|ψ⟩ + ⟨ψ|ψ⟩)/2. One plus one is two, so the probability of getting the +1 outcome is one. And it should come as no surprise that when you project a stabilizer state onto one of its own stabilizers, you just get that state back: it acts as the identity, both as a measurement and when you apply the operator. Fantastic. What if the Pauli commutes with all of the stabilizers, but is not itself a stabilizer, so it's one of these logical operators? (The joking around has finished; we've gotten fully serious about measurement. Even in software that I've written, I would sometimes just skip measurements if I could.) All right, so we don't know what the probability is a priori, because P here is some new operator, and |ψ⟩ can be some arbitrary superposition of eigenstates of these operators.
It doesn't have to be a +1 or −1 eigenstate with respect to any of them. But we can talk about the output state: |ψ⟩ maps to (I ± P)/2 applied to |ψ⟩, divided by √p± (sure, it's normalized, why not). Now look at S times (I ± P). Here we get lucky: S commutes with I, S commutes with P, and so it commutes with I ± P, and I can move it through. S stabilizes |ψ⟩, and so |ψ⟩ maps to a state that is still stabilized by every S in the stabilizer group. However, there's a new operator in town. Well, okay, there's almost a new operator in town. If I put P in front: P times (I + P) equals P + I, which equals I + P. But P times (I − P): since P squared is the identity, this is P − I, which equals −(I − P). So if we get the −1 outcome, P does not stabilize the state; but if I carry that minus sign all the way back, we can say that −P stabilizes the state when I get the −1 outcome. So when you measure an operator that commutes with all of the stabilizers but is not itself a stabilizer, you're going to get the +1 or −1 outcome, and that Pauli, either the plus or the minus version of it, is going to join the stabilizer group. You're projecting the state space down into a space half the size by adding a new constraint onto the state. And now, the final boss of the lecture. (It looks like there are two pages left, but there are just concluding remarks on the last page.) If the Pauli we're measuring anticommutes with a stabilizer, then we're going to get to see some more of these commutation relations.
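Before the final boss, the two cases so far can be checked by brute force. A minimal sketch, assuming the one-stabilizer code generated by ZZ on two qubits, with |00⟩ as the particular codeword; the state and helper names are my own illustration:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
XX, ZZ = np.kron(X, X), np.kron(Z, Z)
psi = np.array([1, 0, 0, 0], dtype=complex)        # |00>, stabilized by ZZ

# Case 1: measure an operator that is in the stabilizer group.
proj0 = (np.eye(4) + ZZ) / 2
p_plus = np.real(psi.conj() @ proj0 @ psi)
assert np.isclose(p_plus, 1.0)                     # the +1 outcome is certain
assert np.allclose(proj0 @ psi, psi)               # and the state is untouched

# Case 2: measure XX, which commutes with ZZ but is not in <ZZ>.
for sign in (+1, -1):
    proj = (np.eye(4) + sign * XX) / 2
    p = np.real(psi.conj() @ proj @ psi)
    assert np.isclose(p, 0.5)                      # not fixed a priori; 1/2 for |00>
    post = proj @ psi / np.sqrt(p)                 # normalized post-measurement state
    assert np.allclose(ZZ @ post, post)            # the old stabilizer survives
    assert np.allclose(sign * XX @ post, post)     # and +/-XX joins the group
print("case 1: certain outcome; case 2: the signed Pauli joins the stabilizers")
```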
All right, so first off, let's say there's some old stabilizer that anticommutes with the measured operator. Write the outcome probability as p± = ⟨ψ|(I ± P)/2|ψ⟩ and insert that stabilizer S, which fixes |ψ⟩: p± = ⟨ψ|(I ± P)/2 S|ψ⟩ = ⟨ψ|S (I ∓ P)/2|ψ⟩ = p∓. The plus-minus has become a minus-plus as S moved past P, so both outcomes have probability one half, and the post-measurement state is not stabilized by S. Fine. Let's call this stabilizer Sj. So if you anticommute with at least one stabilizer, then at least that one stabilizer does not stabilize the state anymore; it's removed from the table. But let's take the product Sj Sk of two stabilizers that we will assume both anticommute with the measured operator (and I'll drop the 2√p± just so you can see what's up). This product commutes past I ± P: we flip the sign once for one factor, and then we do it again for the other. Both Sj and Sk stabilize the original |ψ⟩, so you get rid of them, and you wind up with the fresh state (I ± P)|ψ⟩ again. So the post-measurement state is stabilized by the product of any two stabilizers that anticommute with the measured operator. So in order to figure out the new stabilizer generators: you make a list of everything that anticommutes, you pick one (your choice, because multiplying generators yields the same group), you blame that stabilizer for all of the anticommutation, and you multiply the remaining anticommuting stabilizers by it until you get the biggest possible set of commuting generators. That set of commuting generators, together with the measured operator, becomes your new stabilizer group, and the single whipping-boy stabilizer leaves the table. It has to go sit at the kids' table. Now, with that, do we have just enough time?
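The same sign-flipping argument can be watched numerically in the smallest possible example, which I'm supplying here as a sketch: |0⟩ is stabilized by Z, and the measured X anticommutes with it, so both outcomes are forced to probability one half and ±X evicts Z:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)            # |0>, stabilized by Z

for sign in (+1, -1):
    proj = (np.eye(2) + sign * X) / 2
    p = np.real(psi.conj() @ proj @ psi)
    assert np.isclose(p, 0.5)                    # forced to 1/2 by anticommutation
    post = proj @ psi / np.sqrt(p)               # the state |+> or |->
    assert np.allclose(sign * X @ post, post)    # +/-X is the new stabilizer
    assert not np.allclose(Z @ post, post)       # Z no longer stabilizes the state
print("anticommuting measurement: 50/50, and the old stabilizer is evicted")
```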
I think we have just enough time to do one last circuit simulation. We're going to put it all together and do a five-qubit circuit in just a few minutes. I mean no offense to Catherine, but I think if we had tried to do a five-qubit circuit in the first lecture, with 32-by-32 unitary matrices, we would not have had a good time; we would have run out of board. All right, so let's do a circuit that Kiran and I both care deeply about. It's muscle memory at this point, I write this thing down so often: I put a zero state in on an ancilla, run CNOTs from the data qubits into it, and do a Z measurement at the end. So my initial stabilizer tableau is going to have one stabilizer and then four logical qubits: the stabilizer IIIIZ on the ancilla, and logicals X̄ᵢ = Xᵢ and Z̄ᵢ = Zᵢ on each of the four data qubits. (Did I leave enough space? I do have enough space.) So here's my initial code. You can see this is a bad code on its own: you only need to flip one thing in order to flip a logical qubit. But this is a tiny component of a fault-tolerance circuit, so other measurements will be taking care of that. Now, the CNOTs will map any weight-one X that starts on a data qubit to X ⊗ X: this X is going to map to XX, and likewise for each of the others. And the Z's on the data qubits? Nothing happens to them, because they commute past the dot, no problem. The only Z that gets changed is the initial Z stabilizer on the ancilla: it becomes a product of Z stabilizers on the four data qubits and the ancilla at the output. So, mapping under the CNOTs (I've got the corner of the board here, so I'll leave a little awkward space): ZZZZZ as my one stabilizer, and then XIIIX, ZIIII, IXIIX, IZIII, IIXIX, IIZII, IIIXX, IIIZI. And that's it; everything else is an identity. Yep, these columns line up. Do they make sense?
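Here is a hedged brute-force check of that propagation with explicit 32-by-32 matrices; the helper names are my own and qubits are 0-indexed, with the ancilla last:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)

def embed(n, ops):
    """Place the listed single-qubit operators on n qubits, identity elsewhere."""
    facs = [I2] * n
    for q, op in ops:
        facs[q] = op
    return reduce(np.kron, facs)

def cnot(n, c, t):
    return embed(n, [(c, P0)]) + embed(n, [(c, P1), (t, X)])

# The circuit: a CNOT from each data qubit (0-3) into the ancilla (4).
U = reduce(np.matmul, [cnot(5, c, 4) for c in range(4)])
conj = lambda P: U @ P @ U.conj().T

# The ancilla's Z stabilizer grows into ZZZZZ...
assert np.allclose(conj(embed(5, [(4, Z)])),
                   embed(5, [(q, Z) for q in range(5)]))
# ...each data-qubit X picks up an X on the ancilla, and each data Z is untouched.
for q in range(4):
    assert np.allclose(conj(embed(5, [(q, X)])), embed(5, [(q, X), (4, X)]))
    assert np.allclose(conj(embed(5, [(q, Z)])), embed(5, [(q, Z)]))
print("output tableau verified")
```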
Yeah, great. Now I'm going to measure that Z on the ancilla at the end. That operator commutes with the one stabilizer I have, so it's just going to join the table as a stabilizer. I'm also going to assume that I got the +1 outcome and not mess with the minus sign, because you can mess with the minus sign on your own time. However, it is going to anticommute with these logical X operators. Oh no, I didn't cover that case: what happens if it anticommutes with a logical operator? Well, what you should remember, from all the way back with that big Clifford that I'm using to prepare states, is that a logical operator is just sort of a stabilizer-in-waiting. So I do the same thing: I multiply within the logical group in order to find stuff that commutes with the measurement operator. A logical plays the same role as a stabilizer here, because it would be a stabilizer if we measured it. So my stabilizer group is going to go up to ZZZZZ and IIIIZ. And then for my logicals, I'm going to multiply everything by this guy, XIIIX. So that whole logical qubit leaves the group, and the rest turn into XXIII, XIXII, and XIIXI. Now we'll notice that there are no logical operators left affecting the fifth qubit, and only one stabilizer affecting it: I can remultiply the stabilizers (IIIIZ times ZZZZZ gives ZZZZI) to see this. So this state is now separable. And what this means is that I have now carried out, through this circuit, a projective measurement of this weight-four operator, and I've added and removed one qubit in so doing. These are just the remaining three logical qubits, which still satisfy that primitive stabilizer code: still distance one, right?
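The whipping-boy bookkeeping we just did by hand can be sketched as a little routine on Pauli strings. This is my own illustration: it ignores the ±1 signs entirely, and for simplicity it lumps stabilizers and logicals into one list of generators:

```python
def anticommutes(a, b):
    """Two Pauli strings anticommute iff they differ, non-trivially, at an odd
    number of positions."""
    return sum(1 for x, y in zip(a, b)
               if x != 'I' and y != 'I' and x != y) % 2 == 1

def multiply(a, b):
    """Product of two Pauli strings, ignoring the overall phase."""
    table = {frozenset('XZ'): 'Y', frozenset('XY'): 'Z', frozenset('YZ'): 'X'}
    out = []
    for x, y in zip(a, b):
        if x == 'I':
            out.append(y)
        elif y == 'I':
            out.append(x)
        elif x == y:
            out.append('I')
        else:
            out.append(table[frozenset((x, y))])
    return ''.join(out)

def measure_update(generators, measured):
    """New generating set after measuring `measured` (signs not tracked)."""
    bad = [g for g in generators if anticommutes(g, measured)]
    good = [g for g in generators if not anticommutes(g, measured)]
    if not bad:
        return good + [measured]          # commuting case: it just joins
    scapegoat, rest = bad[0], bad[1:]     # blame one generator for everything
    return good + [multiply(scapegoat, g) for g in rest] + [measured]

# The lecture's example: the logicals X_i X_5 all anticommute with IIIIZ.
gens = ['ZZZZZ', 'XIIIX', 'IXIIX', 'IIXIX', 'IIIXX']
new = measure_update(gens, 'IIIIZ')
assert new == ['ZZZZZ', 'XXIII', 'XIXII', 'XIIXI', 'IIIIZ']
print(new)
```

With XIIIX chosen as the scapegoat, the surviving products XXIII, XIXII, XIIXI match the board, and ZZZZZ with IIIIZ remultiplies to ZZZZI, separating the ancilla.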
You'd still have a bad time if you tried to use this in real life, because you only have to do one Z in order to mess your qubit up. All right, and that is it. Now you know how to do everything that we do on a daily basis: commuting Paulis through Cliffords and looking at the effects of circuits. Well, okay, we also have to program a lot in order to get computers to do all of this stuff for us. But, I mean, our grad students could do it, so let us know if you're interested. And with that, I think Kiran can take it from here and introduce the kinds of things we're actually interested in. Are there any questions? You understand everything perfectly? Crystal clear, obviously? Okay, absolutely. Okay, fantastic. So we'll now hear from Kiran. Do you want to have a break? Okay, a lot of nodding. Okay, let's take a break for a few minutes.