Oh, that looks kind of funny up there. Sorry, too many screens. Okay, cool. Hello, please take your seats, and welcome Ciarán Ryan-Anderson back for our last lecture today; he'll wrap up this very intense introduction to quantum error correction. Thanks. All right, so I'll basically start off where Ben left off. However, I'm not a masochist, or at least I don't like writing codes this way. Take the same code, the toric code. Instead of having the X checks on the vertices, the qubits on the edges, and the Z checks on the plaquettes, we put the qubits on the vertices; that's where I like to put them, because they look like little points and remind me of qubits. Then we have these XXXX checks and ZZZZ checks. It's the same code, just a different representation, called the medial mapping of the code. We can take this toric code that Ben described, and if we want to do everything in 2D, which is more natural for some devices, we can take a slice of the torus and add extra checks along the boundaries. That gives another version of the code: the planar surface code. In particular, this is often known as the rotated planar surface code, because the first version of it had more checks; you can slice the lattice in different ways, but that's not important here. This one is a [[9,1,3]] code, so it has one logical qubit, unlike the torus, which has two. (The pointer doesn't seem to work; no problem.) Here we have a logical X operator going from top to bottom.
So here, instead of the version of the logical X operator that is Pauli X on every qubit along a loop, you can make lower-weight ones: just a string of Pauli X's, an X applied to each qubit from top to bottom. These are equivalent logical operators. As long as you connect the top and bottom boundaries with a set of X Paulis, it's equivalent to a logical X operator. Likewise, the logical Z runs horizontally, connecting the boundaries from left to right; as long as you have a set of Z Paulis that does that, it's equivalent to logical Z. In these diagrams, similar to the ones Ben used, the darker gray polygons represent operators where we measure a joint X observable on the qubits touching that polygon, and the lighter polygons a joint Z operator. So this is the set of stabilizer generators in pictorial form, which is a lot nicer to work with than writing out Pauli strings for larger codes. We have n minus 1 unique stabilizer generators, where n is the number of data qubits. As Ben mentioned, each independent Pauli operator divides the space in half, so the code space has dimension 2 to the power of n minus (n minus 1), which cancels to 2 to the 1: a logical two-level system, one logical qubit. That's what the 1 in [[9,1,3]] means. Ben also described this code: this is the distance-three color code, also known as the Steane code, and this is the distance-five version of the color code. Notice there are lots of colors, while the surface code is all black and white. It is related to the surface code.
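As an aside, that counting argument (logical qubits k = n minus the number of independent stabilizer generators) is easy to check numerically. A minimal sketch, using one common choice of check layout for the [[9,1,3]] rotated surface code; the qubit indexing below is my own illustration, not a canonical labeling:

```python
import numpy as np

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    m = np.array(mat, dtype=np.uint8) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]       # swap pivot row into place
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                   # eliminate this column elsewhere
        rank += 1
    return rank

n = 9  # data qubits, indexed 0..8 on a 3x3 grid
x_checks = [[0, 1], [1, 2, 4, 5], [3, 4, 6, 7], [7, 8]]   # joint-X checks
z_checks = [[0, 1, 3, 4], [2, 5], [3, 6], [4, 5, 7, 8]]   # joint-Z checks

# Binary symplectic form: each generator is a row (X part | Z part).
rows = []
for support, offset in [(x_checks, 0), (z_checks, n)]:
    for qs in support:
        row = [0] * (2 * n)
        for q in qs:
            row[offset + q] = 1
        rows.append(row)

k = n - gf2_rank(rows)   # logical qubits = n - independent generators
print(k)                 # -> 1: a 2**1-dimensional code space
```

All eight generators here are independent, so k = 9 - 8 = 1, matching the counting in the lecture.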
It's very similar; it's also a topological code. Here, though, each polygon doesn't represent just one type of Pauli observable to measure; it represents two. Once again the vertices are the data qubits, and each polygon represents both the observable where you measure joint X's on all of its qubits and the observable where you measure joint Z's on them. So it's a self-dual code, and it's a CSS code. CSS means you can write the set of generators as either purely Z-type or purely X-type Paulis, rather than a mixture of X and Z within a generator, and it's self-dual because the X and Z checks look exactly the same. Of course, you can multiply those together and get Y, so you could also measure joint-Y observables on those polygons. Unlike the surface code, there's more symmetry here, which is both a benefit and a con for the color code. It means there are many more possible logical operators. These instances are often called triangular codes, in contrast to the square patches of the surface code, and each of the three boundaries of the triangle can carry an X, Y, or Z logical operator: you can apply a set of Paulis connecting the boundaries that are all X's, which is equivalent to a logical X operator; all Y's, equivalent to logical Y; or all Z's, equivalent to logical Z. So it doesn't have the polarization that the surface code has; a slightly different characteristic.
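The self-dual CSS structure is easy to sanity-check. For the Steane code, the three X-type generators and the three Z-type generators have identical supports (the rows of the Hamming-code parity-check matrix), and a CSS code is consistent exactly when every X check overlaps every Z check on an even number of qubits, so that the joint-X and joint-Z observables commute. A quick sketch:

```python
from itertools import product

# Steane code check supports (0-indexed qubits); the same three sets serve as
# both the X-type and the Z-type generators, which is the self-duality.
checks = [
    {3, 4, 5, 6},
    {1, 2, 5, 6},
    {0, 2, 4, 6},
]

# CSS consistency: each X check overlaps each Z check on an even number of
# qubits, so every pair of generators commutes.
valid = all(len(a & b) % 2 == 0 for a, b in product(checks, repeat=2))
print(valid)   # -> True
```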
So I should say, yeah, this starts out by introducing some familiar codes that you might study in quantum error correction, but then we're going to go deeper: what do the circuits look like, and how does all of that work? Okay, so there are actually a bunch of different surface codes and a bunch of different color codes you can construct. It turns out that for surface codes, where you have only X-type or Z-type observables and the qubits on the vertices, the graphs that support these codes are two-colorable with degree at most four, so each vertex has at most four edges. You can cut codes out of these graphs, as long as you make sure you have the right number of stabilizers. Similarly for the color code: lots of pretty colors, but really it comes down to three-colorability. If you want both X- and Z-type checks on each polygon and everything has to commute, you end up needing a three-colorable graph. That's just to note that when you hear about color codes and surface codes, there are a lot of other versions of those codes that don't tend to be studied; you can do a lot more with them. There are also non-uniform tilings of the plane that you can cut codes out of. And the geometry doesn't really matter: these are all equivalent graphs. The 4.8.8 nomenclature just means that if you sit at a vertex and walk around it, the polygons touching that point have four sides, eight sides, and eight sides. All of these other drawings are equivalent graphs.
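A small aside on that vertex-configuration notation: 4.8.8 and 6.6.6 name regular tilings of the plane, and a configuration of regular polygons can close up around a vertex exactly when their interior angles sum to 360 degrees. A quick check (the third configuration is a counterexample I added for contrast):

```python
def interior_angle(sides):
    """Interior angle of a regular polygon with the given number of sides."""
    return 180.0 * (sides - 2) / sides

for config in [(4, 8, 8), (6, 6, 6), (4, 4, 4)]:
    total = sum(interior_angle(s) for s in config)
    print(config, total, "closes up" if abs(total - 360.0) < 1e-9 else "gap left")
```

4.8.8 gives 90 + 135 + 135 = 360 and 6.6.6 gives 120 + 120 + 120 = 360, so both tile the plane; three squares alone leave a gap.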
So you can lay these things out however you like. If you have a computer with long-range connectivity, like trapped ions, you can have all sorts of geometries, so connectivity isn't really so important; these pictures just help us visualize what the stabilizers look like in an abstract manner. A lot of codes, though not all, belong to families that you can parameterize by the distance of the code, which we've talked about before. We mentioned that (d minus 1)/2, so just slightly less than half the distance, is the number of Pauli flips the code is guaranteed to handle. A code may be able to handle more than that, but it's not guaranteed: a distance-three code, for example, can potentially handle some weight-two errors, but not all weight-two errors. Up to that t value it can definitely correct; beyond that, there's only some probability that it can. Anyway, as you make these codes larger and larger, they can correct more and more errors, so they suppress the logical error rate, because we're spreading the information across many qubits. And since errors pop in with a certain probability, a bigger code means it's easier for us to squash them; we can wait longer, or we have more of a chance to squash them before they build up.
Here's an example of a family, or rather two different families, of the color code. The distance-three member is the Steane code, which Ben talked about, and you can construct larger versions of the color code by cutting codes out of the graphs I mentioned. Common ones are this 4.8.8 lattice on the top, or the 6.6.6 lattice on the bottom, which is an ominous number. Okay. You might have heard this word threshold, and maybe you've heard of the concept of a pseudo-threshold. The pseudo-threshold refers to the performance of a single code instance, not the entire family. In this graph, the x-axis is the physical error rate, the y-axis is the logical error rate, and the dotted line is where those two error rates equal each other. There's some point at which the code gets overwhelmed: the probability of higher-weight errors happening is too high, and when you try to do your corrections you end up applying the wrong operation, applying logical operations instead of fixing the faults before they become an error. If the physical error rate is low enough, though, you start suppressing the error rate: you get a p-squared effect, or for higher-distance codes, like distance 5, you go from p to p-cubed, and so on, suppressing the noise at higher and higher powers. The place where this curve hits the line where the logical and physical error rates are equal is known as the pseudo-threshold: "pseudo" because the full threshold refers to the family of codes. I sort of mentioned this before for the repetition code: as the number of bits goes toward infinity, as long as fewer than half of those bits flip, the majority vote sends you to the right outcome. It's the same sort of idea with quantum error correction codes.
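As a toy illustration of that crossing point (the prefactor A below is invented for the sketch, not measured from any real code): if a single distance-3 instance has logical error rate roughly A·p², the pseudo-threshold is the p where A·p² = p.

```python
# Hypothetical distance-3 instance: p_L ~ A * p**2, with A a made-up prefactor
# (think of it as the number of confusable weight-2 fault combinations).
A = 35.0
pseudo_threshold = 1.0 / A   # solve A * p**2 == p

for p in [0.001, 0.01, pseudo_threshold, 0.1]:
    p_L = A * p ** 2
    verdict = "code helps" if p_L < p else "no gain"
    print(f"p={p:.4f}  p_L={p_L:.4f}  {verdict}")
```

Below the crossing the encoded qubit beats the bare one; above it, encoding makes things worse.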
What happens is that as the distance increases, you get a steeper and steeper slope, so you suppress the noise more: you go from p-squared to p-cubed to p to the fourth, and so on. Eventually, as the distance goes toward infinity, you effectively get a step function, and the dotted line where that step function sits tells you that so long as the physical error rate is below it, there's some sufficiently large code with which you can suppress the noise arbitrarily. Of course you won't live in the asymptote, but you can choose a large enough code to do whatever you need, given your physical error rates. So obviously, if you can reduce your physical error rates, that's great and you want to do that, because you get more out of the code; you drop faster. This is also good to be aware of if you're running simulations to work out these thresholds, like you sometimes see in papers. The curves for the first few distances, like three, five, and seven, wiggle about; eventually they settle down into a trend, and you can fit those curves to a function and work out the crossing point that corresponds to the threshold. But the first small codes have this finite-size effect, with the curves still settling down, so you can't really use distance three, five, and seven to work out your thresholds. You have to start at maybe nine or eleven and work up, in case you do those sorts of simulations in the future; maybe for the hackathon, maybe not. And this is roughly how the logical error rate scales.
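That rough scaling, p_L ≈ A·(p/p_th)^((d+1)/2) with exponent (d+1)/2 = t+1, behaves like this numerically. A and p_th below are illustrative stand-in values, not measured thresholds:

```python
# Illustrative stand-in values; real A and p_th depend on the code, circuits,
# decoder, and noise model, as discussed later in the lecture.
A, p_th = 0.1, 0.01

def logical_rate(p, d):
    # exponent (d + 1) // 2 is t + 1: one more error than the code guarantees
    return A * (p / p_th) ** ((d + 1) // 2)

below = [logical_rate(0.001, d) for d in (3, 5, 7)]   # p < p_th
above = [logical_rate(0.02, d) for d in (3, 5, 7)]    # p > p_th
print(below)   # shrinks with d: larger codes win below threshold
print(above)   # grows with d: larger codes lose above threshold
```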
You can see that if you look past p_th, which represents the threshold error rate, and p, the physical error rate, the exponent (d plus 1)/2 is really the same thing as t plus 1. So, like I mentioned, a distance-three code suppresses the noise to order p to the t plus 1, so roughly p-squared, and so on. This is a shameless plug: I do have a package that does quantum error correction; you can take a look at it and potentially use it for simulations. Anyway, I think Ben briefly touched on measurements, but I'll talk about them again in a different way, mainly focusing on how these circuits work whenever we want to measure these observables for generic Pauli operators. The dot connected by a line to a P represents a controlled Pauli: if the control qubit is in the zero state, apply the identity; if it's in the one state, apply the Pauli. Put an X in there and that's the equation for the CNOT, the controlled-X gate, whatever you want to call it. So we have this circuit here, which is one way of measuring Paulis: we introduce an ancilla, apply a Hadamard, do this controlled Pauli, apply another Hadamard, and then measure in the Z basis. Why does that work? I think Ben maybe showed one way to prove this, and I'll prove it another way; it might not hurt to see it again, and it's a fairly straightforward proof. Starting on the left-hand side, we're in the state zero tensor psi, where the zero is going to be our control bit; then we apply the Hadamard, so we get a plus state.
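Before the algebra, here's a small numerical sanity check of the circuit we're about to analyze: a numpy statevector sketch (with an assumed ancilla-first qubit ordering) showing that Hadamard, controlled-P, Hadamard, then a Z-basis measurement of the ancilla yields outcome probabilities equal to the squared norms of (I ± P)/2 applied to the input state, which is where the proof will land.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def measure_pauli_probs(P, psi):
    """Outcome probabilities (plus, minus) for: |0>_anc (x) |psi>, H on the
    ancilla, controlled-P, H on the ancilla, measure the ancilla in Z.
    The ancilla is taken as the first tensor factor."""
    n = len(psi)
    state = np.kron([1, 0], psi).astype(complex)          # |0>_anc (x) |psi>
    state = np.kron(H, np.eye(n)) @ state                 # Hadamard on ancilla
    CP = np.block([[np.eye(n), np.zeros((n, n))],         # |0><0| (x) I
                   [np.zeros((n, n)), P]])                # |1><1| (x) P
    state = np.kron(H, np.eye(n)) @ (CP @ state)          # controlled-P, Hadamard
    return np.linalg.norm(state[:n]) ** 2, np.linalg.norm(state[n:]) ** 2

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.6, 0.8])   # an arbitrary normalized single-qubit state

for P in (Z, X):
    plus, minus = measure_pauli_probs(P, psi)
    # Compare with the projector picture || (I +/- P)/2 |psi> ||^2:
    assert np.isclose(plus, np.linalg.norm((np.eye(2) + P) / 2 @ psi) ** 2)
    assert np.isclose(minus, np.linalg.norm((np.eye(2) - P) / 2 @ psi) ** 2)
    print(round(plus, 3), round(minus, 3))   # Z: 0.36 0.64; X: 0.98 0.02
```

The same function works unchanged for a multi-qubit P (pass the 2^k-by-2^k Pauli matrix and a length-2^k state), which previews the joint-parity measurements used for the checks.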
So this plus state is the thing we've seen a bunch of times, the superposition of zero and one. Then we apply this controlled-Pauli gate. I'm doing a couple of steps at once here: first I convert the plus into the superposition of zero and one; then, if the control is zero, we don't apply the Pauli, we just do the identity, so we just have psi (that's the first part); and if the control is one, we apply the Pauli, according to that equation above. Okay, hopefully that's straightforward. Then we apply the Hadamard again, which flips the control bit back, and the minus state is just the same thing with a minus sign in place of the plus. If we move the one over root two from the plus and minus states over to multiply the existing one over root two, we get the overall one half, and then I'm just rewriting plus as zero plus one and minus as zero minus one, re-substituting things in. Now we rearrange the equation: we pull out the part where the control is zero, since each of those terms has a zero in front, and group them to one side, and group the terms with a one on the control to the other side, because we're getting ready to measure in the Z basis, so it's convenient to do that. Then I can pull the operators away from psi: the first term is psi plus P psi, which I can just write as (I plus P), identity plus P, acting on psi; the other side is (I minus P) psi. That's straightforward, just a bunch of algebra steps and substitution, and we're getting ready to see what happens if we measure. If we measure and get a plus-one outcome, we collapse to the state where the control bit is zero; if we get a minus one, we collapse to where the control bit is one. So
that's just taking the same equation above and, based on those two conditions, reading off the outcomes; I'm ignoring some normalization here. We've seen these one half (I plus or minus P) operators from Ben, but if you want to do the math: the identity is equivalent to taking, in the Z or X or Y basis, the projector onto the positive-eigenvalue part of the space plus the projector onto the negative-eigenvalue part, and summing them. Together they project onto the entire space, which makes sense since it's the identity. So we can write the identity as the plus-P projector plus the minus-P projector, where we can substitute X, Y, or Z for P. And X, Y, and Z can each be written in terms of their projectors as the plus-P projector minus the minus-P projector. This might make sense if we look at Z, for example: if a qubit is zero, we leave it alone; if it's one, we add a phase to it; that's where the minus sign in the middle comes from. In their own basis states, X and Y do the same thing, because what we label X, Y, and Z is just convention; they should all look symmetric in their own bases, so they should all do the same thing. So in general we can write that P equals the plus-P projector minus the minus-P projector: it applies a minus to the part of the space in the minus subspace and a plus to the other part. Hopefully that all makes sense. Then we can do a bunch of math to substitute those values in, and it turns out that one half (I plus P) is indeed the projector onto the positive subspace of the Pauli operator, and one half (I minus P) is the minus side of things. Cool. A bunch of annoying algebra, but at the end of the day, who cares about that math; what it tells you is that this
circuit, through a few simple steps, projects us into the plus subspace of the Pauli operator or the minus one, based on whether we get a plus one or a minus one. That's really the important part. It's easy enough to go through all that math, and it's kind of boring, but we just proved that it really is projecting us: if we get a plus one, we really do get projected into the plus-one part of the Pauli's space, and so on. So that's cool. And this argument works in general; there's nothing special about it being a single-qubit Pauli operator. The same thing works for multiple qubits, and you can sort of easily see that, or you can take my word for it. That means if we're trying to measure a joint Pauli operator, then in general we can use this circuit here, which is really what I wanted to get to. So you can forget all that math if you want and just believe me: we introduce an ancilla, do a Hadamard, do these controlled Paulis, do another Hadamard, and measure in the Z basis, and we are effectively projecting into the plus or minus subspaces of those Pauli operators. Another way of thinking about it is that we're measuring the eigenvalue of the Pauli operator. So this is how we do our parity measurements; they're all the same concept, effectively. And we can see that for, say, the surface code or the toric code, if we're trying to measure XXXX, this just becomes a bunch of CNOTs: we replace those controlled Paulis with X's (you can draw the box or the circle on the target, it doesn't matter), and we get the circuit the surface code uses whenever it's doing its X checks. We can also see, without thinking about all that math, that if we just write down the circuit and input an X fault into it, it will simply commute with
everything and fly through. So it's doing what we've seen before: if a fault commutes with the operator, the operator doesn't detect it; it only detects things that anticommute with it. If we apply a Z fault, then using the propagation rules we've seen, the Z will hit those targets, propagate down to the ancilla, and get flipped by the Hadamard into an X, so this check picks up on Z's: it looks like a change in the measurement outcome and we get a minus one. And we can also see that if an odd number of Z's come in, then an odd number of flips hit the measurement, so it only lights up, only gives us a minus sign, for an odd number of errors; for an even number, the minuses cancel out. Odd number of errors, minus one; even number, plus one. So it's doing the parity thing we've already thought about. Cool. This is all leading to thinking about how circuit-level faults work. We can repeat all this for Z checks; it's the same sort of story, just symmetric. But we can also write these circuits in terms of CNOTs, and you more commonly see them drawn with CNOTs pointing down. We could do these controlled-Z's, which is a perfectly fine circuit using the rule that we can put in whatever Pauli we like, but people tend to insert these Hadamard pairs, these identities, between the gates in order to convert everything into a bunch of CNOTs. If the CNOTs point down toward the ancilla, it's a Z check; if the CNOTs point up, it's an X check. Then it's the same sort of thing we've already seen, effectively. So these are the typical circuits we use. There are other ways to measure syndromes; this one is sometimes referred to as the bare-ancilla syndrome extraction method, because we're using just a single bare ancilla. As I mentioned, there are other ways, like Shor-style syndrome extraction, Steane-style
syndrome extraction, and Knill-style syndrome extraction. They require more ancillas, so they have more overhead; they have different advantages; a lot of them require you to measure more operators than you would with the bare style, although some of them are single-shot, so there's much more to read about. But typically people study the bare-style syndrome extraction, so we'll set those other styles aside; I just want to make you aware that there are other ways to do this, that it's not unique. So, going back: so far we've only considered faults input into these circuits. What happens if a fault occurs inside the circuit instead? Say we put an X here, in the middle of this X check, this X parity measurement we're trying to make. The X fault will propagate up and go through the Hadamard on the measurement branch, flipping to a Z; Z commutes with the Z-basis measurement, so it doesn't flip the measurement result, and we still get plus one. So you don't get any alert that a fault has happened. But not only do we get no alert: a weight-one fault has become a weight-two error on the data. That doesn't sound good, because it can effectively lower the distance of our codes; it's as though, with probability p, higher-weight things occur, which means we'd potentially need a larger code to deal with it. And of course the symmetric thing happens for the Z check. Okay, that's potentially making us nervous. However, for various codes there are ways to mitigate this. For a while it was believed you just had to deal with it and make larger-distance codes. It turns out, though, that you can choose the schedule in which you do these CNOTs: these higher-weight faults still occur, like we've seen, but by choosing the schedule you can measure all these
operators in parallel and also choose the direction in which the faults grow. If we zoom in, over on the left-hand side you can see a bit of the logical X operator running vertically, and the purplish thing, which is supposed to represent an error, is perpendicular to it: an X error growing perpendicular to the logical X operator. The green fault here is a Z-type fault, and it's also perpendicular to the logical Z operator. Essentially, we're not growing the errors in the direction in which they would lead to a logical error; if you look along the right direction, the weight-two fault effectively looks like one error in that dimension. I should also mention that you can think of the surface code as built from repetition codes: take the bit-flip and phase-flip repetition codes along the edges and form a kind of Cartesian product of the two. So by learning the repetition code, you see much of the same behavior. Okay, what about the color codes, the Steane code? For even longer, because they're less studied than the surface codes, people thought we had to deal with these bad errors, sometimes known as hook errors because they go up and over, kind of like a hook. We do know, using the same sort of circuitry, that if you have an X input error it propagates down; this is a Z check, with the CNOTs going down, so the X error on this Z check propagates down and the Z check detects it. In this code, for example, the X error is being applied to qubit 7 over in the corner, and it anticommutes with this red Z check and flips it to a minus one, which effectively locates where the X error occurred, assuming it's a low-weight error. But what happens if we measure this X check and an error occurs in the middle of the circuit, like we saw before? It propagates down, and once again it's silent; this check is quiet,
but then we'll end up at some point measuring the Z checks, and over here are the results. We get plus one for the green and blue checks on the left-hand side, because each one touches the X faults an even number of times, so it doesn't flip; we're detecting parities. But the red Z check over on the right touches them an odd number of times, once, so it flips. That looks exactly like what we just saw for a single error, so you're more likely to choose a correction where you apply an X there, and if you do, the fault plus the correction gives you a string of X's. It turns out that string of X's is a logical operator. So now we have a weight-one fault that, if you include the correction, leads to a logical failure, a logical error. That's horrible. So you might think, okay, I just have to increase the size of the code and deal with it. It turns out, however, that we can supplement our circuit: we introduce a flag qubit, an additional qubit. And you can see that if nothing happens, it effectively acts like an identity: CNOTs that share a control commute, so you can slide these CNOTs past each other and they cancel; if nothing happens, this is just a fancy identity. However, if this internal fault happens, the X error does propagate down and we get a non-trivial result: this outcome flips, flagging these internal hook errors. So this allows us to modify our circuit and get back the distance of the code, which is cool. So yeah, that's mainly the stuff I was going to cover; I assume it'll be six o'clock that we stop at. Okay, cool. So this effectively shows how you have to deal with fault tolerance. There are a lot of people that will use the phrase "fault tolerant" if you're in the regime in which the code is suppressing noise. I don't think that's a good use of the term; it kind of
conflates fault tolerance with quantum error correction merely being beneficial. I usually like to reserve the term fault tolerance to mean that you can handle any t faults and recover, or t-fault tolerance as it's sometimes called, and that's a better usage. Annoyingly, you'll see even people who know this stuff use the same phrase, fault tolerance, to mean two or maybe three different things, which is kind of an annoying thing in the field, so sometimes you have to be clear about what's actually meant. Anyway, that's a little rant, but this effectively shows what you have to do in order to ensure fault tolerance: to ensure that your code is able to correct all faults up to what the code was designed for. All right. If you are doing the challenges for the quantum error correction stuff, depending on the challenge, you might want to think about doing simulations; or you might, after this, just get interested, read a QEC paper, and want to know what it's actually saying. You have to be careful, whenever you're reading QEC papers, about the error rates they report, because those error rates really depend on a lot of things: the circuitry being used (as we've seen, different circuitry can lead to different performance), the classical algorithm being used to come up with the corrections, and the error model, the type of noise being applied to the circuitry. All of those things are super important to how the code performs, so there's no one-size-fits-all number for the threshold of a code; it's really dependent on these details. But often, because it's a lot easier to simulate, people will use a simpler noise model, and you shouldn't be confused by that. So let's go through a list of common noise models from easier to harder; typically the thresholds will be higher for the easy ones and then quickly drop as you look at more and more details.
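That dependence on the noise model shows up even in a toy setting. A sketch with invented numbers: one parity check whose true value never changes, read out under code-capacity-style rules (perfect measurement) versus phenomenological-style rules (each readout flips with probability q), with a majority vote over repeated rounds as the fix:

```python
import random

random.seed(11)

def measure(true_bit, q, rounds):
    """Readout with measurement-flip probability q per round; q = 0 recovers
    the perfect-measurement (code-capacity-style) assumption."""
    return [true_bit ^ (random.random() < q) for _ in range(rounds)]

def majority(bits):
    return 1 if sum(bits) > len(bits) / 2 else 0

q, rounds, trials = 0.05, 7, 4000
perfect_wrong = sum(majority(measure(0, 0.0, rounds)) != 0 for _ in range(trials))
noisy_wrong = sum(majority(measure(0, q, rounds)) != 0 for _ in range(trials))
single_wrong = sum(measure(0, q, 1)[0] != 0 for _ in range(trials))

print(perfect_wrong / trials)   # 0.0: perfect readout never errs
print(single_wrong / trials)    # roughly q: one noisy readout is unreliable
print(noisy_wrong / trials)     # far smaller: repetition in time suppresses flips
```

This is the same effect described below: once measurements themselves can flip, you repeat the syndrome measurements and effectively protect them with a repetition code in time.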
So the simplest noise model, which not as many people look at in quantum error correction (it's definitely more popular in classical error correction), though some people do look at it because it is a useful noise model, is the erasure channel. In the erasure noise model, essentially, you take a qubit and completely depolarize it; you can think of it as removing the qubit, projecting it out, and then replacing it later. That sounds horrible, but in this model you get to know which qubits it happened to. That extra information actually lets you do a lot: you can come up with classical algorithms to determine corrections much more easily than in some other noise models, just from knowing where the qubit got deleted. If this happened in an actual physical device and you have a way to herald whenever errors happen, that helps you out a lot, because, as mentioned many times, the quantum information in these codes is spread across the code and there are many different logical operators. So you can actually start deleting parts of the code, and as long as you are able to reset them, after which the checks project you back into a code space, you can recover, so long as you don't delete too much of your code; no one qubit holds the information, the information is spread across the system. Okay. The next, pretty common noise model that people study is known as the code capacity model, and it basically tells you the capacity of the code in a very abstract sense: there are no circuits here, you don't have any of the details of the circuitry. You kind of pretend you have a magic way of just measuring these parity checks without any error, and you just study input errors, the stuff we were originally talking about, where we just
With the code capacity model, you choose which qubits get X, Y, or Z noise, each with probability p, you measure the checks perfectly, and you detect whether things commute or anticommute. That's all this level of noise model is.

The next level of difficulty is the phenomenological noise model, a bit of a mouthful, and the name comes from modeling what the overall phenomenon looks like. We still have input errors, and we still have a magic way of measuring the checks where we just read off the parity, but now we add the fact that each measurement outcome can flip with some probability: a bit-flip channel on the parity measurements. That alone makes the decoding process, the classical algorithm that comes up with corrections, much harder. For most codes, except those with what's called single-shot syndrome extraction, you typically have to measure the stabilizer generators multiple times. You're effectively running a repetition code in time: you repeat the measurements, and you could take a majority vote over them, though there are more complex algorithms that exploit the correlations and all sorts of things. So the repetition code pops up once again, simply because the measurement flips mean you can't rely on the raw measurements; you have to correct the measurement outcomes as well.

And then, of course, we get to the circuit level, where we start applying noise within the circuit itself, so you actually start to care about what your device is doing. There are various levels at which people run these models. Traditionally it's been Pauli noise in the circuit, but we and others are starting to model more exotic noise, like leakage, crosstalk, and coherent noise, to capture what devices actually look like, and that can of course change the apparent performance of codes.
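The three levels just described (code capacity, phenomenological, circuit level) can be lined up in one toy sketch. The helper names are hypothetical, the "code" is a little repetition-code example, and the circuit-level channel is a crude depolarizing-style stand-in rather than anyone's actual device model:

```python
import random

PAULIS = ("X", "Y", "Z")

def code_capacity_errors(n, p):
    """Level 1, code capacity: each data qubit independently gets a random
    Pauli with probability p; the parity checks themselves are perfect."""
    return [random.choice(PAULIS) if random.random() < p else "I"
            for _ in range(n)]

def phenomenological_rounds(syndromes, q, rounds):
    """Level 2, phenomenological: same magic parity checks, but each outcome
    flips with probability q, so we repeat them: a repetition code in time."""
    return [[s ^ (random.random() < q) for s in syndromes]
            for _ in range(rounds)]

def circuit_level(circuit, p):
    """Level 3, circuit level: follow every ideal gate by a random Pauli
    fault on each qubit it touches, with probability p."""
    noisy = []
    for gate, qubits in circuit:
        noisy.append((gate, qubits))
        for qubit in qubits:
            if random.random() < p:
                noisy.append((random.choice(PAULIS), (qubit,)))
    return noisy

# Sanity checks: p = 0 recovers the ideal circuit, q = 0 perfect rounds.
ideal = [("H", (0,)), ("CX", (0, 1)), ("M", (0,)), ("M", (1,))]
assert circuit_level(ideal, 0.0) == ideal
assert phenomenological_rounds([1, 0, 1], 0.0, 3) == [[1, 0, 1]] * 3
```

Each level adds fault locations the decoder must account for, which is why thresholds drop as the modeling gets more realistic.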
Even in the simpler circuit-level setting, what you generally do is follow each of your ideal gates with some error channel. After initialization, a bit flip with some probability; after single-qubit unitaries, perhaps a random Pauli; two-qubit unitaries, the same sort of story; for measurement, you might apply a Pauli beforehand or, roughly equivalently, flip the measurement result, and likewise for resetting the qubit, and so on. This is far more accurate.

Now, if I remember right, the color code has a code capacity threshold of about 20%, and the surface code around 10%, so the color code looks better there. But due to the complexity of the circuitry, once you look at all of that, the surface code tends to have (it depends on whom you read and on the level of modeling) around a 0.8% threshold at the circuit level. So it's gone from 10% to 0.8%. The color code comes in around 0.38%, roughly 0.4%, about half the surface code's circuit-level threshold. So it has dropped significantly, and what seemed to be the better-performing code now performs worse. Although there are constant advances in classical decoding algorithms that can improve these results, so which codes perform well can change in the future.

Looks like we have seven minutes, okay. So, I mentioned before that once you add noisy measurements, which everything from the phenomenological model onward has, you have to decode in spacetime. A common decoder that people study is the minimum-weight perfect matching decoder. You could try to encode all of this in a lookup table, like we mentioned before, where you write down the set of syndromes you might see and the corrections you would apply for each. Here I'm using N for the number of syndrome bits you measure.
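To put rough numbers on how big such a lookup table gets, assume, purely for illustration, about d*d - 1 checks measured over d rounds for a distance-d surface-code-style patch:

```python
# One table entry per possible syndrome history: 2 ** (rounds * checks).
for d in (3, 5, 7):
    checks = d * d - 1            # rough surface-code-style check count
    entries = 2 ** (d * checks)   # d rounds of noisy measurement
    print(d, entries)
# Already at d=3 this is 2**24 (about 1.7e7) entries; d=7 needs 2**336.
```

So the table is hopeless beyond tiny examples, which is why you need an actual decoding algorithm rather than precomputed corrections.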
But if you're measuring the syndromes multiple times, you really have to multiply the number of time steps by the number of checks, and the number of checks itself grows with the distance, usually like d squared, depending on the code. So this blows up really fast and you can't just store these things: rather than keeping everything in memory, you need an actual algorithm that looks at the changes in the measurement results. The nodes in the decoding graph represent flips in the syndromes, and the decoder has to find the string of Pauli operators that most likely caused those flips. That's what the picture represents; I won't go into all of it.

Let's see, six minutes. I could leave time for questions or just keep going; I'll keep going. I'll be brief here and won't cover things in depth. As I mentioned before, codes don't have a universal transversal gate set in general. There might be some fancy ways around this, and maybe some future, more exotic codes that change in time and morph and do weird things might evade it, but ignoring those sorts of codes, the Eastin-Knill theorem says that no code has a set of transversal gates that is both fault tolerant and universal. We'll talk about what transversal means; essentially, you apply only a single layer of gates. You don't have a deep circuit implementing the logical operation, just one physical operation per data qubit, or maybe even fewer. So you don't have simple transversal gates that give you a universal set, which effectively means that a particular code comes with a discrete set of gates. That would be a bummer, and it would kind of end the QEC game, if there weren't workarounds. Of course, there are ways around Eastin-Knill and the other no-go theorems.
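Circling back to the decoding graph for a moment: the "flips in the syndromes" that form its nodes are just changes between consecutive measurement rounds. A minimal sketch of extracting them (a toy function, not a real decoder):

```python
def detection_events(history):
    """A check whose outcome *changed* between consecutive rounds marks a
    node of the decoding graph; the decoder then searches for the most
    likely set of Pauli faults (graph edges) explaining those nodes."""
    events = []
    for t in range(1, len(history)):
        for i, (prev, cur) in enumerate(zip(history[t - 1], history[t])):
            if prev != cur:
                events.append((t, i))   # (round, check index)
    return events

# A lone measurement flip in round 1 on check 1 gives a *pair* of events,
# which a matching decoder can pair up in the time direction...
print(detection_events([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))  # [(1, 1), (2, 1)]
# ...while a genuine data error flips the check from then on: one event.
print(detection_events([[0, 0, 0], [0, 1, 0], [0, 1, 0]]))  # [(1, 1)]
```

This is why measurement errors and data errors can be distinguished statistically: they produce different patterns of events in spacetime.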
A common thing people look at is what's called magic state distillation; yes, "magic" is an actual proper term in quantum error correction. Also, since no single code has a universal set, you can potentially use more than one code and let them fill out each other's gate sets. That's code switching: you use one code to do certain gates and then move the information over to another code for the others. That's potentially complicated, and there have been papers showing that magic state distillation may still have lower overheads even if you do those sorts of things, but it's still an active area of research. There are other approaches, like pieceable fault tolerance, that also get around the theorem.

Let's see how much time: three minutes, okay. Transversal gates. We've already seen some transversal gates, namely the logical Pauli operators: applying a string of Paulis is a depth-one circuit, and both the color code and the surface code have these logical Paulis. The surface code doesn't have a transversal Hadamard, which I think Ben briefly mentioned, unless you allow permutations, which I think he also mentioned. It does have a CNOT that you can do transversally, because it's a CSS code, and all CSS codes have these transversal CNOT gates. The color code is nice in that it has so many symmetries that you actually get all the single- and two-qubit Clifford gates as transversal operations.

I'll skip the Pauli slide. Okay, the Hadamard example: we apply a Hadamard to every data qubit. Then the X checks become Z checks and the Z checks become X checks; it switches the Pauli type. And what was our logical X operator becomes our logical Z operator, because it becomes all Z's, and vice versa.
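That conjugation, X checks becoming Z checks and the logical X becoming a logical Z, can be written out in a few lines. Signs are ignored here (H Y H = -Y), and the 9-qubit Pauli strings are illustrative examples, not the exact surface-code checks:

```python
# Conjugation by a Hadamard on every data qubit, phases ignored:
# H X H = Z, H Z H = X, and Y stays Y up to sign.
H_MAP = {"X": "Z", "Z": "X", "Y": "Y", "I": "I"}

def transversal_h(pauli_string):
    """Apply H to every qubit: each single-qubit Pauli letter swaps X <-> Z."""
    return "".join(H_MAP[p] for p in pauli_string)

# An X-type check becomes a Z-type check...
assert transversal_h("XXXXIIIII") == "ZZZZIIIII"
# ...and a logical-X string becomes a logical-Z string:
assert transversal_h("XIIXIIXII") == "ZIIZIIZII"
```

Applying it twice returns every string to itself, as expected for a self-inverse gate.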
So at the logical level it switches X and Z, just as a Hadamard should, and it is therefore a logical Hadamard. However, you'll notice that the code has effectively been rotated, and we typically want the code back in its original orientation. If you're on a device with a fixed 2D geometry, you might have to do a set of operations to undo that rotation; in ion traps you get it essentially for free, because the labeling doesn't really matter, you just move things to gate zones. So if we somehow rotate the code back, we get the same layout as the original, except the X string sits on the other side. But as I've mentioned before, any string that connects the top and bottom boundaries is equivalent: you can multiply by stabilizers to move it over, and it still carries the same logical information. So we've recovered the code, and we have indeed done a transversal Hadamard. The color code can do this without moving anything around, because X and Z are symmetric in it, everything is symmetric, and all the boundaries support the same sort of Pauli X, Y, and Z strings, so you're fine.

One minute, okay. In roughly one minute I can explain the transversal CNOT: how do you entangle two code blocks? There are other ways to do these entangling operations, as I mentioned before, but we'll focus on the transversal gate because it's a lot simpler. For all these CSS codes, all you have to do, as Ben mentioned, is apply a CNOT from each qubit in one block, the control block representing one of your logical qubits, to the corresponding qubit in the other block, the target block. So we have two codes and we're just stringing CNOTs between them. In the pictures I'm only highlighting the CNOTs along the boundary, but there are CNOTs everywhere, each control touching its corresponding target qubit.
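A sketch of those CNOT propagation rules, plus the stabilizer-multiplication fix-up discussed just after. The symplectic (x, z) convention and the weight-3 checks are illustrative choices for this sketch, not anything specific to the slides:

```python
def cnot_conjugate(xc, zc, xt, zt):
    """Conjugate a two-qubit Pauli, written in symplectic (x, z) bits, by a
    CNOT: the X part copies control -> target, the Z part target -> control."""
    return xc, zc ^ zt, xt ^ xc, zt

assert cnot_conjugate(1, 0, 0, 0) == (1, 0, 1, 0)   # X(ctrl) -> X(ctrl) X(tgt)
assert cnot_conjugate(0, 0, 0, 1) == (0, 1, 0, 1)   # Z(tgt)  -> Z(ctrl) Z(tgt)
assert cnot_conjugate(0, 0, 1, 0) == (0, 0, 1, 0)   # X(tgt) is unchanged

# Block-level picture: a transversal CNOT copies an X-type check S1 from the
# control block "A" onto the target block "B", seemingly "doubling" it.
def push_x_check(support):
    return frozenset(support) | {("B", i) for (blk, i) in support if blk == "A"}

S1 = frozenset({("A", 0), ("A", 1), ("A", 2)})   # illustrative weight-3 X check
S2 = frozenset({("B", 0), ("B", 1), ("B", 2)})   # the matching check on block B

# Multiplying stabilizers = symmetric difference of supports (phases aside),
# so multiplying the pushed S1 by S2 hands back the original generator:
assert push_x_check(S1) ^ S2 == S1
```

The Z-type story is the mirror image, with everything flowing from target to control, which is exactly the symmetry the lecture describes next.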
Using the propagation rules of the CNOT, the X string gets copied down: the logical X on the top block becomes logical X on both the top and bottom blocks, while the logical X on the bottom block stays the same, it doesn't change. So the same rules we see at the physical level give exactly the right behavior at the logical level under this set of operations. Cool. And the same sort of story holds on the logical level for the Z side of things, but in reverse. Now, okay, I'm over time, but I'm almost done. If we have X and Z checks on the top and bottom blocks, say S1 and S2, and we apply the CNOTs between the qubits, then we do get a doubling of S1, because the X part flows down. But as Ben has mentioned, we can rewrite our generators: we can simply multiply stabilizers by each other. So if we multiply the new S1, call it S1 prime, by S2, and S2 hasn't changed because nothing flows up from the target, then we get back the original S1 together with S2. So we end up with effectively the same stabilizers, and the same sort of story holds for the Z checks, with everything flowing in the opposite direction. I'll share these slides, and in them you can see a quick proof of why CSS codes have transversal operations that entangle code blocks. Sorry for going a couple of minutes over.

Well, thank you very much for this nice final talk. We don't have anything planned after this today, so you are released for the evening. There will be dinner served down at the Adriatico at around seven-ish, but do check, in case you would rather go somewhere else, like Trieste, and need to catch a bus. In any case, it was a very nice day again, day two, and I think you learned a lot. Keep in mind that you're learning here from experts, so it's very special that we can pull them in here from all the important work they're doing. So to both you and Ben, thank you very much for this afternoon
session. Yeah, good time; it's a good time.