Okay, please take your seats, and welcome Ciarán Ryan-Anderson back for our last lecture today. He will wrap up today's very intense introduction to quantum error correction. Okay, go ahead.

Thanks. All right, so I'll basically start off where Ben left off. However, I'm not a masochist, or at least I don't like writing codes that way. Take the same code, the toric code. Instead of having the X checks on the vertices, the qubits on the edges, and the Z checks on the plaquettes, we can put the qubits on the vertices. That's where I like to put them, because they look like little points, which reminds me of qubits. Then we have these XXXX checks and the ZZZZ checks. It's the same code, just a different representation; sometimes this is called the medial mapping of the code. And we can take this toric code that Ben described, and if we want to do everything in 2D, which is more natural for some devices, we can basically take a slice of the torus and add extra checks along the boundaries. We get another version of the toric code, this time the planar surface code. In particular this one is often known as the rotated planar surface code, because the first version of it had more checks and you can slice it in a different way, but that's not important. This one is a [[9,1,3]] code, so it has one logical qubit, unlike the torus, which has two.
Here we have a logical X operator going from top to bottom. There is a version of the logical X operator that's Pauli X on every qubit, on every vertex, but you can make lower-weight ones from just a string of Pauli Xs running from top to bottom, applying an X on each qubit along the way. These are equivalent logical operators. As long as you connect the top and bottom boundaries with a set of X Pauli operators, it's equivalent to a logical X. Likewise, the logical Z runs horizontally, connecting the boundaries from left to right; as long as you have a set of Z Pauli operators that does that, it's equivalent to a logical Z.

In these diagrams, similar to the ones Ben used, the darker gray polygons represent operators where we measure joint Xs on the qubits touching that polygon, and the lighter polygons are joint Z operators that we measure. So this is the set of stabilizers written in pictorial form, which is a lot nicer to work with than a long list of Pauli strings for larger codes. We have n minus one unique stabilizer generators, where n is the number of data qubits. As Ben mentioned, each independent Pauli operator divides the state space in half, so the codespace has dimension two to the n minus (n minus one): it all cancels and you get two to the one. It's a logical two-level system, so one logical qubit. Ben also described this code: the distance-three color code, also known as the Steane code for the distance-three version, and this is the distance-five version of the color code. Notice there are lots of colors.
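The counting argument can be checked mechanically. Here is a small Python sketch for the distance-3 rotated surface code; the qubit indexing and check supports below are one standard convention I'm assuming, not necessarily the slide's exact layout. It verifies that the checks commute and that k = n minus the number of independent generators equals one.

```python
# Verify k = n - rank(S) for a distance-3 rotated surface code.
# Assumed qubit layout (a common convention):
#   0 1 2
#   3 4 5
#   6 7 8
x_checks = [{1, 2, 4, 5}, {3, 4, 6, 7}, {0, 1}, {7, 8}]
z_checks = [{0, 1, 3, 4}, {4, 5, 7, 8}, {2, 5}, {3, 6}]
n = 9

def gf2_rank(rows):
    """Rank of a set of GF(2) row vectors, each encoded as an int bitmask."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot                       # lowest set bit of the pivot
        rows = [r ^ pivot if r & low else r for r in rows]
        rows = [r for r in rows if r]
    return rank

mask = lambda qs: sum(1 << q for q in qs)
# Symplectic encoding: bits 0..8 hold the X part, bits 9..17 the Z part.
stabilizers = [mask(c) for c in x_checks] + [mask(c) << n for c in z_checks]

# CSS commutation: every X check must overlap every Z check an even number of times.
assert all(len(x & z) % 2 == 0 for x in x_checks for z in z_checks)

k = n - gf2_rank(stabilizers)
print(k)  # 1 logical qubit, matching the [[9,1,3]] code
```

The same rank computation works for any stabilizer code once you write the generators as symplectic bit vectors.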
The surface code, by contrast, is all black and white. The color code is related to the surface code; it's also a topological code. Here, though, each polygon doesn't represent just one type of Pauli observable that we measure; it represents two. Once again the vertices represent the data qubits, and each polygon represents both the observable where you measure joint Xs on all its qubits and the observable where you measure joint Zs on them. So it's a self-dual code. It's a CSS code: CSS means you can write the set of generators so that each one is purely X type or purely Z type. And it's self-dual because the X and Z generators look exactly the same. Of course you can multiply those together and get Y, so you could also measure the joint Y observables for those polygons.

Unlike the surface code, there's more symmetry here, which is both a benefit and a drawback for the color code. It means a lot more logical operators are possible. Looking at the boundaries, you see this X string, a Y string, and a Z string; if you imagine a triangle, these are often called triangular codes, by analogy with the square of the surface code. Each of these logical operators connects the boundaries of the triangle, and each of the three boundaries can carry X, Y, or Z logical operators. So you can apply a set of Pauli operators that are all Xs, which is equivalent to a logical X; all Ys, which is equivalent to a logical Y; or all Zs, which is equivalent to a logical Z. So it doesn't really have the polarization that the surface code has.
It's a slightly different characteristic. I should say, this is starting out by introducing some familiar codes that you might study in quantum error correction, as a way to get comfortable, and then we'll go deeper into what the circuits look like and how all of that works.

Okay. There are actually many different surface codes and many different color codes you can construct. It turns out that for surface codes, where each polygon carries only an X-type or a Z-type observable, the graphs that can support these codes are two-colorable with vertex degree at most four, so each vertex has at most four edges. You can cut little codes out of these graphs, so long as you make sure you keep the right number of stabilizers. Similarly for the color code, with its many colors: the relevant property is three-colorability. If you put both X- and Z-type checks on each polygon and you want everything to commute, you end up needing a three-colorable graph. That's just to introduce the idea: you might hear about "the" color code and "the" surface code, but there are all kinds of these codes that don't tend to be studied, and you can do a lot more with them. There are also non-uniform tilings of the plane that you can cut codes out of. And the geometry of these things doesn't actually matter; these are all equivalent graphs. The 4.8.8 nomenclature here just records what you see if you sit at a vertex and walk around it.
The polygons that touch that point have four sides, eight sides, eight sides. All these other graphs are equivalent, so you can lay them out however you like; with a computer that has long-range connectivity, like trapped ions, you could use all sorts of geometries. The connectivity isn't really the point; these pictures just help us visualize what the stabilizers look like in an abstract way.

A lot of codes, though not all, belong to families that you can parameterize by the distance of the code, which we've talked about before. As mentioned, t = (d − 1)/2, slightly less than half the distance, is the number of flips, from various Pauli operators, that the code is guaranteed to handle. It's possible for a code to handle more than that, but it's not guaranteed to handle all of them. A distance-three code, for example, can potentially handle some weight-two errors, but not all weight-two errors; up to weight t you can definitely correct, and beyond that there's only some probability of correcting. Anyway, as you make these codes larger and larger, they can correct more and more errors, so they can suppress the logical error rate, because we're spreading the information across many qubits. And since the code can handle more and more errors, and errors pop in with a certain probability, it's easier for us to squash them.
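The guarantee t = (d − 1)/2 can be checked exhaustively on a small example. Here is a Python sketch using the classical distance-5 repetition code with majority-vote decoding as a stand-in, since its decoder is trivial to enumerate; note that 2D codes like the surface code can additionally correct *some* patterns above weight t, which this toy does not show.

```python
from itertools import combinations

def majority_decode(received):
    """Majority vote on a received codeword of the all-zeros message."""
    return 1 if sum(received) > len(received) // 2 else 0

d = 5
t = (d - 1) // 2                       # = 2 guaranteed-correctable flips
for weight in range(d + 1):
    corrected = all(
        majority_decode([int(i in flips) for i in range(d)]) == 0
        for flips in combinations(range(d), weight)
    )
    print(weight, corrected)           # True for weight <= 2, False above
```

Every pattern of up to t flips decodes correctly; every pattern of t + 1 or more fails, which is exactly the distance guarantee.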
We can wait longer, or we have more of a chance to squash errors before they build up. Here's an example of two different families of the color code. The distance-three member is the Steane code, which Ben talked about. You can also construct larger versions of the color code by cutting codes out of the graphs I mentioned; common ones are the 4.8.8 lattice on the top, or the 6.6.6 lattice on the bottom, an ominous number.

Okay, you might have heard the word threshold, and maybe the concept of a pseudothreshold. The pseudothreshold refers to the performance of a single code instance, not the entire family. What we have in this graph is the physical error rate on the x-axis and the logical error rate on the y-axis, and the dotted line is where those two error rates equal each other. There's some point at which the code gets overwhelmed: the probability of higher-weight errors happening is too high, and you end up applying the wrong operation, applying logical operations when you try to do your corrections, instead of fixing the faults before they become an error. If the physical error rate is low enough, you can start suppressing the error rate: you get a p-squared effect, or for higher-distance codes, like distance five, you go from p to p-cubed, and so on, raising the power of suppression. The place where a code's curve hits the line where logical and physical error rates are equal is known as the pseudothreshold. It's a pseudothreshold because it's not a full threshold; the threshold refers to the family of codes.
I mentioned this before with the repetition code: as the number of bits goes toward infinity, as long as fewer than half of those bits flip, the majority vote sends you to the right outcome. The same sort of idea applies to quantum error correction codes. As the distance increases, you get a steeper and steeper slope, so you suppress the noise with higher and higher powers: p-squared, p-cubed, p to the fourth, and so on. Eventually, as the distance goes toward infinity, you effectively get a step function. The dotted line where that step sits tells you that as long as the physical error rate is below it, there is some arbitrarily large code with which you can arbitrarily suppress the noise. Of course you won't live in the asymptote, but you can choose a large enough code to do whatever you need, given your physical error rates. And if you can reduce your physical error rates, you want to, because you get more out of the code: the logical error rate drops faster.

It's also good to be aware of this if you're running simulations to work out these thresholds, like you sometimes see in papers: at the first few distances, three, five, seven, the curves wiggle about. Eventually the curves settle down into a trend, and you can fit them to a higher-order function and work out the crossing point, which corresponds to the threshold. But the first few small codes have small-size effects; the curves are still settling down at first. So you can't really use distance three, five, seven to work out what your thresholds are.
You have to start at maybe distance nine or eleven and work up. So just in case you do those sorts of simulations in the future, maybe for the hackathon, maybe not.

This is roughly how the logical error rate scales: p_L ∝ (p / p_th)^((d+1)/2), where p_th represents the threshold error rate and p the physical error rate. Note that (d + 1)/2 is really the same thing as t + 1. So, as I mentioned, a distance-three code suppresses the noise to order p^(t+1), that is, p squared, and so on. Shameless plug: I do have a package that does quantum error correction; you can take a look at it and potentially use it for simulations.

Anyway, I think Ben briefly touched on measurements, but I'll talk about them again in a different way, mainly focusing on how these circuits work when we want to measure these observables for generic Pauli operators. This dot-with-a-line notation, with a P, represents a controlled Pauli: if the control qubit is in the zero state, apply the identity; if it's in the one state, apply the Pauli. If you put an X in there, that's the equation for the CNOT, the controlled-X gate, whatever you want to call it. So we're going to work with this circuit here, which turns out to be one way of measuring Paulis: we introduce an ancilla, apply a Hadamard, do the controlled Pauli, another Hadamard, and then measure in the Z basis. Why does that work? I think Ben showed one way to prove this, and I'll prove it another way; it might not hurt to see it again. It's a fairly straightforward proof. We start on the left-hand side in the state |0⟩ ⊗ |ψ⟩, where the zero is going to be our control bit. Then we apply the Hadamard, so we get a plus state.
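The scaling formula and the pseudothreshold can be made concrete numerically. Here is a hedged Python sketch; the prefactor A = 0.1 and p_th = 0.01 are made-up illustrative values, not fitted surface-code numbers. It evaluates p_L ≈ A (p/p_th)^((d+1)/2) and finds a pseudothreshold as the crossing with p_L = p by bisection.

```python
def p_logical(p, d, p_th=0.01, A=0.1):
    """Rough logical error rate scaling; A and p_th are illustrative only."""
    return A * (p / p_th) ** ((d + 1) // 2)

def pseudothreshold(d, lo=1e-6, hi=0.5):
    """Bisect for the crossing p_logical(p) == p of a single code instance."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if p_logical(mid, d) > mid:
            hi = mid            # above the crossing: the code hurts
        else:
            lo = mid            # below the crossing: the code helps
    return (lo + hi) / 2

print(pseudothreshold(3))                        # ~0.001 with these numbers
print(p_logical(1e-3, 3), p_logical(1e-3, 5))    # larger d suppresses harder
```

For d = 3 the crossing solves A (p/p_th)^2 = p analytically, giving p = p_th^2 / A = 0.001 here, which the bisection reproduces.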
This is the plus state, the superposition of zero and one; we've seen it a bunch of times. Then we apply the controlled Pauli gate, and we leave |ψ⟩ alone when the control is zero. I'm doing a couple of steps here: first I expand plus into a superposition of zero and one. If the control is zero, we don't apply the Pauli, just the identity, so we just have |ψ⟩; that's the first term. If the control is one, we apply the Pauli, according to the equation above. Hopefully that's straightforward. Then we apply the Hadamard again, which sends zero to plus and one to minus. If we move the square roots from the plus and minus states over to multiply the one over root two, we get one half, and I'm just rewriting plus and minus as zero plus one and, on the right-hand side, zero minus one, substituting things back in. Now we rearrange the equation, pulling out the part where the control is all zeros: each of the two terms has a zero in front, so I group those to one side and group the one terms to the other side, because we're getting ready to measure in the Z basis, and it's convenient to do that. Then I pull the operators away from |ψ⟩: the first term is |ψ⟩ plus P|ψ⟩, which I can write as (I + P) times |ψ⟩, and on the other side (I − P) times |ψ⟩. So this is a bunch of algebra steps and substitution, getting ready to see what happens when we measure in the zero-one basis.
If we get a plus-one outcome, we collapse to the branch where the control bit is zero; if we get minus one, we collapse to the branch where the control bit is one. That's just taking the same equation above and conditioning on those two outcomes, ignoring some normalization. We've seen these one half (I ± P) operators before from Ben, but if you want to do the math: the identity can be written, in the Z, X, or Y basis, as the projector onto the positive-eigenvalue part of the state space plus the projector onto the negative-eigenvalue part; summed together they project onto the entire space, which is why it makes sense that it's the identity. So we can write I as the plus-P projector plus the minus-P projector, where P can be X, Y, or Z. And X, Y, and Z can all be written in terms of their projectors. This might make sense if you look at Z: if a qubit is zero, we leave it alone; if it's one, we add a phase, which is where the minus sign in the middle comes from. In their own eigenbases, X and Y do the same thing; what we label X, Y, and Z is just convention, so they should all look symmetric in their own basis states. In general we can write P as the plus-P projector minus the minus-P projector: the operator applies plus one to the part of the space in the plus subspace and minus one to the part in the minus subspace.
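The projector bookkeeping in this passage can be written compactly; a short standard derivation, using only the fact that P² = I for a Pauli operator:

```latex
% Candidate projectors onto the +-1 eigenspaces of a Pauli P:
\Pi_{\pm} \;\equiv\; \tfrac{1}{2}\,(I \pm P), \qquad P^2 = I.
% They square to themselves, so they really are projectors:
\Pi_{\pm}^{\,2} \;=\; \tfrac{1}{4}\,\bigl(I \pm 2P + P^2\bigr)
               \;=\; \tfrac{1}{2}\,(I \pm P) \;=\; \Pi_{\pm}.
% They resolve the identity and reconstruct P, as used in the text:
\Pi_{+} + \Pi_{-} \;=\; I, \qquad \Pi_{+} - \Pi_{-} \;=\; P.
```

This holds for a joint Pauli operator on many qubits just as well, since any Pauli string also squares to the identity.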
So we can do a bunch of math and substitute those values in, and it turns out that one half (I + P) is indeed the projector onto the positive eigenspace of the Pauli operator, and one half (I − P) onto the negative one. It's a bunch of annoying algebra, but at the end of the day, who cares about the math; what it tells you is that this circuit, through a few simple steps, projects us into the plus or minus subspace of the Pauli operator, based on whether we get plus one or minus one. That's really the important part. It's easy enough to go through all the math, and it's kind of boring, but it proves that if we get a plus one, we really are projected into the plus-one part of the Pauli's space, and so on. So that's cool. And this argument works in general: there's nothing special about a single Pauli operator on a single qubit. The same thing works for multiple qubits; you can probably see it fairly easily, or you can take my word for it. So if we're trying to measure a joint Pauli operator, in general we can use this circuit; that's really what I wanted to get to. You can forget all that math, if you want, and just believe me: we introduce an ancilla, do a Hadamard, do these controlled Paulis, another Hadamard, and measure in the Z basis, and we are effectively projecting into the plus or minus eigenspace of the joint Pauli operator. Another way of thinking about it is that we're measuring the eigenvalue of that Pauli operator. This is how we do our parity measurements; they're all the same concept, effectively.
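As a sanity check, here is a minimal pure-Python statevector simulation, for a single data qubit with P = Z and an arbitrary |ψ⟩ of my own choosing, confirming that the Hadamard / controlled-P / Hadamard circuit leaves exactly the branches (I ± P)|ψ⟩/2 on the two ancilla outcomes.

```python
import math

s = 1 / math.sqrt(2)
psi = [0.6, 0.8]                       # arbitrary normalized data state

# State ordering |anc, data>: amplitude index = 2*anc + data. Start in |0>|psi>.
state = [psi[0], psi[1], 0.0, 0.0]

def h_on_ancilla(v):
    """Hadamard on the ancilla (first) qubit."""
    return [s * (v[0] + v[2]), s * (v[1] + v[3]),
            s * (v[0] - v[2]), s * (v[1] - v[3])]

def controlled_z_on_data(v):
    """If the ancilla is |1>, apply Z to the data qubit."""
    return [v[0], v[1], v[2], -v[3]]

state = h_on_ancilla(controlled_z_on_data(h_on_ancilla(state)))

anc0 = state[0:2]   # ancilla outcome 0: proportional to (I+Z)|psi>/2
anc1 = state[2:4]   # ancilla outcome 1: proportional to (I-Z)|psi>/2
print(anc0, anc1)
```

For this |ψ⟩, the outcome-0 branch is [0.6, 0] and the outcome-1 branch is [0, 0.8]: precisely the plus- and minus-eigenspace projections of Z, with outcome probabilities 0.36 and 0.64.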
And we can see that if, for example for the surface code or the toric code, we're trying to measure XXXX, this just becomes a bunch of CNOTs, because we replace those controlled Paulis with controlled Xs. You can draw the box or the circle; it doesn't matter. So this is the circuit for the surface code's X checks. You can also see, without thinking about all that math, just writing down the circuit and believing it: if you input an X fault into this circuit, it commutes with everything and just flies through. That matches what we've seen before: if a fault commutes with the operator, the operator doesn't detect it; it only detects things that anticommute with it. If we apply a Z fault, then using the propagation rules we've seen, the Z will hit those targets and propagate onto the ancilla, where it gets flipped to an X, and the Z-basis measurement picks up on that: it looks like a change in the measurement outcome, so we get a minus one. You can also see that if an odd number of Zs come in, an odd number of flips hit the measurement, so it only lights up, giving a minus sign, for an odd number; with an even number, the minuses cancel out. So once again it's measuring the parity: odd number of errors, you get minus one; even number of errors, plus one. That's the other thing we've already thought about. Cool. This is all leading toward thinking about how circuit-level faults work. We can repeat all this for Z; it's the same sort of story, just symmetric.
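The parity rule just described is easy to phrase as code. A tiny Python sketch, with a weight-4 X check on qubits 0 to 3 of my own numbering:

```python
from itertools import combinations

support = {0, 1, 2, 3}        # data qubits of one weight-4 X check

def outcome(z_faults):
    """+1/-1 result of the X check, given Z faults on the listed data qubits.
    A Z flips the outcome iff it anticommutes with the check, so only the
    parity of the overlap with the check's support matters."""
    return -1 if len(support & set(z_faults)) % 2 else +1

print(outcome([0]))       # -1: one Z anticommutes and flips the outcome
print(outcome([0, 2]))    # +1: two Zs, the flips cancel
print(outcome([7]))       # +1: a fault off the support is invisible here
```

X faults, which commute with the check, never enter the count at all, matching the "flies through" behavior above.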
However, people tend to write these in terms of CNOTs, so you'll more commonly see it with the CNOTs pointing down. We could perfectly well do these controlled Zs, using the rule that we can put in whatever Pauli we like, but people tend to insert identities written as pairs of Hadamards between the gates, in order to convert everything into a bunch of CNOTs that point down. If the CNOTs point down toward the ancilla, it's a Z check; if the CNOTs point up, it's an X check. And then it's the same sort of thing we've effectively already seen. So these are the typical circuits we use. There are other ways to measure syndromes; this one is sometimes referred to as the bare-ancilla syndrome extraction method, because we use just a single, bare ancilla. As I mentioned, there are other styles, like Shor-style, Steane-style, and Knill-style syndrome extraction. They require more ancillas, so they have more overhead, and they have different advantages. Many of them require you to measure operators more times than you would with the bare style, although some of them are single-shot. There's much more to read, but people typically study bare-ancilla syndrome extraction, so we'll set the other styles aside; just to make you aware, there are other ways to do this too. It's not unique.

So, going back to this circuit: so far we've only considered faults input into these circuits. What happens if a fault occurs inside instead? Here we see an X fault in the middle of an X parity measurement. The X fault propagates to the ancilla, goes over to the Hadamard on the measurement branch, and flips to a Z. Z commutes with the Z-basis measurement, so it doesn't flip the measurement result; you get a plus one.
So you don't get any alert that a fault has happened, and not only do we not get an alert, the weight-one fault becomes a weight-two fault. That's bad, because it can potentially lower the distance of our code: effectively, with probability p we have higher-weight errors occurring, which means we may need a larger code to deal with it. And of course the symmetric thing happens for the Z check. Okay, that may be making you nervous. However, for various codes there are ways to mitigate this. For a while it was believed you just had to deal with this and build larger-distance codes. But it turned out, I think Fowler discovered this, that you can choose the schedule in which you do these CNOTs so that, while these higher-weight faults still occur, as we've seen, you control the direction in which they grow, and you can still measure all of these operators in parallel. If we zoom in on the left-hand side, we see a bit of the logical X operator running vertically, and the purplish thing, which represents an error, is perpendicular to it. So this is an X error growing perpendicular to the logical X operator. The green fault is a Z-type fault, also perpendicular to the logical Z operator. Since we're not growing the errors in the direction that would lead to a logical error, each of these faults effectively looks like a weight-one error if you look in the right direction, in the dimension that matters.
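The propagation behind these hook errors can be sketched by tracking how a single fault copies through the remaining CNOTs. In the Python sketch below, the ancilla-as-control ordering for an X check is an assumed convention, not the slide's exact schedule.

```python
def propagate_x(start, cnots):
    """Track one X fault through (control, target) CNOTs: an X on the
    control copies onto (toggles) the target; an X on the target stays put."""
    xs = {start}
    for c, t in cnots:
        if c in xs:
            xs ^= {t}
    return xs

# X check measured as: ancilla control, data targets q0..q3 in order.
cnots = [("anc", f"q{i}") for i in range(4)]

# An X fault striking the ancilla after the second CNOT propagates through
# the remaining two, so one fault becomes a weight-2 data error:
hit = propagate_x("anc", cnots[2:]) - {"anc"}
print(sorted(hit))   # ['q2', 'q3']
```

Which pair of data qubits the hook lands on depends on the CNOT order, and that is exactly the freedom the scheduling trick exploits: pick the order so the resulting pair lies perpendicular to the matching logical operator.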
Also, I should mention that you can think of the surface code like a repetition code: take the bit-flip and phase-flip repetition codes along the two directions and do, kind of, a Cartesian product of the two. So by learning the repetition code, you see it all over the place in other codes as well.

Okay, what about the color codes, like the Steane code? For a while, even longer, because they're less studied than the surface codes, it was thought we just had to deal with these bad errors. These bad errors are sometimes known as hook errors because they go up and over, kind of like a hook. Using the same sort of circuitry: if you have an X input error on a Z check, with the CNOTs going down, the X error propagates down and the Z check detects it. In this code, for example, the X error is applied to qubit seven over in the corner and anticommutes with this red Z check, which effectively locates where the X error occurred, assuming it's a low-weight error. However, what happens if, while measuring this X check, an error occurs in the middle of the circuit, like we saw before? It propagates down and once again this check is silent. But at some point we end up measuring the Z checks. Over here are the results if we then measure them: we get plus one for the green and blue checks on the left-hand side, because each one touches the X faults an even number of times, so it doesn't flip; but the X faults touch the red Z check on the right an odd number of times, once, so that one flips. That looks exactly like what we just saw with the single-qubit error. So you're more likely to choose a correction where you apply an X to qubit seven. And if you do that, you end up with a string of Xs.
It turns out that string of Xs is a logical operator. So now we have a weight-one fault that, once you include the correction, leads to a logical error. You might think, okay, I just have to increase the size of the code and deal with it. It turns out, however, that we can supplement our circuit with a flag qubit, one additional qubit. If nothing goes wrong, it acts like the identity: the flag's CNOTs commute past the other CNOTs, so you can bring them next to each other and they cancel; the whole thing is a fancy identity. However, if an internal fault happens, the X error propagates down and we get a nontrivial result: the flag outcome flips. So the flag catches these internal hook errors. This allows us to modify the circuit and get back the full distance of the code. So that's cool. That's mainly the material I was going to cover; I assume we stop at six o'clock. Yeah, okay.

So this shows how you have to deal with fault tolerance. A lot of people will use the phrase "fault tolerant" to mean that you're in the regime where the code is suppressing noise. That usage conflates fault tolerance with quantum error correction merely being beneficial. I prefer to reserve the term fault tolerant to mean that the scheme can handle any t faults and recover from them; this is sometimes called t-fault (or k-fault) tolerance, and it's a better definition.
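The flag mechanism described above can be sketched the same way, now tracking Z faults. The gate ordering below is one common flag construction that I'm assuming for illustration, not necessarily the lecture's exact circuit: data qubits q0 to q3 control onto the ancilla of a Z check, and a flag qubit, prepared and measured in the X basis, brackets the middle CNOTs.

```python
def propagate_z(start, cnots):
    """Track one Z fault through (control, target) CNOTs: a Z on the target
    copies onto (toggles) the control; a Z on the control stays put."""
    zs = {start}
    for c, t in cnots:
        if t in zs:
            zs ^= {c}
    return zs

# Flagged weight-4 Z check (assumed ordering): flag CNOTs bracket the middle.
circuit = [("q0", "anc"), ("flag", "anc"), ("q1", "anc"),
           ("q2", "anc"), ("flag", "anc"), ("q3", "anc")]

# A Z fault on the ancilla right after the q1 CNOT (a hook location):
after = propagate_z("anc", circuit[3:])
print("flag" in after)                                   # True: the flag fires
print(sorted(q for q in after if q.startswith("q")))     # ['q2', 'q3']
```

A Z landing on the flag flips its X-basis measurement, so the dangerous mid-circuit fault is heralded, while a fault after the second flag CNOT, which only produces a weight-one data error, leaves the flag silent, as it should.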
Annoyingly, you'll see this even from people who know the field: they'll use the same phrase, fault tolerance, to mean two or maybe three different things, which is an annoying feature of the field, so sometimes you have to be clear about what they actually mean. Anyway, that's a little rant, but this is effectively showing what you have to do to ensure fault tolerance: to ensure that your code is able to correct all faults up to what it was designed for.

All right. If you're doing the quantum error correction challenges, depending on the challenge, you might want to run simulations, or you might just get interested after this, read a QEC paper, and want to know what it's actually saying. You have to be careful, when reading QEC papers, about the error rates they report, because those error rates depend on a lot of things: the circuitry you're using, since as we've seen different circuitry can lead to different errors; the classical algorithm you use to come up with corrections; and the error model, the type of noise applied to the circuitry, which changes how the code performs. All of those things are super important, so there's no one-size-fits-all number for the threshold of a code. Often, because it's a lot easier to simulate, people will use a simpler noise model, and you shouldn't be confused by that. So let's just go through a list of common noise models, from easier to harder. The thresholds are typically higher for the easy models and drop quickly as you include more and more detail.
The simplest noise model isn't studied as much in quantum error correction, though it's definitely popular in classical error correction, but some people do use it because it is a genuinely useful model: the erasure channel. In the erasure noise model, essentially what you do is take a qubit and completely depolarize it, or think of it as removing the qubit, projecting it out, and replacing it later on. That sounds horrible, but crucially, in this model you know which qubits it happened to. That extra information lets you do a lot: you can come up with classical decoding algorithms that compute corrections much more easily than under other noise models, simply because you know where the qubit was deleted. If this happens in an actual physical device and you have a way to herald errors when they occur, that helps you out a lot, because, as mentioned many times, the quantum information in these codes is spread across the code, and there are many equivalent logical operators. So you can actually delete parts of the code, and as long as you are able to reset those qubits and remeasure the checks, which project you back into the code space, you can recover from it, so long as you don't delete too much of the code, because no single qubit holds the information; it is spread across the system. The next, and pretty common, noise model that people study is known as the code capacity model, which basically tells you the capacity of the code in a very abstract sense. There are no circuits here; you don't model any of the details of the circuitry. You pretend as though you have a magic way of measuring these parity checks without any error.
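To see why heralded erasures are so benign, here is a minimal classical analogue on a three-bit repetition code; it's a sketch of the idea, not a quantum simulation:

```python
# Classical analogue of the erasure channel on a 3-bit repetition code:
# erased positions are heralded (known), so decoding just reads any
# surviving copy. No guessing about error locations is needed.

def decode_with_erasures(bits, erased):
    survivors = [b for i, b in enumerate(bits) if i not in erased]
    # Recovery fails only if every single copy was erased.
    return survivors[0] if survivors else None

print(decode_with_erasures([1, 1, 1], erased={0, 2}))    # -> 1
print(decode_with_erasures([0, 0, 0], erased={0, 1, 2})) # -> None
```

Because the locations are known, even erasing all but one copy is correctable; contrast this with unheralded bit flips, where two flips out of three already defeat a majority vote.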
So you just get to do that, and you study what happens with input errors, the situation we were originally talking about: given these input errors, we just measure the checks and detect whether things commute or anti-commute. That's what this level of noise model is: you choose which qubits get X, Y, or Z noise, each with probability p, then you measure the checks and get perfect measurement outcomes. The next level of difficulty is, it's a bit of a mouthful, the phenomenological model, so named because it captures what the overall phenomenon looks like. We still have these input errors, and we still have a magic way of measuring the checks, just detecting the parities, but now we add the fact that there is some probability with which each measurement outcome flips. So we have a bit-flip channel on the parity measurements. That alone makes the decoding process, the classical algorithm that computes corrections, much harder. For most codes, except those with what's called single-shot syndrome extraction, you have to measure these checks, these stabilizer generators, multiple times; effectively you are measuring a repetition code in time. You repeat the measurement and you could take a majority vote over the rounds, though there are more complex algorithms that handle the correlations and so on. So you effectively run a repetition code in time; the repetition code pops up once again. And this is purely because of the measurement flips: you can't rely on any single measurement, so you have to deal with correcting the measurement outcomes as well.
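The repetition-in-time idea can be sketched in a few lines; the flip probability q and the number of rounds here are arbitrary illustrative values:

```python
import random

# Phenomenological-model sketch for a single parity check: the true
# syndrome bit is fixed, but each readout flips with probability q, so
# we repeat the measurement and take a majority vote over the rounds.
random.seed(0)

def noisy_readout(true_bit, q):
    return true_bit ^ (random.random() < q)

def majority(bits):
    return int(sum(bits) > len(bits) / 2)

true_bit = 1  # the data error really does flip this check
rounds = [noisy_readout(true_bit, q=0.1) for _ in range(7)]
print(majority(rounds))  # -> 1: the vote recovers the true syndrome bit
```

A single noisy round would mislead the decoder 10% of the time; seven rounds with a majority vote push that failure probability down dramatically, which is exactly the repetition-code-in-time effect.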
And then we get, of course, to circuit-level modeling, where we start applying noise at the circuit level, so you start to care about what your device is actually doing, and there are various levels of detail at which people run these circuit models. Traditionally, people just used the depolarizing channel. However, we and others are starting to model more exotic noise, leakage, crosstalk, coherent noise, and so on, to capture what devices actually look like, and that can of course change the performance of codes. Even in the simpler version, generally what you do is take each of your ideal gates and apply some error channel afterwards. After initialization you might apply a bit flip or something; after one- and two-qubit unitaries you might apply a random Pauli or something else; for measurement you might apply some Pauli before it, or, roughly equivalently, do a bit flip on the measurement result, and so on. This is far more accurate. To see how much it matters: if I remember right, the color code has a code capacity threshold of around 20%, while the surface code's is around 10%, so the color code looks better there. But due to the complexity of its circuitry, once you start modeling all of that, the surface code tends to have, depending on who you read and the level of modeling they do, around a 0.8% threshold at the circuit level, so it has gone from 10% to 0.8%. And the color code comes in around 0.38%, roughly 0.4%, so half the surface code's threshold at the circuit level. It has dropped significantly, and the seemingly better-performing code now performs worse.
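One common way to build the simple circuit-level depolarizing model just described is to append a random non-identity Pauli after each ideal gate with probability p. This sketch is a toy encoding of that recipe; the gate names and circuit representation are placeholders, not any real simulator's API:

```python
import random
from itertools import product

# Sketch of circuit-level depolarizing noise: after each ideal gate,
# append a uniformly random non-identity Pauli on the same qubits with
# probability p. The circuit is just a list of (label, qubits) tuples.

def noisy_gate(circuit, gate, qubits, p):
    circuit.append((gate, qubits))
    if random.random() < p:
        paulis = ["".join(t) for t in product("IXYZ", repeat=len(qubits))]
        paulis = [s for s in paulis if set(s) != {"I"}]  # drop I..I
        circuit.append((random.choice(paulis), qubits))

circuit = []
noisy_gate(circuit, "H", (0,), p=1.0)       # p=1 so a fault always fires
noisy_gate(circuit, "CNOT", (0, 1), p=0.0)  # p=0 so it never does
print(len(circuit))  # -> 3: two ideal gates plus one injected Pauli
```

For a single qubit this draws from {X, Y, Z}; for a two-qubit gate it draws from the 15 non-identity two-qubit Paulis, which is the standard two-qubit depolarizing channel.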
There are constant advances in the classical decoding algorithms, which can potentially improve these results, so things can change in the future in terms of which codes perform well, and so on. Looks like we have seven minutes. Okay. So I mentioned before that once you start adding noisy measurements, which everything from the phenomenological model onward has, you have to deal with decoding in spacetime. A common decoder that people study is the minimum-weight perfect matching decoder. You could try to encode all of this in a lookup table, like we mentioned before: just write down the set of syndromes you might see and the corrections you would apply. However, here I'm using n for the number of syndrome bits you measure, and if you're measuring the syndromes multiple times, you really have to multiply the number of time steps by the number of syndrome bits; and the number of syndrome bits itself grows with the distance, usually like d squared, depending on the code. So this blows up really fast and you can't store it. You need an actual algorithm, not just a table in memory, that looks at the changes in the measurement results: the nodes here represent flips in the syndromes, and you have to find the string of Pauli operators that most likely caused those flips. That's what the matching represents; I won't go into all of it. Okay, six minutes; I could either leave time for questions or keep going. I'll keep going, and I'll be brief rather than covering things in depth. As I mentioned before, codes don't have a universal transversal gate set in general.
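To make the lookup-table blowup concrete, here is a back-of-the-envelope count. The formula assumes, purely as an illustration, a rotated surface code with d² data qubits and d² − 1 checks, measured for d rounds; this is why algorithmic decoders like minimum-weight perfect matching are used instead:

```python
# Size of a spacetime lookup table: one entry per possible syndrome
# history, i.e. 2^(num_checks * num_rounds). Assumes a rotated surface
# code with d^2 data qubits and d^2 - 1 stabilizer checks.

def table_entries(d, rounds):
    num_checks = d**2 - 1
    return 2 ** (num_checks * rounds)

print(table_entries(d=3, rounds=3))           # -> 16777216 (2^24): already big
print(table_entries(d=5, rounds=5) > 10**36)  # -> True: utterly infeasible
```

Even the smallest interesting case needs millions of entries, and distance 5 with five rounds already exceeds 10³⁶ syndrome histories, so storing corrections explicitly is hopeless.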
There may be some fancy ways of getting around this, and maybe some future, more exotic codes that change in time and morph and do weird things might escape it. But ignoring those sorts of codes, that's what the Eastin-Knill theorem says: no code has a set of transversal gates that is both fault tolerant and universal. We'll talk about what transversal means; essentially it's that you apply a single layer of gates. You don't have a deep circuit to apply these logical operations; you apply at most one physical operation per data qubit, maybe even fewer. So no code gives you a universal set from these simple transversal gates. That effectively means each code comes with only a discrete set of gates you can do this way. That would be a bummer, and would mean the QEC game is over and done with, if there weren't workarounds. Of course, there are ways around the Eastin-Knill and other no-go theorems. A common thing people look at is what's called magic state distillation; "magic" is an actual, proper technical term in quantum error correction. And because no one code has a universal set, you can also potentially use more than one code, letting them complement each other to fill out the set of gates you need for a universal gate set. So you can do code switching, where you use one code for certain gates and then switch the information over to another code. That's potentially complicated, and there are papers showing that magic state distillation may still have lower overheads even compared to those schemes, but it's still an active area of research. There are other approaches, like pieceable fault tolerance, that also get around it, and so on.
Let's see how much time, three minutes. Okay, transversal gates. We've already seen some transversal gates: the logical Pauli operators are just a string of Paulis applied as a depth-one circuit. We've seen that both the color code and the surface code have these transversal logical Paulis. The surface code doesn't have a transversal Hadamard, which I think Ben briefly mentioned, unless you allow qubit permutations, which I think he also mentioned. It does have a CNOT that you can do transversally, because it's a CSS code, and all CSS codes have these transversal CNOT gates. The color code is nice in that it has all these symmetries, so you actually get all the single- and two-qubit Clifford gates as transversal operations. Let's skip this Pauli stuff. Okay, the Hadamard example: we apply a Hadamard to all the data qubits; then the X checks become Z checks and the Z checks become X checks, so it switches the Pauli type. And what was our logical X operator becomes our logical Z operator, because it becomes all Z's, and vice versa. So at the logical level it switches X and Z, as a Hadamard should, so it is a logical Hadamard. However, you'll notice the code has now effectively been rotated, and we typically want the code back in its original orientation. If you're on a device with a fixed 2D geometry, you might have to do a set of operations to perform that rotation; if you're on ion traps, you get it essentially for free, because you don't really care about the labeling, you just move things between gate zones. If we do somehow rotate the code around, we get the same layout as the original, except the logical X is over on the other side. However, as I mentioned before, any string that connects the top and bottom boundaries is equivalent: you can multiply by stabilizers to move the operator over, and it still carries the same logical information.
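The check-swapping behaviour comes straight from single-qubit conjugation, which is easy to verify numerically:

```python
import numpy as np

# Why a transversal layer of Hadamards swaps the X and Z checks:
# conjugating by H maps X <-> Z on each qubit, so every X-type
# stabilizer becomes Z-type and vice versa.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

assert np.allclose(H @ X @ H, Z)  # H is its own inverse, so H X H = H X H^-1
assert np.allclose(H @ Z @ H, X)
print("H swaps X and Z under conjugation")
```

Applying this on every data qubit simultaneously conjugates each stabilizer term by term, which is exactly the X-check/Z-check swap shown on the slide.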
So we've gotten the code back, and we have indeed done the transversal Hadamard. The color code can do it without moving anything around, because its X and Z structure is symmetric, everything is symmetric, and all the boundaries support the same sort of Pauli X, Y, and Z strings, so you're all good. One minute, okay; close to one minute, but I can explain the transversal CNOT. So how do you entangle two code blocks? There are other ways to do these entangling operations, as mentioned before, but we'll focus on the transversal gate because it's a lot simpler. For all these CSS codes, as Ben mentioned, all you have to do is put a CNOT between each qubit in one block, your control block, representing one of your logical qubits, and the corresponding qubit in the other block, your target block. So, like this: we have two codes and we just string CNOTs between them. By the CNOT propagation rules, we can take that logical X string on the control block; there are CNOTs everywhere, each control touching its corresponding target qubit, but I'm just highlighting the CNOTs along the boundary; and by those rules the X gets copied down. So the logical X on the top block becomes logical X on both the top and bottom blocks, while the logical X on the bottom block remains the same; it doesn't change. So the rules we see at the physical level reappear at the logical level under this set of operations. Cool. And the same story holds at the logical level for the Z side of things, but in reverse. Now, okay, I'm over time, but I'm almost done. We also have corresponding X and Z checks on the top and bottom blocks; call them S1 and S2.
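The propagation rules invoked here can be verified directly with 4x4 matrices:

```python
import numpy as np

# CNOT propagation rules: conjugation by CNOT copies an X on the
# control onto the target, and a Z on the target onto the control,
# while an X on the target (and a Z on the control) stays put.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control is the first qubit

assert np.allclose(CNOT @ np.kron(X, I2) @ CNOT, np.kron(X, X))   # X flows down
assert np.allclose(CNOT @ np.kron(I2, Z) @ CNOT, np.kron(Z, Z))   # Z flows up
assert np.allclose(CNOT @ np.kron(I2, X) @ CNOT, np.kron(I2, X))  # X on target stays
assert np.allclose(CNOT @ np.kron(Z, I2) @ CNOT, np.kron(Z, I2))  # Z on control stays
print("CNOT propagation rules verified")
```

Since a transversal CNOT is just this conjugation applied qubit-by-qubit, the logical X string on the control block gets copied onto the corresponding qubits of the target block, which is exactly the logical-level behaviour described above.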
If we apply the CNOTs between all the qubits, then S1 gets doubled, copied onto the target block, because the X's flow down. As Ben has mentioned, we can rewrite our generators by multiplying stabilizers by each other. So if we just multiply S1 by S2, and S2 hasn't changed, because nothing flows up from the target, then we get back the original S1 (S1 prime in this case) along with S2; so we effectively recover the same original stabilizers. The same story holds for the Z checks, with everything flowing in the opposite direction. I'll share these slides, so you'll have this quick proof of why CSS codes do have a transversal operation that entangles code blocks. Sorry for going a couple of minutes over. Well, thanks for this nice final talk today. We don't have anything planned after this, so you are released for the evening. There is going to be, I think, dinner served down at the Adriatico at around seven, but you should check, in case you want to go somewhere else, like Trieste, and need to catch a bus. In any case, that was a very nice day again, day two; I think you learned a lot. And keep in mind, you're learning here from experts, so it's very special that we can get hold of them here, given all the important work they're doing. So to both you and Ben, thank you very much for this afternoon session.