Is it good? Can you hear me? Okay, thank you. So today is my last lecture, and I'm going to talk about two things. The first one: I'm going to discuss quantum Tanner codes, those modern LDPC codes that appeared with a great breakthrough a year or two ago. My goal will not be to explain the whole construction, but at least to give you a way to construct these codes. I will not prove that they are good; we can discuss the intuition if you want. The second part is going back to our fault-tolerant quantum memory based on LDPC codes. We have built all the components, and we now have even better codes, so should we just quit surface codes and move to LDPC codes? We can have a discussion about that. I have a few slides prepared to support the discussion, to emphasize the issues of LDPC codes and what they can bring. I have been very optimistic until now, describing the construction and what it brings, how it beats the surface code, but it's not fully clear that we should move right now.

Okay, so first, quantum Tanner codes. I'm going to go through some references first, to give you an idea of how this story evolved. It started with Kitaev in '97, so it was a while ago. It was the first family of codes: the surface codes, the toric code, all these topological codes. They can achieve a distance which is square root of n, like hypergraph product codes; we already had that in '97 thanks to Kitaev. And we had only one logical qubit; that's still the issue with the surface code today. We like it, it's great, it's 2D, it's local, but it has only one logical qubit. A few years later there is a geometric construction, due to Freedman, Meyer, and Luo, that gives you a little bit more than square root of n. And it's really a tiny bit more, to the point where you are not even sure you should conjecture that we can beat this square root of n. And for a long time, for almost 20 years,
we had nothing better than that. So people were maybe willing to conjecture that it's not possible to have a polynomial higher than n^{0.5 + something}, and that maybe this polylog is all we can get. We also have a negative result, the BPT bound (Bravyi, Poulin, Terhal), which tells us that we cannot get good parameters with surface codes: there is a trade-off between the amount of logical information we can put in the code and the number of errors we can correct, and it tells us in particular that d is at most of order square root of n. So we are stuck; we need a new kind of construction. We are stuck with those 2D topological codes, and that's when hypergraph product codes appear. That's Tillich and Zémor, and they achieve a constant number of logical qubits per physical qubit, so k linear in n, with the same distance as Kitaev's codes. So now we can encode a lot of information; it's a lot better. We still don't know if we can beat square root of n. There are many variants of these codes. It's a new type of construction, a very different object: as you saw, it's not based on homology of manifolds, it's based on some kind of product of graphs, a product of complexes. It was generalized by Bravyi and Hastings as a product of higher-dimensional complexes, so you can take the product of two quantum codes, or a quantum code and a classical code, or a quantum code and a manifold; you can do a bunch of products like that. And those are good in the sense that they achieve a linear distance and a linear number of logical qubits, but they are not sparse enough: their stabilizer weight is square root of n, it's not constant. The first ones to beat this square root of n in a clear way, by more than a polylog, are the fiber bundle codes from the result of Hastings, Haah, and O'Donnell.
So it's another type of product, but twisted a little to squeeze out more distance; you can ask Ryan about that. And then you are at n to the 3/5, so you are beyond the square root. It led to a bunch of new results very quickly. With other types of products, the lifted products, you get almost n, so n^{1-epsilon}, from Panteleev and Kalachev. So before the huge breakthrough there was already a significant breakthrough, not only one. And there is Breuckmann and Eberhardt, who achieve this n^{3/5} without the polylog factor. But maybe the most interesting point in this paper is that they introduced a general family of products that actually leads to linear distance. It's something we can instantiate to get the good LDPC codes, and those good LDPC codes appeared with this famous Panteleev and Kalachev result. So now they have everything: bounded weight, linear distance, linear k. There is a similar result, using similar tools, applied to locally testable codes, by Dinur, Evra, Livne, Lubotzky, and Mozes. And the underlying complex, the underlying object, is the one we are going to look at now; it's based on the same object. There is a more recent version of this construction that is maybe more accessible, by Leverrier and Zémor, so my presentation will rely on the notation of Leverrier and Zémor. Very quickly, people figured out that now that we have good LDPC codes, we need good decoders to achieve the full potential of the codes. And within one day, three papers appeared, and the three of them propose a similar decoder: a generalization of the classical expander decoder. So everyone knew it was a question we should look at. Yeah, if you look at questions that are trendy, you should be fast; that's the moral of the story. Okay, yes, so those decoders provably work.
So you correct errors up to a linear distance. I'm not saying all the other decoders are just heuristics; I'm saying that right now, at least, we don't have a decoder that both performs very close to maximum-likelihood decoding, very close to the optimal decoder, and that is provable. It took a long time classically to obtain that with BP, and I don't think we even have a candidate for that today. I'm going to discuss why later; the issue is that we cannot even check whether we have optimal decoders. And there are of course a lot of references that I didn't mention, and a lot of work in between.

Okay, so now I'm going to describe this construction. What's the input for constructing this object? It's a two-dimensional object generalizing a graph. It's called a left-right Cayley complex because we build it like a Cayley graph, from a group, and we need two subsets A and B of the group, such that each set contains the inverses of its elements. The output is a complex, and by a complex I mean vertices, edges, faces: a two-dimensional complex, a generalization of a graph where we just add faces. And how do we build it? We build the vertex set by taking two copies of the group G, so it's a bipartite graph: there are two vertex sets, G x {0} and G x {1}. Now how do we connect those vertices? By multiplication, using elements of the group. I can multiply in two ways: I can multiply by a on the left side, and each time I do a multiplication I flip the second bit, so my bit is counting the number of multiplications I do; and I can multiply by b on the right side. So B corresponds to right multiplication, A corresponds to left multiplication, and by doing that I have two types of edges.
I will call them A-type and B-type. And now I need faces; my faces will be squares. A square corresponds to starting with a group element g and multiplying by a on the left or by b on the right. So you get two edges, and when you have two edges you are almost there: you just need the fourth vertex, a.g.b, which you reach from a.g by multiplying by b on the right, and which you can also reach from g.b by multiplying by a on the left. It's probably not very clear right now, so let's look at what this complex looks like. We start with a vertex g, and a vertex is connected to two types of vertices: there are the vertices we obtain by multiplying by an element of B on the right side, so I have g.b1, g.b2, and all the elements of the set B; and I have the vertices obtained by multiplying by an element of A on the left side. Those are my two types of neighbors, and each time I take one of each, I can build a fourth vertex by doing both multiplications. So if I take a1.g and g.b1, I can go from these vertices to a1.g.b1, which is back on the left side, and these four vertices form a square. That's the complex. It's clear, right? We see well what it looks like... except that my edges here are diagonals. So maybe I should place the vertices in a different way to visualize it better. You see that those things that look like sides should not be here; they are not proper edges. My edges are those diagonal things; that is the boundary of the square. So I twisted my square. That was my first attempt. Okay, let's look in detail at the structure of this complex. Exactly, that's what I mean by the fact that I twisted my square. So I'm going to represent it in a different way now to make it clearer, because we don't see anything in this picture. Okay. So now we have three finite sets. We start with our vertices, which are two copies of G; it looks like that. Where are the edges?
Sorry, the edges are bipartite, so they are connecting those two copies. Okay, and the squares: this time I represent them in the right way. When I untwist my square, I get the 0 copy on the left side and the 1 copy on the right side; multiplication by a is horizontal, so it's going to be red, and multiplication by b is vertical. That's the way I should represent my square, right? It's clearer. So we have two types of edges: I multiply by a on the left side or by b on the right side.

Okay, let's look in more detail at the neighborhood of a vertex. I pick a vertex (g, 0) and we're going to see what surrounds this vertex. How many neighboring vertices are there? From (g, 0) I can do two things, multiply by a or multiply by b, so I get two families of neighbors, indexed by A and by B: I get cardinality of A plus cardinality of B neighbors. Okay. There is an additional condition that they use, which guarantees that these are not the same. We will need it, and it also guarantees that we have the right number of faces around each vertex; each time I claim that those vertices are distinct, it's because of this condition. Yes: each time I do a multiplication, I flip the bit, so all the neighbors of this vertex will have bit 1, and if I put a bit 1 here, all its neighbors will have bit 0. It's counting the number of times you change side. Okay, so the neighboring faces: if I start with a vertex to build a face, we said we need two vertices. We start with two edges when we want to build a square, and then we take the parallel edges, right?
So we can do that: we start with an a and a b. It gives us two vertices, two directions, and the last vertex is obtained by combining them. So the number of faces around a vertex is given by cardinality of A times cardinality of B. Again, we need this condition; it's called the total non-conjugacy (TNC) condition, and I'm going to state it later. So now we can represent this thing in an abstract way. These are not the real positions of the faces, but it's very convenient to say that this thing F(v), the set of faces incident to v, is basically a square grid: we have a bunch of faces, and each time I pick an element of A and an element of B, I have a square. It's easier to visualize than the previous structure where everything is folded, so we're going to use that.

Okay, now how do these square grids intersect? If I take two vertices, I want to know the intersection of their sets of incident faces, because I will have to check commutation relations later, and that depends on the way they intersect. So I'm going to take two vertices, and to share a face, one must be of type 0 and one of type 1. So what is the intersection of those two sets of faces? First, if the intersection is non-empty, then they must share an edge. Is that obvious to you? If those two vertices share a face, then they must share an edge. It's not always true on a square, right? You can have two vertices in the same square that don't share an edge, if they are on opposite corners. But opposite corners have the same type, and here our vertices have type 0 and type 1, so they share an edge. Now there are two cases:
either it's an A-type edge or a B-type edge. Let's consider the A-type; it's going to be the same for the other one. For the A-type, we have two vertices of the form (g, 0) and (a.g, 1), like those two, horizontally adjacent. All the faces that contain both vertices contain a.g and g, so we take an extra b and we get a new face: for each b we get a face that has this horizontal edge and a new vertical edge. So we just need to pick b, that is, a vertical edge, and we get a square. So what does this set of faces look like in my previous grid picture? Yeah, okay, great: it's a column; it should be a column. Okay, and what about the other set, if two vertices share a B-type edge? A row, exactly. Okay, so now we have our intersections. We want to make sure that when we define a quantum code, where we place qubits on faces, the stabilizer generators we define commute on those rows and columns.
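To make this concrete, here is a small Python sketch of a left-right Cayley complex and its face grids. This is my own toy example, not one from the lecture: the abelian group Z_12 with A = {+1, -1} and B = {+3, -3} (for an abelian group the left and right multiplications are just additions, so the TNC condition is not satisfied, but the incidence structure is still illustrative).

```python
n = 12                       # group G = Z_12 (written additively)
A = [1, n - 1]               # A = {+1, -1}, closed under inverses
B = [3, n - 3]               # B = {+3, -3}, closed under inverses

# Vertices: two copies of G; the second component is the bit that flips
# at every multiplication.
vertices = [(g, i) for g in range(n) for i in (0, 1)]

# Each square {g, a+g, g+b, a+g+b} is stored as the frozenset of its 4 vertices,
# which automatically identifies the two ways of describing the same face.
faces = {frozenset({(g, 0), ((g + a) % n, 1), ((g + b) % n, 1), ((g + a + b) % n, 0)})
         for g in range(n) for a in A for b in B}

def F(v):
    """Set of faces incident to vertex v."""
    return {f for f in faces if v in f}

v0 = (0, 0)
assert len(F(v0)) == len(A) * len(B)          # F(v) is an |A| x |B| grid of faces

# Neighbours across an a-type edge share a full "column" of |B| faces;
# neighbours across a b-type edge share a "row" of |A| faces.
assert len(F(v0) & F((1, 1))) == len(B)       # (1, 1) is the a-neighbour of v0 via a = +1
assert len(F(v0) & F((3, 1))) == len(A)       # (3, 1) is the b-neighbour of v0 via b = +3
```

For this example the intersection patterns come out exactly as described: a column of |B| faces for an a-type edge and a row of |A| faces for a b-type edge.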
So we are down to checking commutation on those subsets So we need to define a stabilizer that satisfies Some some constraints some commutation relation on these sets And to do that we are gonna we're gonna use a family of classical codes here It's classical and it's a tensor product of two codes a tensor code We take two classical codes and the tensor code is obtained by taking all the bit string That satisfy that we put in in form of a matrix So instead of this long big string you you just cut it make it a matrix It has n1 times n2 bits and we want each column to be in C1 and each row to be in C2 So I can give you an example if you take your first code to be Your second code sorry to be the the hamming code This is a codeword for the hamming code and the first code to be the repetition code Each column is clearly in the repetition code each row Is a vector from the hamming code So it looks like that and we can sum all of them to generate new vectors And the dimension so yeah, this is in the repetition code and the dimension of this code is k1 times k2 So these these objects that are very symmetrical are the generators are the Yeah, it's a generator of the space of the code space Okay, now we have all the ingredients we need to define to define those codes The the TNC condition is here So we need we need this condition And to define those codes are on those squares. We're gonna start with a group We take two subset A and B and we build the complex we built before and We take two codes two classical codes see A and CB and Can you guess what we are going to do with these codes? Sorry Yeah, my previous slide was about the right thing, right? So we have a square with That looks like a place where we could put a tensor code. 
So we're going to take the tensor product of those two codes; that's why they are chosen with the right lengths. And we're going to place a qubit on each face, on each face of the entire complex. You can look at this as a neighborhood of a vertex, but there are many of those neighborhoods and they overlap; globally it doesn't really look like a square, it looks like a very complex mess of squares. But we can group them and look at a few of them together. And how do we place a generator? We take a vertex v, we take a codeword in this tensor code, and we define a generator by placing the codeword in this grid, replacing all the ones by X. We get a local generator: this grid has a bounded number of qubits, so we get local generators, and we get one generator for each generator of the tensor code; so we get many generators in those small matrices. The point is that it's local: if the sizes of A and B are small, it's going to be a small generator. And we do the same thing for Z, but with the orthogonal complements of C_A and C_B: we start with a vertex, we take the neighborhood of the vertex, the set of qubits incident to this vertex, we take a codeword of the tensor product of the dual codes, and we replace the ones by Z in this grid. The only difference is that instead of C_A we have C_A orthogonal. And you know why: if two generators sit at vertices whose neighborhoods overlap, then the generators will overlap, and we want to make sure they commute. When they overlap in a row, in X we have a vector of C_A.
In Z we have a vector of C_A orthogonal, by this definition, so the X and Z parts overlap on an even number of qubits and commute. And this is the construction. So the construction is not that complicated; it's possible to implement it, and you can run some simulations and look for these codes. The issue is that in this paper there are no examples, so I wanted to show something simpler than that. Yesterday it was a bit too late, but I tried to look for the toric code, and it was a bad idea. Still, I'm going to start with a few examples, everything I tried to build to understand this construction better.

So first, let's start with examples of Cayley graphs. Cayley graphs are those graphs we build from a group by using the multiplication: edges correspond to multiplying by an element of a set A. So if we take the simplest group you can think of, Z, the integers under addition, and we take {+1, -1}, which contains the inverses of its elements (+1 is the inverse of -1), what do we get? What do you think the Cayley graph is? How many vertices do we have? Infinitely many; okay, that was a bad question. What do those vertices look like? Let's place them like we would place numbers: 0, 1, 2, 3, 4, 5 and so on, and it also goes to -1, -2. How do I connect them in my Cayley graph? The connections are given by this set. Yeah, okay, it's a line; thank you. Okay, so now you understand Cayley graphs, right? Let's try a few more. If I take Z squared this time, and my set is {(+1, 0), (-1, 0), (0, +1), (0, -1)}, do you know what it looks like? A grid; agreed, okay, cool. This one is the same, but I added one more vector to my generating set, (1, 1) and (-1, -1), so I add diagonal edges. So this set does not have to be independent; it doesn't have to be a basis. What if I take Z mod 8 and the set {+3, -3}? A cycle. How do you know it's a cycle? Okay: if you keep adding 3, you're going to come back after going through all the elements.
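That Z mod 8 example is easy to verify in code (a small sketch of the Cayley graph definition above):

```python
n = 8
S = [3, n - 3]                       # S = {+3, -3}: contains its inverses
# Cayley graph of Z_8: connect g to g + s for each s in S
adj = {g: {(g + s) % n for s in S} for g in range(n)}
assert all(len(nbrs) == 2 for nbrs in adj.values())   # every vertex has degree |S|

# Repeatedly adding 3 visits all 8 group elements before returning to 0,
# so the graph is a single 8-cycle.
walk, g = [], 0
while g not in walk:
    walk.append(g)
    g = (g + 3) % n
assert len(walk) == n and g == 0
```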
So it's a cycle. Okay. Do you understand why we need A = A^{-1}? When I go from here to here, the graph is undirected: I can go from this vertex to that one, or from that one back to this one, by multiplying by the inverse. So I need the inverses; otherwise I would need to make the graph directed.

Okay, now we're going to take two copies of a graph; it's called a double cover. It's a graph with two sets of vertices, G x {0} and G x {1}, and we have two sets of edges: each time there is a bit that flips, like in our previous case. Either we multiply by a from a 0-vertex, or we multiply by a from a 1-vertex, so the two types are kind of the same; they are all given by multiplication by A. What does it look like if I take Z, and my generating set is not {+1, -1} anymore but {+2, -2}? First, what is my vertex set? We have G x {0} and another copy, and it's infinite; I only had space for a finite number of vertices here. So where do we put the edges? Exactly, I see you doing the right thing: if we were not in the double cover, we would do plus or minus two and stay on a line. Instead, we do the same thing, but each time we change copy, so we go up, down, up, down. That's why it's called a double cover; it's a cover in the topological sense. So it looks like that.

Okay, now let's do our first example of a left-right Cayley complex. The first one I tried to draw, I did like that: Z mod 6 on the left and on the right, connected with plus or minus two, and the square looks like four edges. But I could not see much with that, so I tried something else.
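Since Z itself is infinite, here is the same double-cover idea on a finite example of my own (Z_5 with A = {+1, -1}): the double cover of a 5-cycle closes up into a single 10-cycle.

```python
n, A = 5, [1, 4]                     # Z_5 with A = {+1, -1}
# Double cover: two copies of the group, and every edge flips the copy bit.
edges = {frozenset({(g, 0), ((g + a) % n, 1)}) for g in range(n) for a in A}
assert len(edges) == n * len(A)      # 10 edges; every vertex has degree 2

# Follow the unique non-backtracking walk from (0, 0): it closes after 2n steps,
# so the double cover of the 5-cycle is one 10-cycle.
prev, cur, steps = (0, 0), (1, 1), 1
while cur != (0, 0):
    nbrs = [w for e in edges if cur in e for w in e if w != cur]
    cur, prev = next(w for w in nbrs if w != prev), cur
    steps += 1
assert steps == 2 * n
```

(For an even-order cyclic group the double cover would instead split into two cycles; the odd order here keeps it connected.)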
I wanted to recover the three-by-three toric code. For that I need nine vertices of each type, and with two copies I'm going to have 18 vertices: vertices for X and vertices for Z. So my first idea was to try Z_9, and because it was late, it was not a good idea. I'm going to draw my edges in two colors. I start in the group Z_9, and I choose arbitrarily to do plus one and minus one in red, corresponding to the set A; the B-type edges will be in blue, and I do plus or minus two in blue. I wanted two different sets of generators, because otherwise you just get multiple copies of the same edge; that's why I picked a different one and not the same one. And those two are coprime with nine, so they should behave well. So now let's try to draw it. We start from the vertex (0, 0). We can add one or subtract one from the first coordinate, and the second bit flips, so the second bit is 1 for all the neighbors, and the first coordinate changes by plus or minus one or plus or minus two. And I kept adding neighbors until I saw the same vertices again, until there were no more new vertices, and I wanted to know what it looks like. It was pretty good: it looks like a square grid; it looks like it could be a torus. You see that I stopped here because the vertices start reappearing: the (6, 1) is here too, the (0, 1) is here, so I don't get any new vertices; I'm done. Some of them are duplicated, so we need to identify them, and our hope is that identifying them gives a torus. First, if we look at this side, (6,1), (5,1), (4,1), (3,1), it's exactly the same as the other side, so those two sides are identified. So far it's pretty good; it looks like a torus. But now let's look at the other side, and there I was confused: (3, 1) should be here, but it's there, and (0, 1) should be here, but it's there. So it's not properly done, right?
Do you know what this object is? No, it's not a Klein bottle. I think I heard the answer: it is a torus, but it's not the one I'm looking for. It's a torus where first the blue sides are identified, so you get a cylinder, and then what you want to do is glue the opposite ends of the cylinder. But here, when we do it, we glue this one here, this one here, this one here, which is there: we glue the opposite ends with a shift. We have a cylinder, and when we are about to glue it closed, we turn and then glue. We twisted our cylinder a little. It's almost there, but it's not the one I wanted.

Okay, so let's try a better example. We are looking for the toric code, so of course it should be Z_n x Z_n. So let's try Z_3 x Z_3. The generators are easier to pick: I don't need to think about whether to pick something coprime or not; I just pick one direction and the other. Same graphical representation, but this time we have three coordinates, the last bit being the flipping bit, 0 or 1. When we start from (0, 0, 0), we add one or subtract one from the first coordinate, mod three, and same thing vertically: plus or minus one on the second coordinate, mod three. And when we draw the graph, we get this, and this time it works: it's a torus. This side is that side; we are back to our torus. Now that we have the torus, it should be easy to build the code, right? So let's try. It's still the same example; this is the second part. Now let's apply our construction; it's summarized here, the same list as before. We place qubits on faces: those squares really are proper faces in our previous definition. And for each vertex we take a codeword of the tensor code, and we define a generator on F(v).
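As a sanity check, added here, that this Z_3 x Z_3 left-right complex really has the topology of a torus, we can count vertices, edges, and faces and compute the Euler characteristic V - E + F, which should be 0:

```python
from itertools import product

G = list(product(range(3), repeat=2))            # group Z_3 x Z_3
add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
A = [(1, 0), (2, 0)]                             # +/- the first generator
B = [(0, 1), (0, 2)]                             # +/- the second generator

V = [(g, i) for g in G for i in (0, 1)]
# a-type and b-type edges; each edge flips the copy bit
E = {frozenset({(g, 0), (add(a, g), 1)}) for g in G for a in A} | \
    {frozenset({(g, 0), (add(g, b), 1)}) for g in G for b in B}
# squares {g, a.g, g.b, a.g.b}, deduplicated as vertex sets
F = {frozenset({(g, 0), (add(a, g), 1), (add(g, b), 1), (add(add(a, g), b), 0)})
     for g in G for a in A for b in B}

assert (len(V), len(E), len(F)) == (18, 36, 18)
assert len(V) - len(E) + len(F) == 0             # Euler characteristic of a torus
```

By itself, an Euler characteristic of 0 would also allow a Klein bottle, but the gluing we found in the picture is orientation-preserving, so it is the torus.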
So let's first look at what F(v) looks like. We still haven't defined those codes, but we need to define them on F(v). We take two types of stabilizers: I put in white the vertices with a 1 as the last bit, and the black vertices are the zeros. My X generators go on the black vertices, my Z generators on the white ones. Let's do just the X generators. F(v) is A x B; do you know what it looks like? Based on the sizes of A and B, can you tell me how many squares there are? A has two elements, B has two elements, so we have a two-by-two grid. Now we want to put a tensor code on it, that means codewords horizontally and vertically, so we need codes C_A and C_B with length two. Do you have any idea what I found? Yes, the repetition code; there are not that many codes with this length. So let's pick this code: there are two bit strings, 00 and 11. So our codewords for the tensor code will be the all-zero square or the all-one square. And now it starts to look good, because to build the X generator we replace the ones by X, so we get X X X X, and we place that on the black vertices. This is one of the generators of our quantum Tanner code. For the white vertices we do the same thing: we end up with the same square, and we put the dual of this code. The dual of this code is itself; the vector 11 is orthogonal to itself. So we get the same thing, but with Z, and when you do that, they commute. So we are back to our toric code; this is exactly the toric code. I found the most complicated way to build the toric code. But the construction does better than that. I'm not going to go through drawing the case of the good LDPC codes, because it's based on the group PSL(2, q^i), which is a bit more annoying to draw, and you pick two random codes; that's how it's built. These groups come in because they are good expanders. Yeah, yes; a question: do C_A and C_A orthogonal need to have the same size?
Do you mean code length or code dimension? So the number of bits of the code is the same, yes. By C_A perp I mean the set of bit strings that are orthogonal to every bit string of C_A. I am struggling to hear you, I'm sorry; the elements of C_A perp? Yes: the code length of C_A orthogonal is the same as the code length of C_A; they are bit strings of the same length. The vectors here and here will both have length cardinality of A, so it's always going to fit. You mean the grids must be square, that we must have size of A equal to size of B? They do use the condition |A| = |B|; this is the case here, and I don't know if it's necessary all the time. I don't see where it's necessary right now. And yes, k can be different between the two codes, but n is the same.

About the group: it's the special linear group, projective, so you quotient it by the multiples of the identity, over the finite field with q^i elements, where q is a power of a prime. Yeah, you take a random code of length |A|, and they adjust the dimension of the random code as a function of the expansion properties and the degree of this graph. What are A and B here? I didn't write it down, but there is a set of generators of bounded size; it's specific to this construction. And the fact that it's specific matters, because it makes it hard to use: that means you have a lot of constraints on the construction of the code itself; you cannot take any group you want. But the idea is fairly simple: you take this graph, which has expansion, and two random codes; their tensor product will be robust, and expansion will spread the robustness to the entire code. That's the basic idea, and that's how they get good quantum LDPC codes.

So, should we replace our codes by these codes? I want to say yes, I would like to, but even in the paper
there are no examples, and it has been two years; I'm not aware of any family that outperforms hypergraph product codes, for instance. Yes, as you grow the family? So that's another issue: the family is fairly sparse. We also have this issue with hypergraph product codes, where the family grows quadratically; here it's even worse, it may be exponential, I'm not fully sure. It's not a very dense family: if you say, I want a code with length 1,000, or between 1,000 and 1,100, it's not clear that you are going to find one in this family. Yeah, that's true, that's a good point: you could do some operations on it, combine it with other codes, and hopefully not degrade the performance. For the surface code, the smallest instance in some sense is the four-qubit code; in this family, I don't know what the smallest code is, but from what I understand there is no known small example with a good distance. One issue is that computing the distance is NP-hard, so even just verifying that the distance is large is going to take you a lot of time. There are maybe other groups, but based on the classical literature, it's not that simple to build expander codes that are optimal, to build Ramanujan graphs; it's not very flexible. So I'm skeptical that we'll find a lot of examples. But we may find examples from this construction that are maybe not good quantum LDPC codes asymptotically, but that are better than what we have right now. Yes; I think they have the same issue as hypergraph product codes, maybe even worse: the depth of the syndrome extraction circuit may be even larger. But we already have this issue with any random Tanner graph; any expander code will have this kind of issue. So if we want to build LDPC codes, we will need some kind of long-range connection.
Maybe. I mean, I'm just claiming that I haven't seen papers giving me those codes, codes that I could use; they are not that easy to find. I'm sure we'll find some; maybe it's just hard to check that they are good, and maybe they are already there. But we need to understand these codes a lot more. There is a lot more work to do, and building explicit families is still needed: when we take a random code, it gives us asymptotic properties, but how far do we need to go in block length? It's not necessarily clear. Yes? Not around a hundred, but more like two to the one hundred? Okay, that's bad. That seems very bad. But it's going to get better; it can only get better.

Okay, so now I want to conclude with more of a high-level discussion, getting back to the scheme we tested numerically. We have implemented codes, decoders, the syndrome extraction circuit, and a proposal to implement it if someone gives us hardware over the next few years, maybe the next decade, let's say. So we have all the components. But let's talk about what may break down, what we don't have, what we haven't done. So, I showed you those results, and there is something I didn't insist on in this table. I told you we have a 15x improvement in the number of qubits, but to get this 15x improvement, you need a hypergraph product code that packs 906,000 physical qubits in the same block. So now you have one block with roughly 900,000 qubits; it's pretty big, and that will have consequences if you want to build it.

Okay, another thing: I claimed at the beginning that I would tell you how to build a fault-tolerant quantum memory, and I didn't talk about fault-tolerance at all. So, is it fault-tolerant? In the simulation we assume that everything is noisy,
So, in some sense, we proved numerically that the circuit is fault-tolerant. Is that a good notion of fault-tolerance? I don't know, but it's good enough to beat the surface code.

So let's look at this circuit. Do you know what this circuit is doing? Maybe I have a specific notation: controlled-X for a CNOT, then controlled-Z, controlled-Z, controlled-X. And this XZZX, you have seen it: it's the first stabilizer generator of the five-qubit code. So it's measuring a stabilizer generator, and the circuits we use to measure in our protocol look like that: one ancilla qubit, a bunch of controlled gates, and a measurement. Is that a good circuit? Is it possible that there is a bad fault in this circuit? My issue is what happens if there is an X on the ancilla qubit. Then it's going to spread: this X will be copied as a Z onto one data qubit and as an X onto another. So I start with one fault and I end up with two faults in my code, and I cannot correct two faults; I will be in trouble. I cannot correct two faults with the five-qubit code.

So can we avoid that? We have an answer for that; Shor gave us the answer, and it uses cat states. If we can build this cat state, an encoded version of this ancilla, of this plus state, then instead of using one ancilla we can have the controlled gates acting on different ancilla qubits, and a fault will propagate to at most a single data qubit. We don't have the same issue anymore. This is fault-tolerant. It's better, but we didn't use that. We didn't use that in our scheme because it has a cost: it costs some extra qubits. And even though this single-ancilla circuit is going to spread errors, we use very large codes with very large distance.
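The propagation argument can be checked with a toy sketch (my own illustration, not code from the talk). It just applies the standard commutation rules: an X on the control of a CNOT adds an X on the target, and an X on the control of a CZ adds a Z on the target. The ancilla is the control throughout.

```python
# Track where a single X fault on the ancilla ends up when it occurs
# partway through the XZZX stabilizer-measurement circuit of the
# five-qubit code. Gates are (type, data_qubit) with the ancilla as control.

def propagate_ancilla_x(gates, fault_after):
    """Pauli errors left on the data qubits when an X fault hits the
    ancilla just after gate index `fault_after`."""
    errors = {}
    for i, (gate, qubit) in enumerate(gates):
        if i <= fault_after:
            continue  # the fault has not happened yet at this gate
        # X on the CNOT control copies an X; X through a CZ leaves a Z.
        errors[qubit] = 'X' if gate == 'CX' else 'Z'
    return errors

# XZZX generator: controlled-X, controlled-Z, controlled-Z, controlled-X.
xzzx = [('CX', 0), ('CZ', 1), ('CZ', 2), ('CX', 3)]

# A fault just after the second gate leaves TWO data errors:
print(propagate_ancilla_x(xzzx, fault_after=1))  # {2: 'Z', 3: 'X'}
```

Two residual errors exceed what the distance-3 five-qubit code can correct, which is exactly the problem; with Shor's cat state, each controlled gate touches a different ancilla qubit, so the same fault reaches at most one data qubit.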
So the effect is reducing the distance. It's not fault-tolerant in the sense that our distance is not optimal; the circuit is degrading the distance. But we use fewer qubits, so it's a trade-off that can still be advantageous in practice, in numerical simulations at least.

Then the second thing I want to emphasize: we use BP to correct errors, and I told you BP corrects a single error, a single qubit, by looking at the neighborhood; it's not taking a global decision. Do you think it could be an issue? Let's say I send you some data, I send you a picture, and I correct each pixel independently: I correct one pixel at a time, and for each pixel I take a decision independently of the others. So we can make it worse if the noise is too high, but let's say the noise is low enough. Yes? So that's true, there is correlated noise; I'm going to mention that later. But what I want to say is something slightly different: BP returns something that corrects some of the qubits, but not all of them. Some of the qubits will not be corrected. But it's not an issue, because we can never correct all the qubits anyway.
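The "local decision" point can be illustrated with a bit-flipping decoder, Gallager's simpler cousin of BP (a toy sketch of mine on a toy parity-check matrix, not the decoder or the code from the talk): each bit looks only at its own checks and flips if a strict majority of them are unsatisfied, with no global optimization anywhere.

```python
import numpy as np

def bit_flip_decode(H, syndrome, rounds=20):
    """Find an error estimate e with H @ e = syndrome (mod 2)."""
    n = H.shape[1]
    e = np.zeros(n, dtype=int)
    degree = H.sum(axis=0)                # number of checks per bit
    for _ in range(rounds):
        unsat = (H @ e + syndrome) % 2    # currently unsatisfied checks
        if not unsat.any():
            break                         # syndrome explained
        # each bit looks ONLY at its own checks: a purely local decision
        votes = H.T @ unsat
        flip = votes * 2 > degree         # strict majority unsatisfied
        if not flip.any():
            break                         # stuck: decoder failure
        e[flip] ^= 1
    return e

# Toy cycle code of length 7: check j compares bits j and j+1 (mod 7).
n = 7
H = np.zeros((n, n), dtype=int)
for j in range(n):
    H[j, j] = H[j, (j + 1) % n] = 1

true_error = np.zeros(n, dtype=int)
true_error[3] = 1
syndrome = H @ true_error % 2
print(bit_flip_decode(H, syndrome))  # recovers the error on bit 3
```

Because every decision is local, a bad syndrome bit can only mislead the few qubits in its neighborhood, which is the robustness property being claimed here.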
There is always noise in the fault-tolerant regime: as soon as we wait for one cycle, new errors occur. So the fact that BP does not correct all the qubits globally is maybe a good thing, because it makes it fault-tolerant, it makes it robust to incorrect information. And that's why what seems like a bad idea, using the classical BP (because I told you it doesn't work well), has a virtue: it's very local, which means it doesn't spread errors too much. I think that's why this decoder does work well in these simulations.

The last thing I want to say is that we use noisy data, and that can be an issue as well: the syndrome we use is incorrect. But again, because we use BP, an incorrect syndrome is going to affect only a small region, a small number of qubits, and not too many of them. So BP is helping us to fight this noise.

What I want to emphasize with all this is that fault-tolerance in practice is not necessarily the same as fault-tolerance in the theoretical world, where we want to achieve the full distance. It's a good goal in theory; in practice we cannot necessarily do that. We don't even do that classically, and we cannot even check that we achieve the full distance, because it's NP-hard to check. So that's what I want to emphasize, and that's why I didn't really talk about fault-tolerance. It's practically fault-tolerant in the sense that it does beat the surface code, it's cheaper, but it's not fault-tolerant in the strict sense. And the good news is that this also means we can improve both the syndrome extraction circuit and the decoder in future work, and increase the gap with surface codes.

So maybe I'm going to conclude with what we can improve, and the list is pretty much everything; that's what could make us optimistic. We can improve the code: we can replace it by linear-distance codes, we can reduce the code lengths.
We have 900,000 qubits for the biggest one. We can build a denser family: our family grows quadratically with the parameter, the length is 25s², and we would like something maybe linear. We can improve the circuit, we just discussed that. We can improve the decoder, to achieve a better logical error rate, to have linear complexity, and to do hardware optimization in the long term; decoders are highly optimized classically.

And we can also improve the simulation. Even our estimate of the logical error rate is an issue: to estimate the logical error rate, we need to know if the decoder succeeded, so we need a perfect decoder to tell us if the quantum information is still there. And this perfect decoder does not exist; we don't have it. So we use an approximate decoder, which means that we add some extra noise: our estimation is very pessimistic. And we should simulate a longer lifetime for our circuits.

Okay. There is a lot to do for quantum computation, and I will not go through everything, but a lot of things could go wrong with this approach: correlated noise, non-Pauli noise. We need the hardware; we need the qubits to be at 10⁻⁴. We need smaller blocks. The decoder can be too slow. Fault-tolerant operations are not clear. We don't know how to build the hardware for those long-range connections; as far as I know, there is nothing close to that, and we need many of them, and we need them to be insulated within planar layers. So there are a lot of things that could go wrong. I'm going to conclude here, and I'm going to leave you with the other cards we could consider. Thank you.