Okay, so for the second lecture, I'm going to start with what we had at the end — our examples from the end. I'm going to modify one of them a little bit, and I'm going to discuss hypergraph product codes again, and after that we are going to move to syndrome extraction circuits. So it's going to be less technical; I don't need all these definitions we used for stabilizer codes anymore. It's going to be more graph theory — and pretty graphs, so you don't have to think too much, you can just look at pretty pictures. And again, let me know if something is not clear, or feel free to jump in if you have any question. At the end I'm going to show you what kind of performance we can get with LDPC codes, and I'm going to try to convince you that we can beat surface codes under some conditions.

Okay, so first: we saw the Hamming code last time. This is a classical code with seven bits and three linear checks, so it encodes four logical bits, and it has minimum distance three — it can correct one bit flip. To correct a bit flip, we measure the syndrome, that is, the value of these checks. We get a value zero or one, and when we have a non-trivial value, it tells us that there was a bit flip somewhere, and we can correct it. What we measure in each syndrome bit is the parity of the incident bits — the four incident bits.

In the quantum setting we can draw a similar diagram — so this is a Tanner graph — except that instead of one type of check we have two: checks that detect X errors, and checks that detect Z errors. So I'm going to put in red the checks that detect the red X error. And what we measure here is not exactly a parity anymore. What is this bit? How do we get it? A single measurement? A syndrome measurement? Yes — and what kind of operation is that? It's not a classical parity anymore.
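As an aside, the classical side of this — syndrome bits as parities of incident bits — can be sketched concretely. This is my own minimal example, not from the slides, using the standard [7,4,3] Hamming parity-check matrix whose columns are the binary representations of 1 through 7, so the syndrome of a single flip reads off the flipped position:

```python
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code: column i (1-indexed)
# is the binary representation of i, so the syndrome of a single bit
# flip spells out the flipped position directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    # each syndrome bit is the parity of the bits incident to that check
    return H @ word % 2

codeword = np.zeros(7, dtype=int)          # the all-zeros codeword
received = codeword.copy()
received[4] = 1                            # channel flips bit 5 (index 4)

s = syndrome(received)                     # non-trivial syndrome: [1, 0, 1]
flipped = int("".join(map(str, s)), 2)     # binary 101 = position 5
received[flipped - 1] ^= 1                 # correct the flip
```

A reliable classical machine just computes these parities; the whole point of the next part is that the quantum analogue of this computation is itself a noisy circuit.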
What I mean is that it's a measurement of a Pauli operator. So it's a quantum measurement, and it matters, because here we need to know how to do that in the quantum setting: we must implement a quantum circuit to correct errors. And when we have a Z error, we have to do the same thing. So this one — do you know what this is measuring? It's detecting Z errors, yes. So it's measuring X on the incident qubits: X1 X3 X5 X7.

So I'm going to use this graph to represent a code. I don't need to give you a list of stabilizer generators; I just need to give you this graph. You have two types of generators, for X and for Z, and we can get a syndrome out of these quantum measurements and correct errors.

Now, the way we built codes last time — I think something that was maybe a bit confusing for some of you is that we started with a cellulation, and then we used Kitaev's construction to build a code, to define qubits and stabilizer generators. But this cellulation is just an abstract object; it's not the real world. The qubits are not really placed in a three-dimensional lattice — we can place them wherever we want once we have this cellulation, and today we are going to move them and place them in the right way to be able to implement this code, to implement these measurements. So from this definition of the code — the qubits and the stabilizer generators — we can build the Tanner graph, and we are going to work with the Tanner graph only; we can forget about the initial manifold, the initial cellulation. In some cases it is useful, especially when it's something planar, 2D — that means the code will be naturally planar — but in some cases it's just an abstract object. And another type of abstract object we use is the product of two graphs: hypergraph product codes.
We take two Tanner graphs, we take their Cartesian product, and based on that we can define stabilizers and qubits — we can build another type of Tanner graph. But this initial structure is just a combinatorial object for the definition.

Okay, so now I'm going to go from the toric code — yes, exactly, so it's the graph with qubits and checks, and there are Z checks and X checks. We saw the toric code last time, and we saw that given a cellulation we can place qubits on edges and stabilizer generators on faces and vertices, and we always get the commutation relations we need between those generators, so we can build a code. The issue with the toric code is that it's hard to implement. I don't have a torus in my lab — usually I don't have a lab, but if I had a lab...

So instead of that we want a planar version, and this is the surface code. This is the most popular code today; this is the code that many, many companies implemented, that Google implemented recently. And it's great because it's planar — it's very easy to implement because it's planar. The construction is exactly the same as Kitaev's original construction, except that instead of a torus you work in a region of the plane, so you have no identification of opposite sides. You rotate the lattice to save a few qubits — it's a bit cheaper in terms of number of qubits if you rotate it, but it's just a technical detail — and you remove the corners, also for technical reasons.

So once you have that, you can define a code like last time. You have your lattice, and you place qubits on edges, so the white circles are qubits. How many qubits do we have here? 25 — so it's going to be a distance-5 surface code, and it has 25 qubits. The one you had in the problem set had 50 qubits, so we save a factor of 2; that's why we rotate. And we define an X stabilizer generator on each vertex, so for each vertex.
I have an X stabilizer generator — okay, I picked the wrong one — and we define a Z stabilizer generator for each face, and they commute. We can see it immediately: if we place a Z stabilizer generator on this face, it's going to anticommute twice with the X — they overlap on two qubits — so they always commute, and they define a code. There are some faces that are cut by the boundary; they define a Z check with weight 2. So it's almost like the toric code, but it's planar: it's easier to implement, and it's a bit cheaper.

Which one? So for instance, if I put a Z check here, it's Z Z, so it's going to commute with my X, right? Which X? Above? Oh — yeah, I should have removed those vertices, sorry. Those are not here; those would not commute otherwise. So yeah, I should remove the vertices on the whole boundary, just like this one — I removed only the corner; I should have removed those too. Yeah, thank you.

And the Tanner graph of this code looks like that. It's a square grid — a square grid of qubits, but with alternating X and Z checks inside. Sometimes we represent it like that, but I'm going to use the Tanner graph version.

Okay, and now: hypergraph product codes. I'm going to go through the example we saw at the end yesterday very quickly, and then I'm going to tell you how we can generate a large family of hypergraph product codes, and how we generated a family that does beat surface codes. So we start with two Tanner graphs, two bipartite graphs, and first we define qubits. We define qubits on the product of a circle and a circle — so here there are nine qubits for now — and we define qubits on square × square pairs.
So I take the Cartesian product of my squares with the other squares. And then I define a check for each square × circle vertex, so I have an X check. To understand: it's connected to some qubits vertically and some qubits horizontally. The qubits connected to this check vertically are obtained from the vertical edges — by translating the vertical edges. So here I start with this vertex; I look at the vertical edges incident to this check. There are two of them, connecting to the first and second qubit on top, so I connect to the first and second and I put an X. And I do the same thing horizontally, using the horizontal Tanner graph. Is that clear?

And we do the same thing with the Z checks, but with the other type of node, circle × square: we connect with horizontal qubits using the horizontal Tanner graph, and with vertical qubits using the vertical edges. We can guarantee that they commute, so they form a code, and this code is better than Kitaev's codes, than the surface code. Do you know why?

Oh, yes — thank you, this edge is missing. Oh yeah, I didn't take the right graph here; I modified my graphs, and yes, that's right. So horizontally — okay, let's restart, this node is incorrect. So horizontally I have three edges incident to this square, so I put the three edges. Oh, no, no, no, sorry — my check is here. My check is here. So from this one, I look at this vertex; I have three edges incident, so I have Z Z Z. This part is correct, right? And from this vertex, vertically, it's connected to one of these two qubits, and to know which one I look at this graph. There is only one edge, so I put the same edge here.
I translate it, and I get a Z. Yes, so there is a copy of this graph for each row. Okay, thank you.

Why the square × square qubits? We need them for this construction — yeah, it's part of the construction. If you don't do that, you end up with a classical code — a product code, which is a standard family of classical codes. The way to obtain the commutation relations was to use these two types of qubits. You see that if you remove one of them, the stabilizer generators do not commute anymore: if I remove that, those two will overlap here on a single vertex, on a single qubit, so they don't commute. I need the two parts to have the commutation relations.

So, a CSS code means that there are two types of stabilizer generators, with only X operators or only Z operators. So this is a CSS code. Yes?

I don't think it corresponds to a standard family of classical codes — yeah, it's not as useful classically. So the point is that good quantum LDPC codes will not be useful classically. And it's even worse than that: they are terrible classically. Why? Because the X and the Z parts commute. So let's say you find an application somewhere where they were using the Z part as a classical code. But you know that this weight-three X operator commutes with the Z part — that means it's a codeword, so the minimum distance of your Z code, as a classical code, is three, because it has something of low weight in its dual. It's a poor classical code. And that's why, each time I show you CSS codes that are useful in the quantum setting, they will be new: we need new constructions, because the classical ones cannot be used. So yeah, that's a fair question.

Yes — do they correspond to chain complexes? They do: they correspond to taking the product of two 1-complexes, and those two 1-complexes are each defined by a hypergraph, which is the Tanner graph. I didn't want to talk about homology, so I will not explain.

Yes — oh, this? Yeah, it's my input: you can give me any pair of graphs.
From any pair, I'm going to give you a quantum code. Exactly — one vertical graph here, one horizontal graph here. Those are the two inputs, and from them I will build a quantum code. Great, thank you for all the questions.

Now I have a question for you: what is n in this case? 11 — great. How do you know that? Perfect.

What is the first stabilizer generator? So I need to tell you a little bit more: I need to tell you how I will label my qubits. So I'm going to do one, two, three, four, five, six, seven, eight, nine, ten, eleven. Now the first stabilizer generator — let's say it's X — is this one: one, four, seven, ten. Okay, so it's something like that. I can do that for each square × circle and circle × square vertex, and I will get six plus three stabilizer generators. I can build the stabilizer matrix we were using last time, right? Or I can build the Tanner graph.

So it seems that now you understand how to build these codes, so I can tell you their parameters. The input is two graphs — two Tanner graphs, or two classical codes; it's the same thing, a Tanner graph or a code — and the output is a quantum code whose parameters are given by the parameters of the two classical codes. So n is the length and r is the number of checks: the quantum length is the number of bits times the number of bits, plus the number of checks times the number of checks — it's what we were seeing earlier. K is obtained by counting the number of independent stabilizer generators.
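The construction we just walked through can be written as two block matrices. This is a sketch in Python; the two input check matrices are stand-ins I chose so the sizes match the 11-qubit example (a vertical graph with 3 bits and 2 checks, a horizontal one with 3 bits and 1 check) — the actual edge patterns on the slides may differ:

```python
import numpy as np

def hgp(H1, H2):
    """Hypergraph product: H1 is the vertical input check matrix,
    H2 the horizontal one; returns the X- and Z-check matrices."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    # square x circle checks act on circle-circle and square-square qubits
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    # circle x square checks, built the same way with the roles swapped
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    return HX, HZ

# stand-in inputs with the sizes of the lecture's 11-qubit example
H1 = np.array([[1, 1, 0],
               [0, 1, 1]])        # vertical graph: 3 bits, 2 checks
H2 = np.array([[1, 1, 1]])        # horizontal graph: 3 bits, 1 check
HX, HZ = hgp(H1, H2)

n = 3 * 3 + 2 * 1                 # circle-circle plus square-square = 11 qubits
assert HX.shape == (6, n)         # six X generators
assert HZ.shape == (3, n)         # three Z generators
assert not (HX @ HZ.T % 2).any()  # every X check commutes with every Z check
```

The commutation check works for any pair of inputs: the product `HX @ HZ.T` is twice the same Kronecker product, hence zero mod 2 — that is exactly the role of the two qubit types.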
I will not do it here. And we can obtain a lower bound on the minimum distance, which tells us that the distance of the quantum code is basically the smallest distance of the two classical codes, so you have a guarantee of a large distance. There is also an upper bound that tells you that you cannot do much better than that.

So basically, what you get is that if you select random sparse graphs, you get sparse stabilizer generators — a quantum code that is LDPC — and you get K proportional to n. In the surface code we had K equal to 1, so this is much better.

Yes — those are the distances of the transpose codes. The transpose code is the code obtained by swapping checks and bits, so it transposes the parity-check matrix.

So we get K proportional to n: the number of logical qubits is proportional to the number of physical qubits, which is very high, and a distance which is square root of n, which is similar to what we saw with the toric code and the surface code. So we get the same distance but many more logical qubits, and this is what we want.

Now, when we did the simulations to check the performance, we had to build an explicit family. We did that by selecting random input Tanner graphs. So we select two random graphs, and we select graphs that are sparse, with 4s bits of degree three and 3s checks of degree four — we fix the degrees. With these conditions, one quarter of the bits are logical bits, and what we get from that is a code of length 25 times s squared — you can see s as a size parameter — so we have a family of codes, with K equal to s squared and a minimum distance that grows with s. The weight of the stabilizer generators is seven, and the degree of the qubits is six or eight.
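Plugging the (3,4)-biregular numbers into the product formula gives the family's parameters. A quick sanity check — my own arithmetic, following the formulas above:

```python
def family_params(s):
    """Parameters of the random (3,4)-biregular family: each classical
    input code has 4s bits of degree 3 and 3s checks of degree 4."""
    bits, checks = 4 * s, 3 * s
    assert 3 * bits == 4 * checks        # edge counts agree on both sides
    n = bits ** 2 + checks ** 2          # qubits: (4s)^2 + (3s)^2 = 25 s^2
    k = (bits - checks) ** 2             # at least s^2 logical qubits
    return n, k

print(family_params(2))   # (100, 4): already k = n / 25 at this size
```

The check weights follow the same accounting: each generator touches 3 qubits in one direction and 4 in the other (weight 7), and each qubit sits in 2 × 3 or 2 × 4 generators (degree 6 or 8).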
So we have a sparse Tanner graph, we have low-weight measurements to define our code, and we have a large K. The weight of the measurements we implement is the degree of the checks, and the degree of the qubits is how many generators measure the same qubit — we don't want one qubit to appear in all the generators.

To summarize: we have the surface code, which encodes one logical qubit and uses weight-4 measurements, or we have the hypergraph product codes — this family I just described — which encode n over 25 logical qubits. So as soon as you are beyond this size, beyond the distance-5 surface code, you are getting better: you have a larger number of logical qubits than with surface codes, so you will quickly outperform surface codes. If we can decode them, and measure the syndrome, and do all of that — right now it's just the intuition. We want to use the fact that we have many logical qubits. Yes?

It's not an extra assumption; it's by construction. Each qubit is connected to a number of checks that comes either from vertical connections or from horizontal connections, so if you take your two input graphs to have bounded degree, the degree of the qubits will be bounded as well.

Yes — oh, sorry, how do I get D from the previous slide? I didn't give you D here, and the reason is that I don't even know it exactly. Okay — and K? Using the previous formula: here I know that there are 4s bits and 3s checks, so one quarter of the bits are unconstrained, so k is at least s for each classical code. It's actually a lower bound — in general we get exactly s — and we get at least s squared logical qubits for the quantum code. We can get a few more if we're lucky.

First, here — we're going to see it with the syndrome extraction circuit: if you need to measure the same qubit many times, you will have to have a large-depth circuit. First there, then — you are going too fast; I'm going to answer in a few slides.
Yes — so it's not because of K, but because of D. Those codes cannot produce good LDPC codes: D is at most square root of n, so they are not "good" codes in the asymptotic sense. But they are good enough to outperform surface codes. And it's a good thing: even using codes that are suboptimal — we have better constructions since last year — we can beat surface codes. So if we plug in the best codes, the best decoders, the best circuits that we're going to find in the next few years, we're going to keep increasing the gap with surface codes.

Okay, so now what I want to answer is: how many ancilla qubits do we need — how many extra qubits to measure? And for now I haven't even told you how to measure these checks, so we cannot answer it yet. So we're going to talk about syndrome extraction circuits.

Before that, I want to emphasize the difference between classical and quantum error correction. If you are familiar with classical error correction, you have never heard of a syndrome extraction circuit — classically, have you? There, yeah, it's trivial: computing a parity. You start with a bit string encoded in some code, so at the beginning the checks are all satisfied, and you're going to send that through a channel — through some communication channel, through your cell phone. Some bit flips occur — like, bit four is flipped here — and then you receive that in your cell phone, and you can do an exact computation of the value of those checks, and you can see that one of the checks is violated. To do that you need a very reliable machine: a classical machine capable of computing parities. It's not that impressive, but we don't have that quantumly. And it's interesting that in order to run error correction, you need a more reliable machine. And using this syndrome you can correct.

Now, what's happening in the quantum setting?
You start with a quantum state that belongs to a stabilizer code, so it satisfies the quantum checks. A Pauli error occurs, and we receive some quantum state with an error. We can also compute the checks — we need to measure the stabilizer generators, and for that we need a reliable outcome. Same thing for the X checks, and then we apply a correction. Do you see what the issue is here? That's one issue — I must measure all of them quickly enough. Yeah: I'm telling you we're going to build a quantum computer — can you give me a better quantum computer so that I can build one? It's going to be an issue, right?

And that's why we don't have that quantumly. Classically we have reliable machines, and we knew that at some point in time we would be able to build reliable machines. I'm not sure people believe today that we can reach the quality of transistors — a noise rate of 10 to the minus 20 — with any kind of quantum hardware. I don't think we are going in that direction; people believe that there will always be a lot of noise, and because of that, this perfectly reliable measurement machine does not exist and will likely never exist.

So instead of that, we need to use additional noisy qubits. We are going to add qubits — we're going to add a quantum machine, but the quantum machine is as noisy as our channel, and that makes everything much more expensive. That's why quantum error correction is so expensive. So let's do it: let's measure one check, a single check of weight four. I have four qubits and one check.
I'm going to use one extra ancilla qubit to measure this check, and the circuit we use is: prepare a plus state, implement a sequence of four CNOTs connecting to the four qubits, and measure the ancilla qubit. So for each check, for each square, we are going to add an ancilla qubit. Our sequence looks like that: it follows the edges of this graph. So the Tanner graph is convenient — we can implement the circuit directly on it.

And now, what happens when we implement this circuit? There is some noise, and I'm going to describe the three most common noise models. The first one is the one we have in the classical setting: only the qubits are noisy; my syndrome extraction circuit is perfect. But as we discussed, it's not realistic — we don't expect to have that ability. The second noise model is to say: my circuit is good, but measurement outcomes are noisy. Each time I read a syndrome value, it can be flipped with some probability p. It's more realistic, and it's good to get some intuition. But the most realistic model is that everything is noisy — every single operation, even waiting: if the top qubit is idle for three time steps, it accumulates noise for three time steps. And it's terrible. That means that most of the noise is not on those four input qubits; most of the noise is introduced by the error correction itself. And that's why most of what we do in a theoretical quantum computer is running error correction — computing is easy; we really need to run error correction because we introduce a lot of noise during error correction. So I'm going to consider this noise model.

Now, I told you during the first lecture that we have a great family of codes: random codes. We pick random stabilizer codes; they have a large number of logical qubits and a large minimum distance — better than my hypergraph product codes. So why am I not using random codes?
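To get a feel for the second noise model — noisy measurement outcomes — here is a small Monte Carlo sketch (my own toy example, not from the lecture) showing why repeating a syndrome measurement and taking a majority vote suppresses readout errors:

```python
import random

def noisy_read(true_bit, p, rng):
    """One syndrome readout, flipped with probability p (noise model 2)."""
    return true_bit ^ (rng.random() < p)

def majority_read(true_bit, p, reps, rng):
    """Repeat the readout and take a majority vote over the repetitions."""
    votes = sum(noisy_read(true_bit, p, rng) for _ in range(reps))
    return int(votes > reps // 2)

rng = random.Random(0)
p, trials = 0.1, 20000
single_err = sum(majority_read(0, p, 1, rng) for _ in range(trials)) / trials
five_err = sum(majority_read(0, p, 5, rng) for _ in range(trials)) / trials
print(single_err, five_err)   # roughly 0.1 versus under 0.02
```

This only models flipped outcomes; under the full circuit-level model the repetitions themselves inject new data errors, which is exactly why the accounting becomes so expensive.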
Random stabilizer codes — okay, let's say you give me a thousand qubits. What I could say is: I'm going to pick a random code on these one thousand qubits, with five hundred logical qubits. It's easy — I just pick five hundred random stabilizer generators, and with high probability I get a large distance. So I will probably get a better distance than my hypergraph product codes, and more logical qubits. Okay, can I do that? Is it going to work?

Then I need to measure my five hundred stabilizer generators. Should I do that? Yeah, why? So there are five hundred of them, but I'm going to use codes with thousands of stabilizer generators anyway. Yeah, it's possible — we proved in the first lecture that if we select a constant fraction of stabilizer generators, say half the number of qubits, then we achieve a linear distance. So this is large enough; I'm pretty sure the code is going to be good.

But the issue — so there are many stabilizer generators to measure, that's one issue, but it's not the worst — is: what do these stabilizer generators look like? They will spread the noise. Why? Yes — on how many qubits do they act, typically? You select identity, X, Y or Z on each qubit, so about three quarters of the thousand qubits: it's a measurement that doesn't measure four qubits, but hundreds of qubits. Now imagine this circuit with hundreds of qubits — how many noise locations do you have? You will have too much noise, and that's why we prefer LDPC codes.

Sorry, I cannot hear you. In general, we could try that, but we don't know how to do it. It's the same question as: I give you a linear subspace; can you give me a basis made of low-weight vectors? And finding a short basis is difficult in general. So that's a good idea, but we don't know how to do that.

Okay, so now I'm going to move to surface codes, because those random stabilizer codes do not work. So, the surface code's syndrome extraction circuit.
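Backing up to the random-generator objection for a second: the "hundreds of qubits" claim is easy to check. A uniformly random Pauli is non-identity on each qubit with probability 3/4, so a random generator on a thousand qubits typically has weight near 750 — a quick sketch with my own numbers:

```python
import random

def random_pauli_weight(n, rng):
    # pick I, X, Y or Z uniformly per qubit; weight counts the non-identities
    return sum(rng.randrange(4) != 0 for _ in range(n))

rng = random.Random(1)
n = 1000
weights = [random_pauli_weight(n, rng) for _ in range(300)]
avg = sum(weights) / len(weights)
print(avg)   # close to 3n/4 = 750, versus weight 7 for the LDPC checks
```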
It's measuring X X X X — the same circuit as before — and we're going to place it inside each face. And we have a way to implement it on all the faces in parallel. The circuit looks like that — I'm not going to prove it, but in six steps you can implement this circuit for all the faces, and you get all the syndrome bits simultaneously in just six steps.

And because of that, the surface code works very well; it has a high threshold. Still, we go from a threshold of 50% for the repetition code down to about 1% for the surface code. So it's a big gap, and it's because we need many additional qubits, because everything is noisy, and because error correction itself is noisy. That's why we have such a big drop compared to classical error correction.

And it can be implemented with 2D-local gates — you see that we only have gates inside those square faces. So we have a large grid of qubits like that. This is realistic; it's a little bit smaller than what you need to factorize RSA. But it's realistic for one logical qubit only: it's only one logical qubit, and it's about 1200 physical qubits. So it's a lot — it's very expensive.

If there are errors in this circuit — errors in the measurement outcomes, for instance — the basic idea is that we repeat the measurement multiple times, and when you repeat, you can correct faulty measurement results using the multiple repetitions. So one dimension that's missing here is that in addition to these 25-by-25 qubits, you have a depth 25 in time: you need to repeat this depth-6 syndrome extraction circuit 25 times in a row to run your correction. So the volume of one logical gate is huge — it's hundreds of thousands of circuit operations. This is crazy, but this is what we need for one logical gate.

Yes, this value — oh, the one percent?
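The "hundreds of thousands" figure comes out of a rough accounting — a sketch assuming a distance-25 rotated surface code with one ancilla per stabilizer generator (the exact counts depend on the layout):

```python
d = 25                       # code distance
data = d * d                 # data qubits in the rotated surface code
ancilla = d * d - 1          # one ancilla per stabilizer generator
qubits = data + ancilla      # about 1200 physical qubits, as in the lecture
depth = 6                    # depth of one round of syndrome extraction
rounds = d                   # the measurement is repeated d times
volume = qubits * depth * rounds
print(qubits, volume)        # 1249 qubits, 187350 circuit locations
```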
It's for circuit-level noise, yeah. I say one percent because there are different noise models in the literature, so it varies between one percent and 0.5 or 0.75 percent depending on the assumptions. But yes, it's with — not the optimal decoder, but a good decoder. Sorry.

So we also have to correct, but because the correction is a Pauli correction, we don't effectively apply it; we just keep track of it.

Yes — it comes from numerical simulation. As far as I know, the best rigorous lower bound is still the one in Dennis, Kitaev, Landahl and Preskill, and it's a few orders of magnitude below that; in twenty years there has been no better bound. No, it's a lower bound, yes.

It's going to accumulate, and can cancel, but as long as I keep track of what correction I need to apply, I can delay it. Well — the decoder is going to take the previous corrections as an input: the decoder needs to track what you should have done to correct.

Okay, I don't want to talk too much about surface codes; I want to talk about the more interesting case. So what about LDPC codes? Do you know what a typical LDPC code looks like? I told you: in our case we take two big random graphs and we take their product. What is it going to look like? Yes — an expander graph. Okay, so it's something like that. It's an expander graph — it's a big mess. And do you know what a typical quantum computer looks like, if we had one? Sorry — like that, more or less. So it's closer to that, right: a 2D grid.

And now the main question is: how do you embed this thing, this horrible mess, inside a square grid? That's what I'm going to try to answer, and as you can guess, it's not going to work very well. Okay, and what we're going to do is start with this typical quantum computer, and we're going to show — I'm not going to prove it, but I'm going to mention the result — that either we need a large depth, or we need many ancilla qubits.
So we are in trouble in either case: either we need many ancilla qubits to perform the syndrome extraction, or our circuit is going to have a large depth — and because it has a large depth, a lot of noise accumulates, and we're going to fail. So in 2D it's not going to work. And in these two cases, what we observed when we simulated it is that numerically it doesn't work: either it doesn't work well because the circuit depth is too large, or it has too many ancillas — the circuit is too big, so too noisy, and it ruins our performance.

So we need to change something. Either we use a different code, like the surface code — we know where that leads; it leads to a big overhead — or we change our qubits, our typical quantum computer. And that's what we consider here: using some long-range connections. What can we do if we have some long-range connections, and how many of them do we need? We're going to design a layout based on a few planar layers, and we're going to show that in this case, with this assumption — with this additional hardware that does not exist yet — we can outperform surface codes. So over the next ten years, we should maybe try to build more long-range connections.

So first: what is the best possible syndrome extraction circuit? I'm going to build a syndrome extraction circuit, and I don't want you to tell me that I built a poor-quality syndrome extraction circuit and that's why I get bad performance. So I want to make sure I build the best possible circuit, so I need a bound. And the bound we get is that the depth is at least some constant times the code length — the number of data qubits — divided by square root of q, where q is the total number of qubits used. It includes my data qubits (the code qubits) and all the ancilla qubits.
I need — so, everything I need to run the syndrome extraction. In the case of the surface code, q is 2n. And we can consider two regimes to understand this bound. In the first case, if we want a constant-depth circuit, we need to make sure this lower bound is constant, so we need q of the order of n squared. That means we need many ancilla qubits: to encode a code of length n, we need n squared ancilla qubits. But I told you that the great advantage of hypergraph product codes is that they have many logical qubits per physical qubit — if I add n squared ancillas, I lose this advantage, I lose my number of logical qubits per physical qubit. So I'm not going to do that. And in the second case, if we have a constant number of ancillas per data qubit, then q is proportional to n, and we need at least depth square root of n. Because the depth is very large, the circuit is going to be too noisy, and it's not going to work.

So I'm going to describe this circuit, which implements any CSS syndrome extraction circuit in 2D, and I'm going to discuss the numerical results that we obtained for the second case.

Okay, so let's try to build this optimal 2D-local circuit. We saw how to measure one X check; now we want to measure all of them, and we don't want to make the circuit too slow. We want a constant depth, so we want to implement all these measurements simultaneously. How are we going to do that?
We start with an edge coloring of our Tanner graph. We know that we want to apply CNOTs along the edges, so we're going to color these edges in such a way that edges of the same color don't share vertices. Once we have that, we prepare a plus state for each check — we prepare an ancilla qubit — and we loop over the colors: I select a color, and when I select a color, I apply all the CNOTs of this color simultaneously. And because it's a proper edge coloring of the graph, I can do that. So I do that for all the colors, and at the end I measure my ancilla qubits, and I'm done: I just measured all three checks simultaneously.

What is the depth of this circuit? The number of colors. What is the number of colors? Degree plus one in general — but here it's the degree, and we are lucky to have a Tanner graph, a bipartite graph, because in this case we can efficiently find this coloring with exactly "degree" colors; for general graphs it's NP-hard. So we can find this coloring efficiently, run our quantum circuit, and do all of this in a number of steps which is the degree — or, by "plus one", I mean that we also need to prepare and measure, so it's degree plus two if we count both. And because our graph is sparse, it's going to be a very short-depth circuit.

But now I don't know how to do that in 2D, so we need to map this circuit to 2D. Yes — you could change the local shape a little; it doesn't change the results. So I'm going to assume a square grid, and I'm going to assume only horizontal and vertical edges, but you could add diagonal edges, you could add edges connecting all vertices at distance 2 — it's not going to change anything; the same result holds.

Okay, so now we're going to map this coloring and this Tanner graph onto a square grid, and you see that the square grid is giant: its size is of the order of n squared.
I know that I need a grid of this size; I have no choice. I'm going to put the 7 data qubits on top and the three ancilla qubits at the bottom, and then what I would like to do is apply a CNOT along each of the three green edges (green or black, I'm not sure). So I want to connect ancilla number one to qubit number four. I'm going to write the number of the corresponding ancilla under the data qubit, and now I want to connect them, so I'm going to build a path using the odd-even transposition sorting network. I add the missing numbers in an arbitrary way, and then I sort odd pairs, then even pairs, and so on. Three and four are already sorted, so I don't need to do anything; five and one are not sorted, so I swap them. When I do that for all of them, I start pushing one to the left until it reaches qubit number one at the bottom, and once the sequence is fully sorted, I have a path connecting one to one. So I can build my three paths like that.

Now, is that enough? What I want is to use the qubits along a path to implement a CNOT between the bottom and the top. We know how to implement a long-range CNOT in constant depth, but there is an issue here: I want to implement those three long-range CNOTs simultaneously. How would you do that? Do you see any issue? Say I try to implement the CNOT from the bottom to the top. I didn't describe this long-range CNOT, but it's going to use all the qubits along this path, and at the same time I want to implement the CNOT of path three, but the paths are crossing. So path three would have to wait until I'm done with these qubits before it can use them. I want to avoid that, so I'm going to introduce a switch; that's why it's called switch-based. The switch is prepared in advance: we prepare Bell pairs, two Bell states, using horizontal operations.
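To make the routing step concrete, here is a minimal sketch of the odd-even transposition network in Python: n alternating rounds of disjoint nearest-neighbor compare-and-swaps, where each swap would be a SWAP gate on adjacent qubits and the trajectory of each label traces out a path. The function name and the example permutation are mine, not from the lecture.

```python
def odd_even_sort(values):
    """Sort with the odd-even transposition network: n rounds, each round
    applying disjoint compare-and-swaps on adjacent pairs, so every round
    is one layer of parallel nearest-neighbour SWAP gates."""
    a = list(values)
    rounds = []
    for r in range(len(a)):
        layer = []
        for i in range(r % 2, len(a) - 1, 2):  # alternate even/odd pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                layer.append((i, i + 1))
        rounds.append(layer)
    return a, rounds

final, rounds = odd_even_sort([3, 4, 5, 1, 2, 7, 6])
print(final)        # [1, 2, 3, 4, 5, 6, 7]
print(len(rounds))  # 7: depth n regardless of the input permutation
```

Because each round touches disjoint pairs, the whole network has depth n for any input, which is what keeps the routing schedule predictable.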
So I can do that. Then we swap our qubits using vertical operations on neighboring qubits, and we end up with two entangled pairs that are crossing. Those two entangled pairs can be used like a real connection in the graph; the only difference is that once I use such a connection to implement a long-range gate, it's destroyed. So what I'm going to do is prepare in advance the switches that I need, and after that implement my three long-range CNOTs in parallel using those virtual connections I just added. They are destroyed, and for the next color I do it again with a new set of switches. By doing that, I can implement my long-range gates in constant depth, so for each color I have a constant-depth circuit.

What do we get? [Audience question.] Oh yes, I didn't say they were perfect; they are noisy. But I have a circuit to prepare a Bell state on neighboring qubits, and I'm going to talk about that right now. Yes, every single gate we are using is noisy, even waiting.

Now, what we did is look at the first circuit I showed you, with fully connected qubits, where we didn't need any extra qubits. The second circuit is not the one I showed you; the one I showed you had too many qubits, and so not enough logical qubits. But there is a second one that is also optimal, with a constant overhead, using only n ancilla qubits. The issue is not the number of ancilla qubits; it's the depth. The depth is the square root of n, and instead of the performance with fully connected qubits, which is pretty good (a linear number of ancillas and a constant depth), you get this performance. So when we simulate it, even at a 10^-6 noise rate we are still at a logical error rate far above that. So the threshold, if it exists...
It's below 10^-6, and I would conjecture it does not exist. The issue is this depth: the depth is too large, but we have a bound that tells us we cannot do better than that. So we are stuck in 2D, and this is the cost of locality: about four orders of magnitude in this regime. So we need something more; we need long-range connections.

[Audience question about 3D.] In 3D you have the same kind of bound. It's no longer the square root of n; it's another polynomial, maybe n to the one third, but I'm not sure. In any dimension we have a polynomial-depth circuit. We cannot prove that there is no threshold, but I believe that's the case.

[Audience question about assumptions.] Yes, there are some assumptions; that's why I wrote 'informal'. The two main assumptions are that the code needs to have a constant encoding rate, so a linear number of logical qubits, and the second one is about expansion: we assume there is some form of expansion in the graph, and this expansion is what creates the issue. It's typically true for random codes.

Okay, I have four more minutes; I just have a few figures to show you. So what can we do if we have long-range connections? What is the most naive layout? How would you do it? I tell you I have a thousand qubits and I can connect them the way I want. [Audience answer.] Yes, we can do that. I tried to make it a bit prettier, so it's the same graph but placed on a circle. I'm not an experimentalist, but I wouldn't want to try to build that. The issue is that, because of all these crossings between the gates, I would guess that when there is some noise on one of the qubits while you implement a CNOT (we need to implement CNOTs along these edges), it's going to spread the noise, and all these crossings may spread the noise to many qubits. So quickly everything is noisy and our circuit is not going to work.
It's going to introduce too much noise. So our goal will be to have a small number of crossing gates and a short depth, while allowing long-range connections.

[Audience question: what do you mean, experimentally?] I'm not aware of any long-range connection that satisfies the properties I need. But I expect that when you have crossing wires you get some crosstalk between them. It seems to be the case in some hardware; I cannot claim it's the case in all of them, but there is some interference. It's more a theoretical motivation for the model, and a motivation from someone who doesn't know the physics, so don't take it too seriously. I mean, we can use dynamical decoupling for anything, but the noise rate we get is never low enough. I want a noise rate of 10^-4 for my qubits, so we want very good qubits and very good gates, and we will always need error correction to reach this point. Ultimately, if we want to run a quantum algorithm, we want logical qubits with an error rate of 10^-15, so we really need something. But yes, it's hard for me to claim this is what we see in a real device, because people have not built all these connections yet.

So what we are going to do is decompose this Tanner graph into planar layers; because they are planar, there is no crossing. We can always do that: when the degree is 4, the number of planar layers is half of it. We can do that for any graph, and we get a layout that looks like this. It works for any CSS code: if the degree of the Tanner graph is delta, then we build a syndrome extraction circuit that has delta over two planar layers. This is what it looks like for hypergraph product codes: you see that in this top layer there is no connection, and this is for a layer like we have in the code I explained earlier; the code is a hypergraph product code.
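The layer decomposition can be sketched directly from an edge coloring of the Tanner graph: pair the color classes two by two, and each pair is a subgraph of maximum degree 2, hence a disjoint union of paths and cycles, which can always be drawn without crossings. In the Python sketch below I hand-specify a proper 4-coloring of the Hamming-code Tanner graph used earlier; the function names and the particular coloring are my own illustration.

```python
from collections import defaultdict

def planar_layers(colouring, n_colours):
    """Pair colour classes (0,1), (2,3), ...: each layer has max degree 2,
    so it is a union of paths and cycles and is therefore planar."""
    layers = [[] for _ in range((n_colours + 1) // 2)]
    for edge, c in colouring.items():
        layers[c // 2].append(edge)
    return layers

def max_degree(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg.values()) if edges else 0

# A proper 4-edge-colouring of the Hamming-code Tanner graph (checks c1..c3).
colouring = {
    ("c1", 1): 0, ("c1", 3): 1, ("c1", 5): 2, ("c1", 7): 3,
    ("c2", 2): 0, ("c2", 3): 2, ("c2", 7): 1, ("c2", 6): 3,
    ("c3", 7): 0, ("c3", 5): 1, ("c3", 6): 2, ("c3", 4): 3,
}
layers = planar_layers(colouring, 4)
print(len(layers))                      # 2 planar layers for degree 4
print([max_degree(l) for l in layers])  # [2, 2]: paths and cycles only
```

With this pairing, a Tanner graph of degree delta yields ceil(delta/2) planar layers, consistent with the lecture's count of four layers for the degree-7 code.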
For the code I described earlier, with degree seven, we get four layers. The depth of the circuit is twice the degree of the Tanner graph, so both of these are bounded for any sparse family, and we can improve the depth a little, by a factor of two. Now, what do we get? Does it work numerically? Yes.

[Audience questions.] I will answer the first one, not the second one. What gates do I have? My standard gates, except that the CNOT is a long-range CNOT: a CNOT between one qubit and another qubit connected by an edge. So I need many long-range CNOTs. On each layer I need many of them; we are still far from this kind of thing, but we want to build large codes, so we need many gates. And why am I not answering the second question? Because I want to leave it open. I don't know what's the best way to build these things; there are many different strategies. Maybe you want to take four qubits and entangle them with a four-qubit code; maybe you want some kind of hardware where you have a chip with multiple layers. I want to leave it open; I don't know what's best.

This one is hard to see, but let's say this one: this qubit is the same as this qubit in the column. This is the same qubit; the layers represent the gates that we are going to implement in different layers. So you can imagine you have all these cables that are crossing, and we are going to make them parallel over four layers.

[Audience question.] I don't think there is any relation. Any syndrome extraction circuit has a time direction; it's a sequence of gates in time, so any circuit can be represented like that. But here, this is not time; this is a physical object, this is space. We have a 3D object with a few layers.

Maybe I'm going to show you my last slide, because I'm running out of time. The last slide is the result. What kind of results do we get? First, what is the threshold?
I told you it's about one percent for the surface code with our noise model; here it's 0.7%, roughly divided by two. So it's bad news: it's not as good as the surface code, but it's not too bad either. The number of physical qubits we need is 49 per logical qubit, and it's constant; it's not growing. For the surface code I was telling you 1200. This count includes the ancilla qubits.

And what kind of performance do we get? We considered three regimes. We set the physical noise rate to 10^-4, so we have very good qubits, we run the syndrome extraction circuit with noise, and we look at the logical error rate we can get: how many physical qubits do we need to achieve 10^-9, 10^-12 or 10^-15, which correspond to different quantum computing applications. In this case the saving is in the number of qubits. In the most extreme regime, which corresponds to quantum chemistry algorithms, we have a 15x saving in the number of physical qubits, and even in the 10^-9 regime we have a 5x saving. These results make me very optimistic, because we are using a suboptimal code, a suboptimal syndrome extraction circuit (it's the first one proposed; I'm sure we will find better ones in the future) and a suboptimal decoder, which I'm going to talk about next lecture. Despite all that, we do beat the surface code.

[Audience question.] Yes, that's one of the optimistic assumptions: we assume we can implement a long-range CNOT gate with noise p, independent of the distance between the qubits. We don't switch between layers; this picture is not hardware, it's a diagram. When we 'switch', we just say we are going to implement all the gates that live in this layer right now, then we turn them off and implement all the CNOT gates that live in the next layer. There is no real switch between them, and we don't need to move our qubits. I don't know where the qubits are.
Maybe they are spread a little over the four layers. Maybe there is some additional noise; I'm convinced there is, but I cannot tell you what it is, because I don't have a physical design for that. We need someone to give us an architecture; I don't know how to do that myself. We could say maybe it's twice as noisy, and then just shift the numerical results by a factor of two: multiply p by two, so all the gates are twice as noisy. You can get some intuition from that, but otherwise I don't know how to design a noise model that is relevant. Our goal here is to motivate people to try to build this thing in hardware and to propose a real architecture, something more concrete than paper; then we will be able to design more precise noise models depending on the architecture and the hardware. But this makes me optimistic; that's my conclusion. I'm happy to answer questions. I don't know if I need to free the room.

[Audience question about logical operations.] During my first lecture I told you I would explain how to build a quantum memory, so I'm not going to talk about logical operations. There is a way to implement logical operations at a cheap cost with these codes; we didn't simulate that, and there is a lot of work to do in that direction.

[Audience question about the noise model.] Yes, in all the simulations, in both cases, we have a single noise parameter: everything is noisy with rate p, independently of the distance.

[Audience question about hardware.] Yes, for sure it could help to have very reliable gates for some types of gates. But I think even with a 3D lattice of neutral atoms that we can move, we are still subject to this bound, because moving atoms is part of the circuit that we implement. So I don't know how to escape this bound; but for practical, concrete numbers it can help, for sure. Thank you.