I'm very pleased to introduce her. Before I do so, I just wanna say that as part of PCMI every year, we have what's called the Senior Scholar role. It's often funded by the Clay Math Institute, but in various years it's funded by other people. This year it's funded by the Institute for Advanced Study. We call her the Karen Uhlenbeck Distinguished Scholar. And so her role is to just be here, talk to people, and be a guru. So we're extremely, extremely pleased that she agreed to come to be a great resource to people, and hopefully it will be good for her as well. And I'm very happy that she's been willing to give a public talk. So Barbara Terhal is a very distinguished scholar in the world of quantum computation. She received her PhD in Amsterdam, has held positions at IBM and Caltech and the Forschungszentrum Jülich, and is now at QuTech in Delft. And again, I'm very pleased to have you here. Thank you for coming.

Okay, yeah, thanks. All right. Okay. Yeah, I'm happy to be here. I know the focus here is on mathematics, and I like doing mathematics, but I also like to interface with physicists, in particular experimental physicists. And so my talk will be a little bit of a wild mix of everything; we'll see how it goes down. I was also trying to judge a little bit what the audience would be. Some people have heard about quantum error correction in the very nice lectures by Delfosse here, so for them there may not be much new in this lecture; but maybe there are other people who are quite new to quantum computing or barely know what a qubit is. So I'm trying to address everybody, and I'm just trying to make some global points of perhaps interest. So the first point I wanna make is that, for me at least, it's very clear that quantum error correction and fault tolerance, the application of a theory that we've developed, is a real necessity for building a digital quantum computer. And a digital quantum computer is a device that can do everything we could do classically on our classical computer, maybe not with the same speed or the same clock speed, but it could do all the classical computations. And then on top of that, it could run quantum algorithms. So that's a very ambitious goal. Maybe it's too ambitious, and maybe 50 years from now we'll find we have some completely different device, some completely different beast, because classical computing is also moving to different paradigms like AI and so on. So we'll see. But there is no fundamental no-go, other than it being experimentally quite hard, that shows that it's impossible. So then you can ask as a theorist: well, okay, what can I contribute? Well, I can help the experimentalists to succeed. I can develop different schemes, like what I think Nicolas Delfosse has been discussing in his lectures. I can try to analyze the data that they get in the experiments. And we're just gonna try and learn what we find, so to say. All right. So I think some people are a little bit impatient with quantum computing, and maybe this comes also from industrial interest, so it sort of has to pay off in the short term. And I think it's very important to have the long-term perspective on this, and to realize that we had to start entirely from scratch with quantum computing, on many levels.
And so the point is that everything that we engineer and do in real life, in technical life, is fault tolerant. Everything we do is prone to errors, and we wanna keep functionality despite these errors, right? You think about storage on a hard drive. You think about copying DNA. You think about doing a calculation: if you're very bad at doing calculations, you do it three times and take a majority answer (of course it's not always advisable to proceed this way), you debug your code, et cetera, right? So error correction is ubiquitous, and any engineering has to be fault tolerant. And here I've shown a little bit of a caricature of what we wanna achieve: a paper clip. A paper clip doesn't lose its functionality if it's not quite the same shape as when you bought it. And that's sort of what we want: it is robust towards small variations in shape. But where we're at with qubits is an entirely different thing. What we have are physical qubits where any small noise disturbs their state, basically. Okay, so let's talk a little bit about what a qubit is, if you're a little bit new to this. At the basic level, a qubit is a two-dimensional vector in a Hilbert space, a two-dimensional complex space. But we like to represent this vector as a point on a three-dimensional ball, basically a sphere. We call this sphere the Bloch sphere. So here is what we call the Bloch sphere: we have vectors of unit length, and every point on the sphere is a potential state of a qubit. And so here we can characterize the qubit by two angles, theta and phi, and in quantum language we can also represent it this way. The north pole is the state zero; the south pole is the state one. And what we want is to keep it somewhere on this Bloch sphere: we have some point there, and we wanna keep it there, right? But noise will let it drift, or let it have small rotations, and that's our problem, basically. So small variations of this point on the Bloch sphere will immediately lead to loss of functionality. All right, and we also wanna manipulate this qubit on the Bloch sphere. For example, we wanna do something like a pi-over-two rotation: rotating zero to the equator, for example, or rotating the state on the equator around. And those operations are what we call logical gates that we do on single qubits. And in order to build a quantum computer, we have to do these gates very reliably on single qubits. And then we also have to take multiple of these balls and entangle them. So we have multiple qubits, and we need to do gates like the controlled-NOT gate. The controlled-NOT gate is very simple: if one qubit is in one, you flip the other qubit; if that same qubit is in zero, you don't flip the other qubit. So that's just a classical gate, but we have to do it on vectors that are superpositions of zeros and ones.
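To make this concrete, here is a minimal Python sketch, not from the talk's slides: a qubit state parameterized by theta and phi, a pi-over-two rotation taking the north pole to the equator, and the controlled-NOT acting on a superposition. The function names are just for illustration.

    import numpy as np

    def bloch_state(theta, phi):
        """|psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>."""
        return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

    ket0 = bloch_state(0.0, 0.0)          # north pole: the state |0>
    plus = bloch_state(np.pi / 2, 0.0)    # a point on the equator

    # A pi/2 rotation (about the Y axis) takes the north pole to the equator:
    def ry(angle):
        c, s = np.cos(angle / 2), np.sin(angle / 2)
        return np.array([[c, -s], [s, c]])

    print(np.allclose(ry(np.pi / 2) @ ket0, plus))   # True

    # The controlled-NOT on two qubits: flip the second qubit iff the first is |1>.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    print(CNOT @ np.kron(plus, ket0))     # (|00> + |11>)/sqrt(2): an entangled pair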
Right, and so the point where we're at right now is that if we think about our physical qubits (now, this is not a physical qubit, it's just a mathematical representation of a physical qubit), we have noise sources that lead to error rates which are 1% or 0.1% per component, at best. So basically, if I do a CNOT gate, I would have an error rate of 1% or a little less. If I let the qubit idle for some period, I'd have this error rate; if I do a measurement on the qubit, I have this error rate. And what we want is for these error rates to go down to like 10 to the minus 15 or so. And there's no physical system where this sort of naturally comes about, or it would be very hard to design such a system. So that is what I mean when I say we have qubits which are very different from our paper clip, and we need to design better qubits that have a sort of built-in robustness. All right, and so as I said, we have to really start from scratch. And in that sense we shouldn't be impatient, thinking that this should come out very soon, because what you can see, I would say, for the last 30 years, is that we've built a beautiful theory of quantum error correction, and at the same time people have built physical qubits that now have these error rates of 1% or 0.1%, which is already very, very impressive. So in some sense I would say that experimentally we've gone from the situation of, like, no life, to bacteria. But on the other hand, you could say, well, a bacterium doesn't look like a human being, which would be sort of the end point, the digital quantum computer. So we've made great strides, but there's still a lot to come, particularly to show that the theory we've built is actually useful for building better logical qubits. So what I wanna do in my talk is give you a little bit of the basics of this theory and then talk about the experimental situation. And there you'll see there's some friction in terms of: well, what are the challenges we have to overcome? All right. So the idea of quantum error correction is actually very similar to classical error correction. Like my silly example: if I'm very bad at doing arithmetic, I just do the computation three times in parallel and take the majority of the answers. The basic idea is the same, except it's not so simple. Because, first of all, if I write everything in terms of bits or quantum bits, I don't have only bit flip errors; I also have a different type of error, which is a phase flip error. And secondly, this idea of just doing everything ten times over is too simplistic. Okay. But what we do have is this idea of using redundancy. So what we do is we use n physical qubits (n could be seven or nine or 300, and we have different codes for which this number n is different), and we use them to represent one better, logical qubit, right? And what we want is to do operations on the logical qubit like we've done on a physical qubit. And we have to argue how we're doing this exactly; these are the technical details. All right. So mathematically, what happens is: if I have n physical qubits, I have a two-to-the-n dimensional vector space, and I'm just gonna choose a two-dimensional subspace in there to put my qubit. In this two-dimensional subspace I have a basis: one vector I call zero, the other vector I call one.
And I wanna be able to create any state in the subspace, and my choice of the subspace should be such that the likely errors that are happening shouldn't be moving the vector around inside the subspace; rather, they should be mapping this vector from the subspace to outside of it, in a very particular way: in a way that I can reverse the error, right? So what I want is that maybe the error maps me outside the subspace, but there should be a way of mapping me back into the subspace, and I should come back to the same point, right? So that looks not so easy; it's not that any subspace will be good. It has to depend on the structure of the likely errors, basically. And it should also be something where, if I wanna do the reversal, I could be doing this by, say, measuring whether I'm outside the subspace, and where I am outside the subspace, and then saying: okay, if I'm there, this is how I do the correction, right? So there are components: I do a measurement about where I am, and then I have to do some processing of this measurement, which is called decoding, which determines which error I think has happened. Yes, so that's basically the very overall picture. But then you can say: well, a two-dimensional subspace, how can I efficiently describe this? So there's a beautiful formalism of codes, of stabilizer codes, that has been developed, which allows us to describe this subspace that I wanna use very efficiently. All right, now, a little bit about the errors. So actually, I don't need to have errors that map me fully out of the subspace; it could be a superposition of being in and out. With some amplitude I stay in, with some amplitude I move out. Okay, so in principle, the errors that I have on physical qubits will be continuous, in the sense that this point on the Bloch sphere will be slightly changing. But maybe you can understand that, if you do rotations on this sphere, there are only two independent axes of rotation, so the generators of these changes are only two different directions. For example, I could be using the X and Z axes: I can get any rotation out of a rotation around the X and the Z axis. So in some sense the errors seem continuous, but they're actually discrete, because they're generated by errors along two different axes, namely the X and the Z axis. And another way of saying this: the bit flip error is a two-by-two matrix, the Z error is a two-by-two matrix, the product of them is another two-by-two matrix, the Y error, and any two-by-two Hermitian matrix can be written as a linear combination of these matrices (together with the identity). So they generate the possible changes that can happen to the qubit. Now, the important thing, of course, is that if I have classical information, I only care about bit flips, but if I have quantum information, I also care about phase flips.
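A quick numerical check of this discreteness claim, as a sketch; the random Hermitian matrix below just stands in for an arbitrary small change to the qubit.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])            # bit flip
    Z = np.array([[1, 0], [0, -1]])           # phase flip
    Y = 1j * X @ Z                            # their product is the Y error

    # Any 2x2 Hermitian matrix is a real linear combination of I, X, Y, Z:
    rng = np.random.default_rng(0)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (A + A.conj().T) / 2                  # a random Hermitian matrix
    coeffs = [np.trace(P @ H).real / 2 for P in (I2, X, Y, Z)]
    print(np.allclose(H, sum(c * P for c, P in zip(coeffs, (I2, X, Y, Z)))))  # True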
And so what this phase flip, the Z operation, does is map one onto minus one. And if I only work with bits, I'd never see this minus one; but if I have a state which is a superposition of zero and one, it would be mapped to a superposition of zero and minus one, right? So it sees these phases. And so there's sort of double trouble, and this double trouble is also one of the causes why quantum error correction is intrinsically harder than classical error correction, why we can tolerate less noise, and some features which show that it may take a little longer to get quantum error correction to work in practice. Right? And so the point I want to make here is basically that if I just use a simple classical error correction code, like the repetition code (so instead of a zero or one, I have n zeros or n ones), then of course, if I take majority votes, if half of them or fewer flip, I can correct it. But that idea won't work if I'm gonna put the zero and one in superposition. If I wanna preserve a superposition of zero and one, that works fine with bit flip errors, but with phase flip errors this type of state is immediately damaged, because a single phase flip (I can put a Z on any of these qubits) maps it to the other logical state. And so we really need to invent quantum error correction codes, and these have been invented, to deal with this double trouble of correcting X and Z errors. The other feature is that, if you have a classical memory, you can say: okay, some bits have flipped; I measure every bit, see if it's zero or one, and see what the majority of the values is. But you can't do that with quantum information. If you're gonna measure your logical qubit, measuring every qubit in the zero/one basis, you also lose your information. So what you want, actually, is to do some measurements that tell you whether you're outside this code space, outside this two-dimensional space, without telling you where you are within this two-dimensional space. So no logical information should be revealed by the measurement, otherwise there's a collapse of the wave function, but there should be full information about how far, or in what way, you're outside this code space. And for that we typically use some extra qubits; these are called ancilla or measure qubits. And so if we think about overhead: we encode one qubit maybe into nine, but then we need a bunch, say eight, ancilla qubits to measure the errors. So we need extra qubits, which adds to the overhead, which is not ideal. Another feature of quantum error correction, which makes it less trivial, relates to this idea of transversal or blockwise operations. If I use the repetition code, or I just do my classical computation three times on three copies of all the bits that I used to have, it's clear that if I do this in parallel, the depth of the circuit is the same; there's just this copy overhead, but nothing else. And that's the type of way of doing gates we call transversal or blockwise.
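Looping back to the phase-flip trouble for a second, here is a small state-vector check of that claim; a sketch assuming the three-qubit repetition code with logical states |000> and |111>.

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    kron = lambda *ops: reduce(np.kron, ops)

    zero_L = np.zeros(8); zero_L[0] = 1       # |000>
    one_L = np.zeros(8); one_L[7] = 1         # |111>
    plus_L = (zero_L + one_L) / np.sqrt(2)    # logical |+>
    minus_L = (zero_L - one_L) / np.sqrt(2)   # logical |->

    # A bit flip on one qubit kicks the state out of the code space (detectable):
    print(plus_L @ (kron(X, I2, I2) @ plus_L))              # 0.0
    # But a phase flip on ANY single qubit silently flips the logical sign:
    print(np.allclose(kron(Z, I2, I2) @ plus_L, minus_L))   # True
    print(np.allclose(kron(I2, Z, I2) @ plus_L, minus_L))   # True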
And here you see an example of such a transversal gate. So for example, I had to do a single-qubit gate on a single qubit, and now instead I have like seven qubits, and in order to do this gate on the logical level, on the single qubit encoded by these seven, I just do gates on every one of them. So that's a very efficient, nice way of doing it; it's called transversal. But there are theorems that say that not all gates can be done transversally, and we need constructions that look more complicated, like this, and that are then more prone to errors. Okay, so that's another issue where quantum error correction, yeah, is simply more complicated. So the upshot of these considerations is that quantum error correction requires fairly low error rates. What Nicolas Delfosse also said in his lectures, if you were present, is that the best error correction codes work if the error rate per component is 1% or less; it's more like close to 0.75%. And we're not quite there yet, experimentally, at these low error rates. All right, so now I want to say something about surface codes. Why? Because they're the favorite way of doing error correction in the very short term, and also because they lead into me talking a little bit about topology, and about some extensions of surface codes that may be of interest to you if you like polyhedra.

Is that 0.75% achievable? Yeah, well, okay, I would say it is hard to achieve that error rate. It's not completely, no, no, okay, we're close, but okay. And then you have to maintain that error rate if you have many more qubits, right? And also over many days; there are sort of caveats. But it's not a crazy error rate, because in early fault tolerance (okay, I have to stand here for the recording), in earlier work, we were talking about error rates of 10 to the minus 5, which is a lot lower. So this is not so bad; it's not crazy to think about this. Yeah. And there is continued progress towards lowering error rates per component. Yes?

Is it actually practical, this 0.75? What do you mean by practical? Like the number of physical qubits. No, no, no, but this 0.75 is just for the surface code, right? And the surface code is not a very efficient code in terms of the overhead, but it is a code that has a fairly high threshold. The point is that there are other codes that may have higher thresholds, but they are not so embeddable in two dimensions, on chips, and so on, right? So it's kind of surprising that, yeah, okay. So let me get back to the surface code explanation. A surface code is a family of codes. So this number n (one qubit into n) is varying: n can be nine, or it can be 25 or 49. And the idea is that with more redundancy, so the higher n is, we hope to achieve a better logical qubit. So if we do operations with logical qubits, they'll be less prone to errors, and this should scale exponentially in the square root of n, actually. All right, okay. So let's look at how we describe these codes. If you've seen the surface code before, this will not be very new.
So, okay, what we have here is actually nine qubits, and they live at the vertices of this lattice, a three-by-three lattice. And you see these black squares and these white squares; what do they mean? The nice thing about these types of codes is that we specify this two-dimensional subspace, out of this two-to-the-nine dimensional space, very efficiently: namely, we say that these qubits have to obey some parity checks. And let's look at what these parity checks look like. For example, these four qubits (and that's denoted by this white square), their parity should be even, right? And you'll say: well, what's the operator that measures parity? Well, for qubits, this turns out to be the product of the Pauli Z's on those four qubits. And similarly, the parity of these two qubits should also be even, for the qubit vector to be in the code space, in this two-dimensional space. Now, you see there are also some black squares, and these black squares essentially do the same thing as the Z squares, except they do this in a rotated basis. Remember the Bloch sphere: you have a Z axis, and there is an X axis like this, and there is a rotation that relates the two. So you can just think about what we call the X check as a parity check in a rotated basis, where every qubit goes to the rotated basis. And the nice thing is that, as operators acting on multiple qubits, these actually commute. That is to say, this parity check on those four qubits commutes with this parity check on these four qubits, because they actually overlap on two qubits, namely these two here. All right. So the code space, this two-dimensional vector space, is the space where all the parity checks have eigenvalue plus one. And the thing to do to determine whether we're outside the code space is to measure the parity checks. And you do that using some ancilla qubits. And the nice thing about this is that you can place these ancilla qubits, which have to interact with the data qubits to measure the parity checks, in between these qubits at the vertices (they're not shown here). So you can have a 2D, planar connectivity between qubits, and this is nice for chip design, as you'll see a little later in the slides. Okay, so the disadvantage: what we want, perhaps, is not to encode one qubit into nine, but maybe one qubit into a thousand or ten thousand, which is huge, of course, and we're not there. We're now at the level of encoding one qubit into nine, or one qubit into 25. All right, so how does the error correction then proceed? Okay, so now these open dots are the ancilla qubits. For every parity check, I've placed an ancilla qubit in the middle of the check. So for example, this qubit should be interacting with these two and extract the parity of these two bits, or qubits; and the circuit I would do would be the same as if I just extract the parity of two bits. Okay, so now, without showing you how this actually works: the circuits that extract the parities are of this form. For example, to extract, let's say, this X check on these four qubits, I have to run a circuit where this open dot here is this qubit: I prepare it in a plus state, I do a bunch of CNOTs, I do some other things, and I measure it.
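As an aside on the commutation claim a moment ago: it can be checked directly, since an X-type and a Z-type check commute exactly when their supports overlap on an even number of qubits. A sketch (the qubit labels here are one arbitrary choice, not the labeling on the slide):

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def pauli_on(op, support, n=9):
        """Tensor product acting with `op` on the qubits in `support` (0-indexed)."""
        return reduce(np.kron, [op if q in support else I2 for q in range(n)])

    # Two hypothetical plaquettes of the 3x3 surface code sharing two qubits:
    z_check = pauli_on(Z, {0, 1, 3, 4})
    x_check = pauli_on(X, {1, 2, 4, 5})   # overlaps the Z check on qubits 1 and 4

    print(np.allclose(z_check @ x_check, x_check @ z_check))  # True: they commute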
And the idea of this parity-measurement circuit is that if a Z error happens on any of these four input qubits, it will flip the outcome of the measurement, right? So the error gets detected, but the outcome of the measurement doesn't quite tell which qubit underwent the error. And so I have to combine information from this measurement and that measurement, all these measurements, to infer what errors have happened. There's a similar circuit for measuring the Z checks, and for those Z checks, say on these four qubits, if there's an X error on any of those four, it will flip the outcome of these measurements. So we call a quantum error correction cycle the execution of these circuits, as parallelized as possible, for all the checks of the surface code, right? I'm not showing you why this works this way. If you've seen it before, then you probably understand; if you haven't, then you just have to trust me here. All right, so what happens, actually, is something like this. Let's say an X error has happened on this qubit. I don't care where it is in the code space; it's just a Pauli X error, so this bit is sort of flipped, if it were a bit. And this X error will be detected by these two Z checks, because it will flip the parity of this group and the parity of this group. And so that's why there are two blue dots here. And similarly, if there's an X error here, I will see some effect here; for a Z error here, I'll see some effect there; et cetera. So the information I will get are these blue dots, and I'll do this type of measurement over and over again, so I get these collections of blue dots, and then I have to say: well, what errors have happened? And of course, these parity check circuits are also executed with the same noisy gates, right? These CNOT gates are noisy and imperfect, the measurement is imperfect. So I don't always get the measurement that corresponds to the actual error; I may get measurements that I can't completely trust. So the task of what we call the decoder is to actually figure out what the actual errors were. Okay, so now: this little surface code can correct a single error. The claim is that if I have a single error on any of the nine qubits, an X or a Z error, I can correct it. And you can understand this as: the single errors lead to distinct patterns of blue dots, so that I can say, hey, it's this, or hey, it's that. And sometimes there's some ambiguity. Some ambiguities we don't care about, but we do care about this type of ambiguity: for example, it could be that there's an X error here in the middle, and we'd have these two blue dots; but those same blue dots would be caused by a pair of X errors, one here and one there, because this X error flips the parity here and this X error flips the parity there. So I wouldn't be able to tell the difference between this and that: the same blue dots. So if a single-qubit X error were equally likely as this thing, I'd be kind of lost, because I wouldn't know which to choose. So we say this code can correct only a single-qubit error, and what we do is we always choose a single-qubit error as the error that we think has happened. But that means that if a two-qubit error has happened, which we consider less likely, we will make a logical error, basically. What we have then: let's say a two-qubit error happens, like this one, and I'm inferring this one.
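The decoder logic just described, always blaming the lowest-weight error consistent with the observed dots, can be written down in its simplest classical caricature: the three-bit repetition code with two parity checks. This is a sketch of the idea, not the surface code decoder itself.

    # Toy decoder for the 3-bit repetition code: parity checks on bits (0,1)
    # and (1,2) play the role of the surface code's Z checks.
    import itertools

    def syndrome(error):                   # error = tuple of flipped-bit indicators
        return (error[0] ^ error[1], error[1] ^ error[2])

    # Lookup table: for each syndrome, the LOWEST-weight error that causes it.
    table = {}
    for err in sorted(itertools.product([0, 1], repeat=3), key=sum):
        table.setdefault(syndrome(err), err)

    print(table[(1, 0)])   # (1, 0, 0): blame a single flip on bit 0
    print(table[(1, 1)])   # (0, 1, 0): a middle-bit flip, not the 2-flip alternative

If the less likely two-flip error actually happened, the correction completes it to a flip of all three bits, which is exactly the logical error described next.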
So, with that pair of errors and my wrong inference, effectively all these three qubits have flipped, and all these three qubits flipping is actually what we call a logical error. Yep, so I haven't actually mentioned what logical operations are in this code. If we have the surface code, I said there's this two-dimensional subspace I have specified, and I can specify when I'm in the space: I have to obey the parity checks. But I can also say: well, what's the state zero in this space, what's the state one, and what's the operation for me to get from zero to one? The operation from zero to one is what I call the logical X, because it's the logical bit flip that flips the logical bit from zero to one. And there's similarly what we call the logical Z, which flips the logical qubit from plus to minus and vice versa. And an important feature of the code is what these logical operations are. Okay, so that was sort of my intro to the surface code, and we'll see this come back, because later in the talk we're gonna see some chips that are trying to implement the surface code. But just as a general consideration: there are many, many code families, and the surface code is only the tip of the iceberg. The surface code was actually introduced by Kitaev, not as the surface code, but first as the toric code, and it had a deep connection with topology. Actually, we should look at this torus. So the idea, to define the toric code, is that we tile the torus with a square lattice, like in this picture, and then we put qubits on the edges. So the physical qubits are on the edges. Of course, I can have a very fine tiling or a less fine tiling, and that sets the n of the encoding. And then we'll have checks that correspond to the faces: every face will correspond to a Z check, so all the edges around the face will have to have even parity. And with every vertex we also associate a check, and that is an X check. So if I have a vertex, there will be four edges coming out of this vertex, and I do an X check on every one of those four edges. Again, these checks commute. And then, why this is an interesting code: because the information that is encoded relates to the topology. The topology of a torus: well, there are two non-trivial loops, right? And the point is that each non-trivial loop corresponds to the logical operation of a qubit. So this code encodes two logical qubits, and one logical operation runs like this, and the other logical operation runs like that. And you can clearly see that if you take this tiling to be finer and finer, then the number of edges, the number of qubits that this logical operation goes through, gets more and more, which means that the logical operators get longer. And the longer they are, the more qubits the logical operation touches, the better protection you have, right? Because you want to avoid that errors cause logical operations. All right, so you can do this, but then you can also move away from this. You can say: well, instead of taking a square tiling, I could be trying to tile with pentagons. Here are some pentagons. The shape doesn't matter here, because this is some representation of the hyperbolic plane; the feature is that you have five pentagons coming together at one vertex.
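Back to the square toric code for a moment: the claim that it encodes two logical qubits can be verified by counting, k = n minus the number of independent X and Z checks over GF(2). A sketch, with one arbitrary edge-labeling convention for the torus:

    import numpy as np
    import itertools

    def gf2_rank(M):
        """Rank of a binary matrix over GF(2), by Gaussian elimination."""
        M = M.copy()
        rank = 0
        for col in range(M.shape[1]):
            pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]
            for r in range(M.shape[0]):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]
            rank += 1
        return rank

    L = 3                                  # 3x3 torus: n = 2*L*L = 18 qubits on edges
    n = 2 * L * L

    def edge(x, y, d):                     # d = 0: horizontal edge, d = 1: vertical
        return 2 * ((x % L) * L + (y % L)) + d

    HZ = np.zeros((L * L, n), dtype=int)   # one Z check per face
    HX = np.zeros((L * L, n), dtype=int)   # one X check per vertex
    for x, y in itertools.product(range(L), repeat=2):
        c = x * L + y
        for e in (edge(x, y, 0), edge(x, y, 1), edge(x, y + 1, 0), edge(x + 1, y, 1)):
            HZ[c, e] = 1                   # the four edges around the face at (x, y)
        for e in (edge(x, y, 0), edge(x, y, 1), edge(x - 1, y, 0), edge(x, y - 1, 1)):
            HX[c, e] = 1                   # the four edges meeting the vertex (x, y)

    print((HZ @ HX.T % 2).any())            # False: every pair of checks commutes
    print(n - gf2_rank(HZ) - gf2_rank(HX))  # 2: the torus encodes two logical qubits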
And again, for this pentagon tiling, you can put qubits on the edges here, and for every pentagon I have a parity check that acts on the five edges, and for every vertex I have a parity check that acts with X's on the five emanating edges. And again, these commute. And now the point is: if I try to tile the flat plane with these pentagons, it doesn't work. If I do that with pieces of paper, little pentagon-shaped papers, the paper will start to crumple, because I can't get the angles to add up correctly. And so this is why this is a tiling of the hyperbolic plane. But I could be tiling this, and then in the end I'd like to have some closed manifold. So here, like for the torus, I want to identify edges here at the boundary so that I close this up. And then what I get is actually something like this: a many-handled torus. And then, for every hole, you will have two logical qubits, like what you have here. Now, why is this actually interesting? You could also tile this with a square tiling; why would you do it with these pentagons? It turns out that, because of the hyperbolic geometry, you get codes that are much more efficient in terms of overhead. So I've been talking about encoding one logical qubit into n qubits; that's what the surface code does, and the toric code encodes two into n. But I could also try to encode k qubits into n, and I'll try to make k as high as possible. So I want n to be as small as possible and still get the best possible performance. Okay, so just as an example, we've examined some of this. These codes are called hyperbolic surface codes, and they are sort of the smallest departures from the actual surface code, in the sense that I can almost represent them in two dimensions: for a code of finite n, this sort of ends at some point, and I have to identify these boundaries, which is fairly nonlocal, but other than that it's not completely crazy. And it actually turns out that one of the smallest of these codes is related to the stellated dodecahedron as a polyhedron. So let me say one thing about that. I said we tile a torus, and every face becomes a Z check, and with every vertex we associate an X check. But I could also be tiling one of the Platonic solids, like a dodecahedron. But this is a topologically trivial figure, and therefore I'll actually encode no logical information. Still, I could be defining something: every face is a pentagon, so I put qubits on the edges, and I have a Z check with every face (it should act on five edges), and with every vertex, which acts on three edges, I have an X check. But because this is topologically trivial, I have no encoded information. And then, actually in the Renaissance, I think, people looked also at what was called the stellated dodecahedron, and the stellated dodecahedron is obtained by taking the dodecahedron and just extending the edges so that they meet at a point. So I'm extending this edge out, and this edge out, and this edge, and then they'll come to a point, like this here. So the core of this figure is like the dodecahedron, but then it has this stellation.
And now this is a vertex of my code, and I'm gonna associate an X check with that. So previously I had an X check associated with this vertex, but now I'm generating a new figure, and with every vertex of this stellated figure I associate an X check, and that acts on the five edges that are coming out of this vertex. And the original faces of my dodecahedron code will actually remain, because I'm still putting a qubit on each extended edge. All right, and the point about this is (you wouldn't think this) that this figure has non-trivial genus. It actually has genus four, and it encodes eight logical qubits, so two per hole. And this comes about because, while this figure has normal faces, it actually also has a very funny face: one face is like the original face of this dodecahedron, but now it becomes like this, this, this, this; those are the edges of the face. It becomes sort of a twisted face. And this is why this can have non-trivial topology. You can look at many more polyhedra, and you can say: oh, this is a code, it encodes this many qubits, and I have to do these parity checks, and so on. So these are nice, but you can say it's sort of a game, because I haven't seen anybody interested in representing or realizing this code, unfortunately. But it is a code that is very efficient, because this figure has 30 edges, so I start with n equal to 30, and I'm encoding eight logical qubits, and this code can correct a single error. So in some sense it's a lot more efficient as a code than the surface code. Now, you can see this figure is not necessarily embeddable in 2D, and especially if you take these hyperbolic surface codes and make them much larger, they become harder to embed. But I don't think it's an entire no-go compared to other code constructions that we know. All right, so I don't know how I'm doing on time, but now I want to switch. So this was my intro to the theory, making some connection with topology, and now I wanna talk a bit about the experimental issues. I'm gonna illustrate that with my favorite platform, which is superconducting transmon qubits, just explaining a little bit how errors come about, or what they look like in practice, because this is a slightly different story. Okay, so a lot of people actually worry or complain about overhead: for the surface code, I need to make a lot of qubits. Now, this in some sense is an understandable point. On the other hand, the point is really that we're trying to do things with classical control and classical hardware, electronics, that is sort of old school. And the point I wanna make is that I feel the field of quantum computing, in order to realize its potential, needs a lot of classical engineering development. This has not so much to do with quantum coherence or something; right now people take things off the shelf, but they need to develop these further. Of course, as a mathematician or theorist, this is not an area where you can necessarily help, but it is important to realize.
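The genus-four count above is a one-line Euler-characteristic computation, assuming the figure is the small stellated dodecahedron with its standard counts: 12 vertices, 30 edges, and 12 pentagram faces, each pentagram counted as a single face.

    # Euler characteristic of the small stellated dodecahedron: chi = V - E + F.
    V, E, F = 12, 30, 12
    chi = V - E + F                # -6
    genus = (2 - chi) // 2         # chi = 2 - 2g  =>  g = 4
    k = 2 * genus                  # two logical qubits per handle, as for the torus
    print(chi, genus, k)           # -6 4 8: n = 30 edge qubits encoding k = 8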
So, back to the overhead point: if you take a transistor, like a MOSFET, you say: okay, there is some charge on a capacitor. How many electrons are there? Well, there are gazillions of electrons, but of course I don't have to control them individually. Nobody would ever say there's a huge overhead of electrons, because you don't have to control them individually. The point about qubits is that you do need individual control, because you need to do individual measurements of the qubits (these physical qubits that are used for the surface code, you need to individually measure them, or typically this is what people do), and you need to do single-qubit gates and some controlled-NOT gates and so on. And that's where the real overhead is. All right, so meet a qubit: it's called the transmon. Now, a transmon is actually not a very complicated qubit, because a lot of physical systems are described by oscillators. If we have a pendulum in a gravitational field, this is a slightly anharmonic oscillator. You can drive it classically, and of course, if there's very little friction, the pendulum keeps on swinging. We can quantize the pendulum, and of course then there will be discrete energy levels, and there will not just be two of them: there will be an infinite number of them. And if the pendulum is approximated as a harmonic oscillator, all these levels will be equidistant. And so these levels are depicted like this. The lowest level we may call zero, the state zero of my qubit; the first one we call one; but then there's two and there's three, and in this diagram you see something that's not a harmonic oscillator but an anharmonic oscillator. We like the system to be anharmonic, because what we want is a qubit; we don't want a system with many, many energy levels. And the problem is, when we drive (this is the picture of driving: for example, we wanna do gates which map zero onto one), we don't wanna inadvertently excite these higher energy levels. And when these energy levels are equidistant, we cannot avoid this very easily. So the transmon is sort of like this anharmonic pendulum (actually the classical equations of motion are exactly the same), but it's realized as an electric circuit, or an approximate electric circuit. So there is an electric analogy of a harmonic oscillator: instead of kinetic energy getting converted to potential energy and vice versa, it is a system where magnetic energy gets converted to electrical energy and vice versa. And that is an LC oscillator. All right, so a transmon qubit is sort of an LC oscillator. And the first thing is that there is a capacitor. What's a capacitor? Just two large plates, which are usually metal plates. In our case we actually work with metal plates which are at very low temperature, and they're superconducting. So here you see two structures, and they form a capacitor. And then besides this, I have to have this L term, which is the inductor. And the inductor is a bit of a funny element, because these are two superconductors, and here is a very, very tiny structure: these two superconductors almost touch, but there is a very small oxide in between them. And what can happen is that pairs of electrons can tunnel from one of these plates to the other. And the description of that element is called a Josephson junction.
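To see why the anharmonicity matters for driving, here is a sketch with representative numbers: a transition frequency around 5 GHz and an anharmonicity around minus 300 MHz, which are typical transmon ballpark values rather than numbers from a specific device.

    # Transition frequencies of a harmonic vs a transmon-like anharmonic oscillator,
    # taking E_n ~ n*f01 + alpha*n*(n-1)/2, so the n -> n+1 spacing is f01 + n*alpha.
    f01 = 5.0e9       # 0 -> 1 transition, ~5 GHz (representative)
    alpha = -0.3e9    # anharmonicity, ~ -300 MHz (representative)

    def transition(n, anharm):
        return f01 + n * anharm

    print(transition(0, 0.0), transition(1, 0.0))      # harmonic: 0->1 equals 1->2,
                                                       # so a drive excites level two
    print(transition(0, alpha), transition(1, alpha))  # transmon: 1->2 sits 300 MHz
                                                       # lower, so a drive at f01
                                                       # barely touches level two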
Now, it's very hard to make these Josephson junctions very precise, because this oxide they have to put down is nanometers thick. And if you make it slightly thicker, it changes the qubit: it changes the qubit frequency, the energy difference between the levels zero and one; it changes other features of this qubit. And we have to operate this qubit at very low temperatures, because we work with superconductors. So this is operated in a special fridge, called a dilution fridge, at order 10 millikelvin. All right, so what are the errors on this qubit? I've talked about bit flip and phase flip errors. Now, it's true that any error can be decomposed into those, or into products of those, but physically the errors have a different origin. So the first point is: if you look at this qubit on the Bloch sphere, it's a point, but it's actually not sitting still. If I leave this qubit alone, and let's say it's very coherent, this vector is rotating; it's called precession on the Bloch sphere. And if I want to run some circuit, I'm assuming that if I don't do a gate, the qubit is sitting still, right? Now, fortunately, if the qubit is precessing like this, around the z-axis, the probability that I measure it to be zero or one is not changed: it doesn't matter whether I'm here or there, as long as my projection onto this z-axis is the same. Right, so it's not sitting still, but the frequency at which it's moving is related to the energy difference between zero and one, which I just showed. That's the characteristic frequency of the qubit; or, if I have an oscillator, it's just the frequency at which I have oscillations. That's what it relates to. Now, if this frequency is slightly wobbly, if it changes in time, then I don't know where this qubit is anymore, right? Is it here, is it there? Because sometimes it moves faster than at other times. That process is called dephasing, and there's a characteristic time over which I completely lose track of where I am in this plane: am I here, am I there? I don't know. For a transmon that happens in order 10 microseconds, 10 to the minus 5 seconds. Looks like a very short time, but of course we have to compare this with how fast we're actually doing operations on this qubit. So that is loss of phase information, because we say that there is a phase which says where I am in this plane; this phase, exactly. Now, there's another mechanism that the qubit is subject to, which is energy exchange with other systems. I said that this state zero has a different energy, lower in energy than this one. And the qubit is embedded in a very low-temperature environment, which means that the state one is more likely to drop down to zero and emit a photon to the environment than the reverse process. And this type of process, in which the qubit exchanges energy with the environment (which could be very general; there are many mechanisms that contribute to it, and sometimes it's hard to figure out what does what) causes some relaxation time, called T1. And this T1 is a measure of: if I were to prepare this qubit in one, and I wait for a while, what's the characteristic time after which I'll just find it in zero?
Because these qubits live at a very low temperature, in their steady state they're much more likely to be in zero. So if I put them all in one, then they'll sort of fall down to zero. This happens on a time scale of a similar order, tens of microseconds, and we're trying to stretch this: there are possibilities to lengthen this time, and also the dephasing time. So one of the advances that has to happen, and is happening, is that we can lengthen these times over which the qubit sort of falls apart without me doing anything to it, right? That's the first step. Then of course I wanna do gates; I wanna manipulate the qubits. For example, I wanna drive it from zero to one, or I wanna do this pi-over-two pulse on the Bloch sphere: make this trajectory from zero to the equator, or from zero all the way to one. And the way I do that is by applying microwaves. Why microwaves? Well, it turns out that this energy difference between zero and one is exactly in the microwave regime, so five gigahertz, right? This is a Wi-Fi frequency. And so you say: well, I get a coax cable and I put my microwaves in. Now, this doesn't happen at room temperature, and it doesn't happen at the high intensity of a regular microwave oven that you have at home. The number of photons you subject this qubit to is very, very low. So you generate the signal at room temperature, and then it's very much attenuated; it goes into this fridge, and there it manipulates this qubit and does this rotation, or, if you run it a little longer, a full rotation. And so you manipulate the qubit with microwaves resonant with this characteristic frequency, and then you can do this rotation in a gate time of order 10 nanoseconds. So that's good: nanoseconds, 10 to the minus 9 seconds. That's a thousand times faster than these coherence times, so I can do about a thousand of these things before the qubit falls apart, right? That's essentially where this error rate of 10 to the minus 3 is coming from. Yeah, so that's not bad. And the thing we don't want, which I mentioned before: because there are higher levels (there's zero, one, and there's also two), we don't want these microwaves to excite these higher levels. We call this leakage, and we have to deal with that in error correction. Okay, so the other thing we want to do is take two qubits on a chip and do controlled-NOT gates. Now, in practice, for these transmon qubits, people often like to do controlled-Z gates. For controlled-Z gates, the mechanism is essentially that, depending on the state of one qubit, whether it's zero or one, the energy difference between zero and one of the other qubit changes. And we invoke this mechanism as the way to do this gate. Okay, I'm not going to explain how to do this, but this also has a gate time of order 15 nanoseconds; again, pretty fast. So that's all good. The measurement of a qubit in this platform turns out to be more involved, and actually intrinsically takes longer: you see here, it's an order of a factor 10 longer than these single-qubit and two-qubit gate times. And this means that while some qubits are being measured, other qubits have to wait a long time, right? So for example, if I do quantum error correction, I have to measure ancilla qubits to determine whether errors have taken place, and while I'm doing that, I have to do that in a time slot like this.
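The back-of-the-envelope behind these numbers is just time spent divided by coherence time; a sketch with the representative values quoted above (the measurement time is simply assumed here to be a factor of 10 longer than a gate):

    # Order-of-magnitude error budget: time spent divided by coherence time.
    t_gate = 10e-9            # ~10 ns single-qubit gate (as quoted)
    t_cz = 15e-9              # ~15 ns controlled-Z gate (as quoted)
    t_meas = 10 * t_gate      # measurement: assumed a factor 10 longer
    T2 = 10e-6                # ~10 microseconds of coherence

    for name, t in [("1q gate", t_gate), ("CZ gate", t_cz), ("measure", t_meas)]:
        print(name, t / T2)   # 1e-3, 1.5e-3, 1e-2: the quoted error-rate ballpark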
And meanwhile, my actual qubits, the ones that are used to represent my logical qubit, are sort of falling apart. Well, not literally: this is still shorter than the coherence times, it's not microseconds, but it's fairly long. And actually, if you look at this measurement, it's a fairly non-trivial process. It uses another resonator: you probe that resonator, also with microwaves, and the frequency of that resonator is changed depending on the state of the qubit. And that frequency change is what you actually pick up; so it's quite indirect. And then you get data points like these blue or red or black points. These are the points that you get when the qubit is in zero, one, and two, and you get one sample out of this Gaussian-looking distribution, or this one. And then when you're in the blue region, you say: oh, I call it a zero; when you're in the red region, I call it a one. And of course there's some error that you incur by doing this, besides it being a fairly lengthy operation. All right. So, Scott also referred to overhead, and now let's look at something. This is a picture of a chip; it's, of course, just a schematic. There's a scale here, it's one millimeter, and these Pac-Man-like things are actually representing transmon qubits. And what this is, is actually a chip that's used for an experiment that I was involved in, by the DiCarlo lab at QuTech, on seven qubits. Out of these seven, I have four qubits that are representing my code: I'm encoding one qubit into four. And this is a code that can detect one error; it cannot correct it. For that, I would need to encode one into nine; the next bigger surface code can correct one error. And this error detection takes place using these three ancilla qubits: these are A1, A2, and A3. So in this picture I can identify here A3, A2, A1, and then there's D1, D2, D3, D4. And on these Pac-Man things, you see, the two capacitor plates are just the two parts of these things. But then there's a lot of infrastructure on this chip, and there's a legend here that says what's going on. Let's look at that a little bit, to understand the overhead. So orange, if you can see this, is what's called a drive line. It's like an in-plane coax cable through which you send microwaves at very low amplitude, and every qubit has its own drive line. Here is an orange line that goes to this one; there's an orange line that goes to this one; there's an orange line that goes to this one. So if you imagine that the chip is bigger, and everybody has to have a drive line, you can't get all these drive lines in from the side of the chip. You have to do something from the third dimension, which is what companies like IBM and Google have already figured out, and which we at Delft are still struggling with, so to say. Okay, so you have these drive lines because you want to do single-qubit rotations. Actually, in this experiment, in some sense you don't need it; well, for some things you don't need it, but it's nice to have this control. Okay, but then there's also something that you probably can't read, but it's called a flux line. Those are the yellow things; again, they come in from the side. And these are lines that are actually used to send in some current, and this current creates a magnetic flux through a little loop in this superconducting structure that is the qubit.
And effectively, it changes the frequency of the qubit. Now, why would you change the frequency of the qubit? This is actually, indirectly, a mechanism to do the controlled-Z gate. All right, okay, so fine, we need this too; maybe we're not gonna understand entirely how this works. Then there are structures called coupling buses. So these qubits, for example ancilla qubit A2, need to do a controlled-Z gate with data qubit D2. So here we have A2, here we have D2, and there is a coupling bus, which is also a piece of, well, it's a resonator, but there's a structure that goes in between, and that enables you to do a controlled-Z gate, right? And if you studied this thing, you would see that between D2 and D4 there is no coupling (well, actually, there is some coupling, but it is not necessary): only the ancilla qubits have to talk with the data qubits to do the gates. All right, what else do we have? Well, we have what's called a feed line, and these are also, again, microwave lines that are used to do readout. I think I mentioned readout: we need some other resonators, which are in pink, and these are structures like here, next to the qubit. So every qubit has another structure, and these structures are resonators with a characteristic frequency; their characteristic frequency is changed by the qubit state, and they're probed, again, by microwaves. All right, so this chip looks complicated, and you say: well, there are many lines, where do all these lines go, right? In the end, well, here you see a picture: these are the lots of lines that then come out of this chip. The chip is somewhere in here, and these are all the cables that you need to do all this massive control. Right, so you see the issue here: this classical control overhead is definitely a challenge when scaling up. And this is the overhead for Google's experiment with 49 qubits. All right, now if we look at the far future, at what we would need (and these are estimates for, like, if you wanna run a computation that factors a large number), we would need 20 million noisy qubits, right? That looks kind of crazy if you see the complication of this chip already. IBM has far-reaching plans, and other companies maybe too, to actually have this come about, but this is clearly far in the future. How am I doing on time? Okay, close to the end. All right, now I think I'm sort of wrapping up. So where we are now, well, depends on where you work. At Delft, we're trying to get this surface code of one into nine qubits to work, and there's been some problem with making a chip that's reliable enough to do these experiments, and we as theorists can't help that much with that. But at the end of the day, what we'd like to do there is: once they run an experiment, they get information from these measurements about the errors, and that information is then sent to us, and then we do the decoding. And the type of experiment that we imagine running (and it's been done by a group at ETH) is something like this. You prepare all the nine data qubits of the surface code either in all zeros or all ones. Let's say we do it as a game: I come into your lab, and I prepare all the qubits in all zeros, or all the qubits in all ones, and I'm not telling you which it is, right?
Then you go and measure parity checks repeatedly, and you do this for N cycles, right? So you measure the Z parity checks of the code N times, and the X parity checks of the code N times, and then you measure all the qubits in the Z basis. And then, on the basis of that information only, you have to tell me whether I prepared all zeros or all ones. That's the kind of game that you wanna play, and the idea is that the longer N is, the less likely it is that you'll be able to tell whether I prepared all zeros or all ones. That sets the memory time, so to say. Now, okay, this is a slight variant: all zeros or all ones is actually easier to do than preparing other bit strings, but it's the same idea. And you may wanna do the same thing preparing all pluses or all minuses: you do all these cycles, and then again you have to infer, from the last measurement that you do and all the error information that you found along the way, what the initial state was. So that is an experiment that asks: how long can I preserve a qubit? How long does the initial state remain there, at the end of the day? Okay, and then you get curves, sort of like this. So now I'm switching to representing some curves from a paper by Google, because they have really had rapid advances in this. This is a paper from 2023, in which they compared the performance of a surface code on a five-by-five lattice, which can correct two errors, versus a surface code on a three-by-three lattice, which can correct a single error; that's this one-into-nine encoding. And the idea of that experiment was to show that the logical error rate is lower for the bigger code: in this memory experiment, it will take more cycles to completely wash out the initial information for this scheme than for that scheme. And they actually compared the performance of these two codes in such memory experiments. Okay, so let's switch over to here. What we have on the horizontal axis is the number of quantum error correction cycles: how long I keep measuring these parity checks. And like I said, the longer it goes on, the more I lose my initial information, and that is captured by the logical fidelity, right? In the end I have a probability of one half of saying it was all zeros or all ones; I'm basically completely clueless. And the point is that the blue curve, the data points for the experiment with the five-by-five lattice, lies slightly above the red curve, the data points for the three-by-three lattice. So it says that, if only slightly, the performance of the bigger lattice is better. And this is, of course, the essential point we want to show: that more redundancy, so a five-by-five lattice, and soon a seven-by-seven lattice (or they will have a seven-by-seven lattice), gets you better performance. And this is a very crucial transition point, because once you're above it, you think you can go to nine-by-nine, and, assuming that the physical error rates stay the same, the logical error rate will drop exponentially with this linear size, the distance d equal to three, five, seven; it will go down very rapidly, right? So that's where we want to be.
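The scaling being chased here, as a sketch: the standard heuristic is that below threshold the logical error rate falls like (p over p_th) to the power (d+1)/2, where d = 3, 5, 7 is the linear lattice size. All constants below are illustrative only, not Google's numbers.

    # Heuristic logical error suppression with distance d:
    # p_L ~ A * (p / p_th) ** ((d + 1) / 2).
    p, p_th, A = 0.005, 0.0075, 0.1
    for d in (3, 5, 7, 9):
        p_L = A * (p / p_th) ** ((d + 1) // 2)
        print(d, p_L)
    # Each step d -> d + 2 multiplies p_L by p/p_th (= 2/3 here): modest per step,
    # but compounding exponentially once the physical rate is below threshold.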
And they're sort of almost there, or not quite, I would say, right? Because it's still complicated. All right, so that leads me to my last slide, after all these technological hiccups. So where do we stand? As I said, the journey will be long, and I want to illustrate that with a slide from a Science paper from 2013, so that's 10 years ago, where this sort of outline was made: here's time, here's what we can do. We can do operations, single-qubit gates; yeah, we're done with that. We can run some variational algorithms; done with that. We can do pretty good measurements. And now we're trying to make a memory which lasts longer the more redundancy we have, or lasts longer than what we can do with physical qubits. And the point I want to make is that we're kind of still at this step, right? The 10 years have passed, but we're still making those strides. And what we can expect in the future: some people have already done operations on single logical qubits, but then of course you want to run algorithms on multiple logical qubits, and then this holy grail, fault-tolerant quantum computation. Okay, so that was my overview of what's been happening in this area. Thanks for your attention.

Thank you very much, it was a great talk. So is there time for a few questions? Yeah, back there. Yeah, you.

I think it would be helpful; I mean, they do publish information, of course. But maybe there are some secrets; I think there are some materials science secrets that you don't always hear about, and also some chip fabrication or chip variability things that are not very well shared. And maybe we should ask them, when we referee their papers, for more information about that. Yeah, and of course they also patent a lot of stuff, to make sure that they get credit for it at some point. Okay, any other?

So a lot of the quantum algorithms right now are using CNOT, but the physical devices, like the one that you just showed, are using CZ. It feels like people are thinking the Hadamard gate is kind of free of errors. Is this a wrong understanding of what's happening?

Yeah, the Hadamard gate is one of the single-qubit gates. I didn't quote the times specifically, but the error rate of the single-qubit gates is typically quite a bit lower than that of a two-qubit gate. So doing an extra Hadamard gate is not a big problem.

So if I want to calculate the fidelity of my quantum circuit, can I safely assume that if I need a Hadamard gate, it's kind of free of errors, and I can ignore it?

Yeah, well, it's a good first approximation; of course it depends on what accuracy you want to reproduce things with, but yeah, I think that's fair. I think the point is also that if the controlled-NOT gate or the CZ gate fails, it can put two errors down, right? It can put a correlated error down, and that is also not so nice, whereas a failing single-qubit gate gives a single-qubit error.

Right, sorry, I have another question. So these coherent errors are somehow related to crosstalk, when we're talking about the superconducting chips. And when I was at another conference, people were telling me that although crosstalk could be handled by dynamical decoupling, in fact it's actually very, very hard to realize.
So as theorists, sometimes people would just say, well, you could put it on paper that, oh, we can ignore crosstalk because we have the solution. But sometimes people from the experimental side tell me this is too naive. So how should one balance that?

Well, they do dynamical decoupling. For example, when ancilla qubits are measured, you're doing dynamical decoupling on the data qubits, because then you have a longer phase coherence; the dephasing time improves. So it's not, I mean, I think the point is that more complicated dynamical decoupling schemes may not be the way to go. It's the basic schemes that are actually used quite extensively, at least for the transmon qubits.

What are the basic schemes?

Well, spin echo, for example. That's a very simple one. That, and CPMG. So there are some very basic things; it's not like it takes a new theory to develop these. That's what I wanna say. Thank you.

Okay, maybe one other quick question. Okay, well, if not, let's thank Barbara again for a lovely talk.