I'd just like to remind you that at 3:15 we have a panel on career paths here. I think it will be a good discussion, so please stay tuned. And now we have the lecture by Jeongwan Haah. He did his PhD at Caltech, then he was a postdoc at MIT, and now he's a senior researcher at Microsoft, and he'll tell us about topological codes. Thank you, I'm very glad to be in front of the future enthusiasts of quantum computing. The general title of this mini course is topological aspects of quantum codes. Now you may wonder: topology, what's topology? Well, technically a topology is a collection of sets satisfying four axioms: the empty set and the total set belong to the collection, and arbitrary unions and finite intersections of members are again in the collection. Could you speak a little louder? I can barely hear you. Does this work better? Yeah, if you want. Okay. So in mathematics topology has a very specific meaning: it's a collection of open sets, and it's a definition of what an open set means. But the usage has become somewhat broader, and now when people talk about topological codes, or topological phases in the physics or coding literature, it no longer refers to any open sets, but rather to some phenomenon with a discrete nature arising out of many small constituents. And that's precisely what topology is about: you define a topological space by local data, then you ask about the global structure, and you end up deriving some discrete data out of that, the topological invariants. So in that sense it is called topological. I hope you will have some sense of how the adjective "topological" has been used in the literature after these four hours. So this is a rough plan: today we're going to derive some bounds on local codes. Of course I will define what local codes are. Then we'll talk about the one canonical example of topological codes, the toric code, and introduce the connection to homology.
I do not assume any knowledge of homology, so don't be afraid if you don't know it. Then in the third hour we will look into the complexity of generating a particular code state. This is akin to complexity-theoretic questions: there is an object whose cost you want to estimate, and you want to show that it is hard under a variety of metrics. We will see such an instance, in this case the number of local gates needed to create the target state from a trivial state. And then in the fourth hour we will switch gears slightly and talk about transversal gates, primarily focused on the T gate. There is a topological element there, so I included it. Roughly speaking, local codes, and this is not a technical term, so I don't have to write anything down, means codes that are defined by local data. That's all it means. You have to figure out what it means to be local. Throughout my hours, local means that you have some metric space in the game, a very specific metric space, usually a plane or a line; some low-dimensional Euclidean space is enough to have in mind. And in that context, local means something that acts on or probes nearby degrees of freedom, a few nearby qubits. This is different from the local Hamiltonians that you heard about in the first hour today in Sandy's talk, where local refers to an operator acting on just a few qubits. There is no underlying metric space there; the notion of locality is just a number. Here I always assume geometric locality; there is always an underlying metric space. Okay, let's start with the classical Singleton bound. It's a crude bound; Singleton is a name. There are more general versions, but let me just focus on linear codes. So classically, and this is by the way a definition, a linear code is a linear subspace of your binary vector space, that's it. That's a linear code.
So obviously there's the zero element, there are some non-zero elements, and every element of C will have some zeros and ones and so on. And people speak of the minimal distance, the minimal distance between any two elements of this set C. But what do you mean by distance? This is not a metric space yet. By distance, here I mean the Hamming distance. The Hamming weight of a vector is just the number of ones, that's it. And the distance between two vectors is the Hamming weight of their sum. Since we're working over the binary field, there is no distinction between subtraction and addition; they're the same thing. So you count the number of positions, one, two, three, four, five and so on, where v and w are different. And since this is a linear code, the minimal distance is the minimum weight of a non-zero vector, which is the distance from the all-zero vector to a non-zero vector. So that parameter is d, the minimum distance of C. And n is the ambient dimension, the number of components in the column, and k is the dimension of C as a linear subspace, that's it. And the Singleton bound says that your distance cannot be too large if your code is encoding a large number of classical bits: k ≤ n − (d − 1). Why is that? I drew a column vector, but let me write it as a row vector. So this is a typical element, some general element of C. Now imagine that we just cross out a few bits. How many? I want to cross out d − 1 bits. You enumerate all the elements of the linear space, erase those bits, and look at the remaining part. Can there be any collision, meaning that after the erasure, two vectors look exactly the same? Yes or no question? No. Right. Why? Because if there were a collision, the two vectors could only differ in the erased coordinates, but the number of those is d − 1, less than the minimum distance, violating the definition of d. So there is no collision.
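The weight, distance, and erasure argument above can be sketched in a few lines; the helper names and the choice of the repetition code are my illustration, not from the lecture.

```python
# A minimal sketch of the erasure argument behind the classical Singleton
# bound, using the [3, 1, 3] repetition code as an illustrative example.

def hamming_weight(v):
    """Number of ones in a binary tuple."""
    return sum(v)

def hamming_distance(v, w):
    """Number of positions where v and w differ; equals the weight of v + w over F_2."""
    return sum((a + b) % 2 for a, b in zip(v, w))

# The repetition code C = {000, 111}: n = 3, k = 1, d = 3.
C = [(0, 0, 0), (1, 1, 1)]
d = min(hamming_distance(v, w) for v in C for w in C if v != w)
assert d == 3

# Erase (cross out) any d - 1 = 2 coordinates: no two codewords collide,
# because colliding codewords could differ only on the erased d - 1 < d positions.
erased = [v[:1] for v in C]          # keep only the first coordinate
assert len(set(erased)) == len(C)    # the truncation map is injective
# Injectivity forces k <= n - (d - 1): the Singleton bound.
```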
So you have a map from C down to F₂ to the power n minus (d − 1), since I deleted d − 1 coordinates, and this map is injective. So the dimension of the domain must be at most the dimension of the codomain, and that's the Singleton bound. Now this is a purely classical statement, and there's a quantum analog, which is what we want to prove. The quantum Singleton bound reads: 2(d − 1) ≤ n − k. Now, I did not define what a quantum code is, but if you remember Nikola's talk last week, I'm using the same definition. So I'm only thinking about Pauli stabilizer codes. Such a code is defined by a group of tensor products of Pauli matrices: there will be some number of generators, they should all commute, and the group should not contain minus identity. That's the definition of a Pauli stabilizer code. And the number of generators of the Pauli stabilizer group has a simple relation with the number of encoded qubits: the number of encoded qubits is exactly the ambient number of qubits minus the number of independent constraints, k = n − (number of generators). So I'm just reviewing what Nikola told you last week. In this context we have n and k well defined: n is the number of qubits, and k is the number of logical qubits encoded in the code space, given by that formula. And d is something different. Previously, d was just the minimum distance between any two distinct elements. In the quantum setting a similar thing holds, but the interpretation is different. There we were talking about the codewords themselves; here we are talking about the logical operators. The logical operators are the commutant of the Pauli stabilizer group in the full group of Pauli operators: a logical operator is any Pauli operator that commutes with the stabilizer. That's the definition. And of course, the meaningful part that you want to focus on is modulo stabilizers.
So d here is going to be a Hamming distance again: the number of qubits on which two logical operators differ in their Pauli tensor factors, minimized over inequivalent logical operators. That's the distance; it's the same definition as Nikola used last week. So we have definitions of all three parameters, and we get an inequality. That's the claim of the quantum Singleton bound. The difference from the classical case is the coefficient two. Two appears in many places in the quantum setting when you compare it to the classical one, and this is one instance. So my first goal is to prove that inequality. And what's more important is not the inequality itself, but the derivation: there's a recurring theme of ideas that applies to deriving such inequalities. To that end, I want to introduce what is called the cleaning lemma. The statement is this. We have some number of qubits. Oh, by the way, I drew a square, but there's no metric space underlying it yet; whenever there is, I will mention it, so don't worry. So suppose there are n qubits, and suppose I have some subset of qubits called A. And I assume that A is correctable. I'll be using the word correctable many times, and here is the definition: any logical operator supported on A has to be a stabilizer itself. It's a property of A, realized through all the logical operators supported on the region A; the condition is that each of them has to be a stabilizer. An equivalent definition goes through the code space projector. If you have studied any representation theory, this formula might be very familiar to you: the sum of all elements of a group, normalized by the group order. Whenever the group is represented by unitaries, and here it is, because the Pauli stabilizer group is a subgroup of the Pauli group, so each g is a unitary and I'm taking a linear combination of unitaries, you can show that this is Hermitian and a projector.
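As a quick numerical sanity check of the group-average formula, here is the smallest interesting example; the three-qubit bit-flip code with stabilizers Z₁Z₂ and Z₂Z₃ is my illustrative choice, not one from the lecture.

```python
# Check that Pi = (1/|S|) * sum_{g in S} g is a Hermitian projector,
# for the 3-qubit bit-flip code with stabilizer group <Z1 Z2, Z2 Z3>.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

g1 = kron(Z, Z, I2)   # Z1 Z2
g2 = kron(I2, Z, Z)   # Z2 Z3
# The full group S = {I, g1, g2, g1 g2} has |S| = 4 elements.
S = [kron(I2, I2, I2), g1, g2, g1 @ g2]
Pi = sum(S) / len(S)

assert np.allclose(Pi, Pi.conj().T)   # Hermitian
assert np.allclose(Pi @ Pi, Pi)       # projector
# rank(Pi) = 2^k with k = n - (number of generators) = 3 - 2 = 1
assert np.isclose(np.trace(Pi), 2.0)
```

Here Π projects exactly onto the span of |000⟩ and |111⟩, the code space of this code.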
So this is my code space projector. An equivalent condition is that whatever operator you give me, not necessarily a Pauli, supported on A, acting on A and tensored with identity elsewhere (I do not write the identity factors), when sandwiched by the code space projector, which is some large two-to-the-n by two-to-the-n matrix, is a scalar multiple of the projector itself: Π O_A Π = c Π. If you read this equation as a map: I project down to the code space, apply the given operator, and project back to the code space, then overall I remain where I started, and the action was just scalar multiplication, the most boring operation you can think of. Yes? No, no. Supported means: if you have a bipartite system, say A and B, how do you construct the operator algebra acting on it? You take all the matrices acting on A, all the matrices acting on B, and you take the tensor product of the two matrix algebras. O_A means it may be non-trivial on A, but it has to be identity on B. That's the definition of an operator supported on A. No, A is a subset of qubits. Was there another question somewhere? No, okay. It was not planned, but let me just prove this equivalence, because it's going to recur afterwards, so I think it's important. Whenever you encounter an operator equality in the context of Pauli stabilizer codes, think about the Pauli expansion. One nice thing, and this is actually the very reason people first invented Pauli stabilizer codes, is that Pauli operators form an operator basis: you can write down any operator as a linear combination of Pauli operators acting on the whole system. In this case, each of them is supported on A: O_A is a linear combination of Paulis P_j supported on A. Sorry, I'm mixing up subscripts and superscripts, but I'm sure you'll understand.
Now, suppose I have that. Let's calculate this quantity, the operator sandwiched by the code space projectors; by linearity it is a sum of terms Π P_j Π. Now, Π is a sum over a group, so what can I do? I can insert any element g of my stabilizer group. If you remember the sum-over-all-elements formula: since it's a group, whether you multiply by an element from the outside or not, the sum stays the same. Oh, let me separate the generating set from the group: S is the group. So in particular I can insert g into the projector on either side, no problem. Another nice thing about Pauli operators is that any two either commute or anticommute; there is no other possibility. So if P_j commutes with g, the term stays the same; if it anticommutes, it picks up a minus sign. So the sum decomposes into two sums: one over the P_j that commute with everything in S, and one over those that anticommute with some element. Depending on where I insert g, the anticommuting part picks up a minus sign, but the total must be unchanged either way. How could that be possible? It's zero. Clear? I mean, if x plus y equals x minus y, how can that be true? y is zero. So the sum is constrained to the commuting Paulis only. Now, each surviving P_j supported on A commutes with the whole stabilizer group, so it is a logical operator. But I assumed A was correctable, so they all must be stabilizers, which means they all are absorbed into Π. What you're left with is just the coefficients, and that's precisely the formula Π O_A Π = c Π. And you can see the converse is also true the same way. So yes, correctable means either the absence of non-trivial logical operators on the region, or that the sandwiched action on the code space is scalar multiplication. All right, that was a bit of a digression. Now let's get into the proof of the cleaning lemma. I want to decompose my stabilizer group into three pieces: those supported on region A, those supported on region B, the complement of A, and a remainder. Is there any intersection between the first two subgroups? Well, by definition such an element would have to be supported exclusively on A and at the same time exclusively on B.
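The equivalence just proved can also be tested numerically. The five-qubit [[5,1,3]] code with its standard generators XZZXI (and cyclic shifts) is my illustrative choice here, not an example from the lecture; any region of at most d − 1 = 2 qubits is correctable for this code.

```python
# Numerically verify: for a correctable region A, Pi O_A Pi = c * Pi,
# while a genuine logical operator is not reduced to a scalar.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli(word):
    """Tensor product of single-qubit Paulis given as a string like 'XZZXI'."""
    table = {'I': I2, 'X': X, 'Z': Z, 'Y': 1j * X @ Z}
    return reduce(np.kron, (table[c] for c in word))

# [[5,1,3]] code: stabilizer generators are XZZXI and its cyclic shifts.
gens = [pauli(w) for w in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]

# Code space projector: average over all 16 products of generator subsets.
Pi = np.zeros((32, 32), dtype=complex)
for mask in range(16):
    g = np.eye(32, dtype=complex)
    for j in range(4):
        if (mask >> j) & 1:
            g = g @ gens[j]
    Pi += g
Pi /= 16

def sandwich_is_scalar(O):
    """Check whether Pi O Pi is a scalar multiple of Pi."""
    M = Pi @ O @ Pi
    c = np.trace(M) / np.trace(Pi)
    return np.allclose(M, c * Pi)

assert sandwich_is_scalar(pauli('IIIII'))   # trivially, c = 1
assert sandwich_is_scalar(pauli('XIIII'))   # weight-1 operator: c = 0
assert sandwich_is_scalar(pauli('XZIII'))   # weight-2 operator: c = 0
# The logical operator XXXXX acts on all five qubits; it is not a scalar:
assert not sandwich_is_scalar(pauli('XXXXX'))
```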
So the only possibility is identity, and they form a direct sum. And, right, the third component. Another nice thing about the Pauli matrices: we are working over binary qubits, not arbitrary four-dimensional or six-dimensional degrees of freedom. Two is a prime. So the Pauli stabilizer group, or in general the group of Pauli operators, if you forget all the phase factors, becomes a vector space over the binary field: multiplication of operators corresponds to addition in that vector space, because the commutation relation only gives you a sign, which we are ignoring. So S is a subspace in that vector space. And for vector spaces it is important that every subspace has a direct complement. So there is some subgroup S′ that completes this inclusion into an equality: S = S_A ⊕ S_B ⊕ S′. Here the fact that we are working over prime-dimensional degrees of freedom is very important. Qubits: fine. Qutrits: fine. Four-dimensional degrees of freedom: not fine. All right. Sorry? S_A is the subgroup of S consisting of those elements supported on A, only on A. S_B similarly. S′ is the direct complement. The choice of S′ is not unique, inasmuch as you can take any transverse subspace in a vector space. Now I want to consider a truncation map π_A from the full Pauli group onto the Pauli group restricted to A. Whatever Pauli operator you give me, say X₁Y₂Z₃: if A is the first qubit, then I keep only X₁ and just erase the latter part. In terms of the vector space, I had some components and I zeroed out everything except the first component. So let's apply this map to S. What happens? Obviously the image will contain those elements already supported on A, it will completely zero out the S_B part by definition, and then there is whatever comes from the S′ piece. But is this map somewhat special on S′? That's my first question, and my claim is that it is injective there. Why? We don't even have to write anything.
If anything maps to zero, then by definition the starting element was supported on B. So an element of S′ that maps to zero belongs to S′ intersected with S_B, which is trivial. So this map is injective on S′, and that part survives. The same applies to the B side. Now let's do some basic linear algebra. Where's my eraser? Oh, that's my hand. There could be a generator, well, an element, that straddles the two regions. Say my stabilizer group is generated by a single element that straddles A and B, so it's a group of order two. Then what is S_A? Is there any element supported exclusively on A? No. What about S_B? Nothing. So the whole group equals S′. I haven't argued anything yet; I'm just identifying that the image of S under π_A is S_A plus the image of S′, and that image of S′ is isomorphic to the original S′ as a linear space. Now let's interpret. Oh, I realize I didn't even state what the cleaning lemma is claiming, I'm sorry. Let me do it here. If A is correctable, then, and this is the claim, B supports a complete set of representatives of the logical operators. So whatever you want to do with the code, you have to access logical operators, and you can always do that by accessing only the B part, without ever touching the A part. Why is it called cleaning? In general, if you give me an arbitrary logical operator, it will have some component on the A piece and some component on the B piece. Since B supports a complete set of representatives, and logical operators always come in equivalence classes modulo S, I can always find a logical operator equivalent to the original one you gave me that is supported on the complement of A: the support has been cleaned off of A. Yeah, let's not argue about the terminology; it's widely called correctable. All right. So yes, complement.
So what does it mean to have a complete set of logical operators? It means that if you look at the logical operators supported on the B part, you find everything; after you mod out by the stabilizers supported there, the count should be the number of logical qubits. Quantitatively, 2 times k, because one logical qubit is specified by a pair of logical operators, not one. So 2k should be equal to the dimension of, yes? Just go forward. Oh. Is this better? Yeah, how do you find the commutant? If I gave you an exercise sheet asking you to find the logical operators, what would you do? Gaussian elimination. You write down all the bases for the Pauli operators and you find the kernel of the relevant map, so it's linear equation solving. Let me say it again: when you want to find the logical operators supported on a specific set, you look at the restriction of the stabilizer group to that region, because that's all that counts; you take the commutant of that restriction within the Pauli operators on the region, and you mod out by the trivial logical operators, the stabilizers, supported on that region. Now, the linear algebra point I noted earlier is that you're solving linear equations, and the quantity in the parenthesis, the restricted stabilizer group, gives you the constraints. Where are you solving the linear system? In the Pauli group supported on the region, which has dimension two times the number of qubits there. Oh, why is it always equal to 2k? Because, oh, that's the goal, actually. This is the goal, I'm sorry. The claim is that B is going to support everything, so I need to count the number of logical operators supported on B, the complement. Everywhere B, let me write B. Just dimension counting. On the other hand, you can do the same calculation for the A part, and we know by assumption that the result must be zero. Why? Because I just assumed that every logical operator supported on A is a stabilizer. So zero equals the corresponding count for A. Let's add them up.
So the quantity we are interested in equals the count for B plus zero, the count for A. Let me abbreviate my notation: brackets mean dimension. I literally just added the two lines, that's it. But we have a better idea about those quantities from the earlier calculation. What are they? The restriction of S to B has dimension [S_B] plus [S′], and the restriction of S to A has dimension [S_A] plus [S′], because π_A on S′ is injective; that's the place I needed injectivity. So the B count is 2[B] − 2[S_B] − [S′], and the A count is 2[A] − 2[S_A] − [S′], which is zero. Now, the number of qubits in A plus the number of qubits in B is the total number of qubits. [S_B] appears twice, [S_A] appears twice, [S′] appears twice. So the sum is 2n − 2[S], which is exactly 2k, as promised. So that's the cleaning lemma: if you have a correctable region, then the complement has all the resources for you to do anything you want on the logical space. Now let's prove the quantum Singleton bound. The first step is to show that the number of qubits is at least twice d − 1. Why? If not, by a pigeonhole principle I can find a tripartition in which A ∪ B consists of d − 1 qubits and B ∪ C consists of d − 1 qubits. Now d − 1 is by definition smaller than d, and d was the minimum number of qubits you have to access to enact any logical operation. So B ∪ C is correctable by the definition of d, and similarly A ∪ B is correctable. Now apply the cleaning lemma. Applied to A ∪ B, the conclusion is that C alone supports a complete set of logical operators; by symmetry, A alone supports a complete set of logical operators. Now, if this code encoded any logical qubit, I could hand you, say, the A part and send it to some other galaxy, while I retain part C. The observer in the distant galaxy can do whatever they want in the code space, and I can do whatever I want in the code space. But that's precisely what the no-cloning theorem says is impossible. So that's a contradiction.
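The cleaning lemma can also be checked by brute force on a small code. The example below is my own illustration, not from the lecture: it cleans the logical operator XXXXX of the [[5,1,3]] code off qubit 0, working in the phase-free symplectic picture described above.

```python
# Cleaning on the [[5,1,3]] code: multiply the logical XXXXX by stabilizer
# elements until its support avoids qubit 0 (a correctable single-qubit region).
from itertools import product

def to_xz(word):
    """Pauli word -> (x, z) bit tuples over F_2, phases ignored."""
    x = tuple(1 if c in 'XY' else 0 for c in word)
    z = tuple(1 if c in 'ZY' else 0 for c in word)
    return x, z

def mul(p, q):
    """Multiply two Paulis in the phase-free symplectic picture: bitwise XOR."""
    (px, pz), (qx, qz) = p, q
    return (tuple(a ^ b for a, b in zip(px, qx)),
            tuple(a ^ b for a, b in zip(pz, qz)))

gens = [to_xz(w) for w in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
logical_x = to_xz('XXXXX')

# Enumerate all 16 stabilizer elements and collect the representatives of the
# logical class that act as identity on qubit 0.
cleaned = []
for bits in product([0, 1], repeat=4):
    g = to_xz('IIIII')
    for b, h in zip(bits, gens):
        if b:
            g = mul(g, h)
    cand = mul(g, logical_x)
    if cand[0][0] == 0 and cand[1][0] == 0:   # identity on qubit 0
        cleaned.append(cand)

assert cleaned, "cleaning failed"
# Every cleaned representative is still a nontrivial logical: weight >= d = 3.
for x, z in cleaned:
    weight = sum(a | b for a, b in zip(x, z))
    assert weight >= 3
```

One of the cleaned representatives is (up to phase) I Y Y I X, a familiar weight-3 logical of this code.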
It's a consequence of the linearity of quantum mechanics. So this inequality is checked. Now, since I have a sufficiently large number of qubits, I can define a tripartition, labeled A, C, B for consistency with the later discussion, where A alone consists of d − 1 qubits, B alone consists of d − 1 qubits, and C is possibly empty. But this configuration is feasible. Now comes an interesting part, an information theory argument. Suppose I bring in a reference system R such that the code space is maximally entangled with it. The reference system has no special structure: it is just C to the 2 to the k, a k-qubit Hilbert space, maximally entangled with the code space. Now I claim that, with respect to that maximally entangled state, the mutual information between A and R is zero, and at the same time the mutual information between B and R is zero. Those are my claims. How do I see that? Well, I don't know how to calculate the entropies of these abstractly given subsystems directly; that's too hard. But what I can show is that I can calculate correlation functions. Sorry? No, this is the first step of that claim; I'm running the second step of the overall proof. Oh, R is maximally entangled with the code space: you pick whatever your favorite basis is in the code space, and then write down the maximally entangled state with R. Now, physicists use the word correlation to mean a very specific thing. I have to provide two observables, O_A and O_B, and the correlation is the expectation value of O_A times O_B with respect to the given state, minus the product of the individual expectation values. Some authors put a double bracket, some don't. This is the correlation between O_A and O_B, that definition. I leave you an exercise to show that if this vanishes for all O_A and O_B, then the mutual information between the two regions is zero. But let me just explain why it should vanish here.
ρ being maximally entangled within the code space means that my code space projector commutes with ρ, and moreover it is absorbed: hitting the state with the projector does nothing, because the state already lives there. But it's more than just words, because Πρ, while unphysical as an operator, makes mathematical sense. So what can I do? I can insert Π here and there, and using cyclicity of the trace I can move this Π over here. Let me change B to R, I'm sorry: we are computing the correlation between the region A and the reference R. Π and R have no overlap; they act on completely different factors, so Π moves past the operator on R. Now we get the familiar quantity. Remember, the correctability criterion means that an operator sandwiched by the projector is a scalar multiple of the projector. So this whole thing becomes a scalar, and you do the same thing on the other side and show that the correlation is zero. So the mutual information between a correctable region and the reference system is zero. Now you do the usual information theory tricks. Yes, what does that mean? Oh, the mutual information, you expand: S(A) + S(R) − S(AR) = 0, and similarly S(B) + S(R) − S(BR) = 0. Now let's specialize. We know one of these quantities: the reduced density matrix on the reference system has no structure; all we know is that it is maximally entangled with the rest. What's the entanglement entropy of a maximally entangled state? The dimension? Well, the log of the dimension. So S(R) is equal to k. And the whole state, the maximally entangled state, is a pure state. One nice thing about the von Neumann entropy is that for a pure state, the entropy of a region is equal to the entropy of the complement. So AR, what's the complement? BC. The complement of BR is AC. Now everything is expressed in terms of A, B, C, and k.
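The two claims, I(A:R) = 0 for a correctable region and S(R) = k, can be verified directly on a small example. The [[5,1,3]] code, the two-qubit choice of A, and the helper functions below are my illustrative assumptions, not from the lecture.

```python
# Decoupling check: maximally entangle the [[5,1,3]] code block with a
# 1-qubit reference R, then verify I(A:R) = 0 for a 2-qubit region A.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli(word):
    table = {'I': I2, 'X': X, 'Z': Z}
    return reduce(np.kron, (table[c] for c in word))

# Code space projector of the [[5,1,3]] code.
gens = [pauli(w) for w in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
Pi = np.zeros((32, 32), dtype=complex)
for mask in range(16):
    g = np.eye(32, dtype=complex)
    for j in range(4):
        if (mask >> j) & 1:
            g = g @ gens[j]
    Pi += g / 16

# Logical basis: |0_L> ~ Pi|00000>, |1_L> = XXXXX |0_L>.
zero_l = Pi[:, 0] / np.linalg.norm(Pi[:, 0])
one_l = pauli('XXXXX') @ zero_l
# Maximally entangled state between the code block and the reference qubit R.
psi = (np.kron(zero_l, [1, 0]) + np.kron(one_l, [0, 1])) / np.sqrt(2)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def reduced(psi, keep):
    """Reduced density matrix on the listed subsystems (qubits 0..4, R = 5)."""
    t = psi.reshape([2] * 6)
    drop = [i for i in range(6) if i not in keep]
    t = np.moveaxis(t, keep + drop, list(range(6)))
    t = t.reshape(2 ** len(keep), 2 ** len(drop))
    return t @ t.conj().T

A, R = [0, 1], [5]
I_AR = entropy(reduced(psi, A)) + entropy(reduced(psi, R)) - entropy(reduced(psi, A + R))
assert abs(I_AR) < 1e-9    # a correctable region knows nothing about the logical info
assert abs(entropy(reduced(psi, R)) - 1.0) < 1e-9   # S(R) = k = 1
```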
So after some manipulation, which I'm not going to spell out: you just add the two, and then apply subadditivity of entropy, that the entropy of a union is at most the sum of the entropies of the participating regions. The conclusion is that k is less than or equal to the entropy of C. But how large can the entropy of C be? I erased one set of d − 1 qubits and another set of d − 1 qubits, so the number of qubits in C is n − 2(d − 1), and that's the maximum entropy possible. And this inequality is proved. So far there's no locality or anything. But now let's apply this to a local code on the torus, and that will be the next part of today's hour. Yes? Ah, yeah, A, B, C are just local variables that stand for generic regions; there's no consistency across the hour, sorry. A, B, C is literally just this tripartition, where the only condition is that A consists of d − 1 qubits and B consists of d − 1 qubits, that's it. Where is the cleaning lemma used? Cleaning is just used in the first step; without that step, this tripartition would not have made sense. What did I prove in the second step? This. Well, more importantly, this. Exactly. So let's exploit the idea in the second step. Because the dimension of the reference system R is 2 to the k, the entropy of R is k. No, I did not put any restriction on the size of C; C is just possibly empty, but it's an allowed tripartition. And the introduction of the maximally entangled state with the reference system is purely for the sake of argument; it's not given in the problem. But by introducing the reference system, I was able to relate the code space dimension to an entropy. That's the step. So, the cleaning lemma was important in the first step, but I find the idea in the second step more interesting: you can bound the dimension of the code space by the entropy of some region that is the complement of two correctable regions.
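For completeness, the skipped manipulation can be written out; a sketch in the notation above (A, C, B the tripartition, R the reference, S(R) = k):

```latex
% From I(A:R) = 0, I(B:R) = 0, and purity of the global state:
%   S(AR) = S(A) + S(R) = S(A) + k,  and  S(AR) = S(BC)  (pure state),
%   S(BR) = S(B) + S(R) = S(B) + k,  and  S(BR) = S(AC).
% Subadditivity, S(XY) \le S(X) + S(Y), then gives
\begin{align*}
  S(A) + k = S(BC) &\le S(B) + S(C),\\
  S(B) + k = S(AC) &\le S(A) + S(C).
\end{align*}
% Adding the two lines and cancelling S(A) + S(B):
%   2k \le 2\,S(C), \quad\text{i.e.}\quad k \le S(C) \le |C| = n - 2(d-1).
```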
So let's imagine that now we have a torus, the two-torus. And I divide the system into pieces. It's a torus, so it's periodically identified. A1 is a disk-like region, A2 is another disk-like region, then B1 and B2, and C has four components. I leave it to you as a joyful exercise to picture this on the two-torus embedded in three-space. So I just divided my two-torus into regions like this. And let me introduce an interesting assumption. You may wonder why I assume this, but let me just assume it anyway. Take a region M and take a small r-neighborhood of M. Now we're talking about a metric space; there is an underlying metric, and the torus comes with a natural one. So you literally collect all the points whose distance from M is r or less. Let me assume that whenever this thickened M is homeomorphic to a disk, M itself is correctable. You may think it is too strong, and we can argue about this assumption. Then what happens? A1: if you thicken it a little, it will creep out like this; this is the r-neighborhood of A1, and that's a disk. So A1 is correctable, and similarly A2, B1, B2: all the A's and B's are correctable individually. Now let me assume one thing more. Suppose my code is defined by local stabilizer generators, meaning that every generator of my stabilizer group is supported on a disk-like region of diameter, say, r, where r is some fixed number. I'm talking about the torus, and you may imagine the toric code. Where did I put the eraser? Now let's back up a little. I want to exploit the fact that my code has a locally generated stabilizer group, and the consequences of that. So my system is roughly like this, where A1 and A2 are individually correctable. Now I ask: what about the union?
I claim that if they are sufficiently far separated, then the union is also correctable. Why? If there were a logical operator P supported on the union, it is a Pauli operator, so it factorizes into tensor components supported here and there. Now, the assumption that P is a logical operator means that it commutes with every generator of my stabilizer group. But where does a generator live? A generator can see only the A1 piece or only the A2 piece, never both simultaneously. So the commutation relations must hold individually: the component on A1 must be a logical operator, and the component on A2 must be a logical operator as well. But we know by assumption, by the thickening assumption, that A1 is correctable, so that component is trivial; A2 is correctable, so that component is trivial. Now P was the product of two trivial operators, so P has to be trivial as well. So the union is correctable whenever the pieces are separated, whenever no single generator of your stabilizer group acts on both. So A1 ∪ A2 is correctable, and B1 ∪ B2 is correctable. Now recall the earlier inequality: in step two I used nothing but the fact that A and B are both correctable. Here I have two subsystems that are each correctable, so the code space dimension, the number of logical qubits, is bounded by the number of qubits in C, which happens to be a four-component set. So what's the size of this C? What was the requirement when I introduced C? It was needed because I want to take the fattened versions of the A's, and as long as they do not intrude into the other parts, we're fine. So each component of C only has to have size of order r, the diameter of the stabilizer generators. So if that size is constant, then you can only encode a constant number of logical qubits, no matter how large your torus is. Remember the two conditions: I assumed correctability based on the topology of the region you're considering.
The slightly fattened region should be a ball. And I also assumed the locality of the stabilizer generators. Let's do another example; actually, there is time. I heard that there are some people in the audience who are studying quantum topology, and this example might interest you. Let's consider a sphere. The argument will work in arbitrary dimension, but for the sake of concreteness let me just focus on the two-sphere. Consider the northern hemisphere. Topologically this is a disk, and its r-neighborhood is also a disk, so the northern hemisphere is correctable. What about the southern hemisphere? The same thing. So both hemispheres are correctable, and I have managed to divide the whole system into two correctable regions, leaving nothing behind. So under those two conditions, that your correctable regions inherit the topology of the regions, and that the code is locally generated, the number of encoded logical qubits on the sphere is zero: you cannot encode any qubit, and the code space has to be a unique state. No, where did I use C? Well, A and B didn't need any space in between: the role of C was to separate the different components of A, that's it. But on the sphere I was able to use just one disk for A and another disk for B. So sure. Number two, it's an assumption. I mean, it is partly motivated by the lazy person's construction of local codes: you want some homogeneity, you don't want to introduce too much data into one construction, and one convenient construction will lead you to such a condition. The second condition is the definition of local codes for this hour. Yes? Oh, well, maybe it's not standard notation, but I literally mean some metric space: you collect the points whose distance from M is less than or equal to r. So this would be a disk; but if my M was like this, then it will be something like this; if my region was like this, then it will be a disk. A disk is anything that is homeomorphic to the literal disk.
A region does not generate a code. A region is part of the data, well, part of what you want to investigate when you analyze a code. The code is defined by the stabilizer group, for which you have to specify the generators. No, a region is a subset of the underlying space, whichever space you are considering: here the underlying space was a torus with a metric, there it was a two-sphere with a metric. It is not directly related. The code space is another construction on top of that underlying space; it's extra data.