Okay, thank you so much. It's a great pleasure to be here, and I also thank the theoretical sciences visiting program for making this visit possible. Today I'll tell you about a subject that I got into relatively recently. I've been thinking about error-correcting codes for maybe three to four years, and the quantum part is more recent; this will in fact be my first talk on quantum codes. I tried a version of this talk on my wife two days ago and she almost divorced me. She didn't like my presentation at all. She said, you have to simplify this, it's too much, I can't understand it. So this is the second version that I prepared. I don't know if I did a good job or not, but instead of starting the easy way, as in the first version, I decided to start with the more difficult part first; hopefully it will get easier, and then it will get complicated again. I hope I'll do a good job. So let me start with one of the technical parts of my talk. Posets are already in the title of my presentation. A poset is short for partially ordered set: it's a set with an ordering on it. We order the elements of this set P, and there are three axioms that make this ordering meaningful. Obviously we order things all the time; for example, we look at our kids and like some of them better than the others. A joke on the side. We want the following conditions. Reflexivity: every element should be less than or equal to itself. Antisymmetry: if an element is less than or equal to another element, and that other element is less than or equal to the first, they had better be equal; this is a natural thing to ask for. And transitivity: if a is less than or equal to b and b is less than or equal to c, then a should be less than or equal to c. We see examples of posets in our daily lives.
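The three axioms just listed can be checked mechanically. Here is a small sketch of my own (not part of the talk's slides) that tests reflexivity, antisymmetry, and transitivity for a relation given as a predicate; the divisibility example is an assumption I chose for illustration.

```python
from itertools import product

def is_partial_order(elements, leq):
    """Check the three poset axioms for a relation given as a predicate leq(a, b)."""
    reflexive = all(leq(a, a) for a in elements)
    antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                        for a, b in product(elements, repeat=2))
    transitive = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                     for a, b, c in product(elements, repeat=3))
    return reflexive and antisymmetric and transitive

# Divisibility orders the set {1, 2, 3, 6}: 1 divides everything, 2 and 3 divide 6.
print(is_partial_order([1, 2, 3, 6], lambda a, b: b % a == 0))  # True
# Strict "less than" is not reflexive, hence not a partial order.
print(is_partial_order([1, 2, 3], lambda a, b: a < b))          # False
```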
For example, ancestry: our ancestors had kids, those kids had kids, and suddenly a tree grows and branches out, but essentially it's a partially ordered set; the smallest elements of this partially ordered set are Adam and Eve. So it's a funny thing. A poset is a simple mathematical object. However, and it still amazes me, the number of partially ordered sets on n elements is still a big open problem in combinatorics; the answer is not known. So let me give you some examples of posets, represented by diagrams; it's easier to remember them that way. Finite posets are easily recorded by their so-called Hasse diagrams. Let's look at the small cases. For a poset with one element, of course, there is only one node in the diagram. For a two-element set, there are only two inequivalent poset structures: in the first, one element is less than the other; in the second, the two elements are incomparable. If you look at poset structures on a three-element set, there are five of them, and these are their Hasse diagrams; for n equal to four there are 16, and the number grows fairly quickly. All right. So now let's move closer to computer science. There is an analogy: to be able to drive a car, you don't need to be a mechanic, and just like that, to be able to use a computer, you don't need to be a computer scientist. However, it's still good to know how computers are structured and how they work. We learn as undergraduates that a typical computer has three basic units. The first unit is its brain, the CPU, the central processing unit. Then there is the main memory unit. The third unit consists of input and output controllers: screens, mice, keyboards, and so on. To support the brain of our computer, we have smaller memories called random access memories, RAMs, and there are many types of RAM in our computers.
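The counts just quoted (1, 2, 5, 16 posets on one to four elements, up to isomorphism) can be verified by brute force. This is my own illustrative sketch, not from the talk: it enumerates all reflexive relations, keeps the partial orders, and identifies isomorphic ones via a canonical relabeling.

```python
from itertools import permutations, product

def count_posets(n):
    """Count partial orders on an n-element set up to isomorphism (brute force)."""
    elems = range(n)
    pairs = [(a, b) for a, b in product(elems, repeat=2) if a != b]
    seen = set()
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, b in zip(pairs, bits) if b} | {(a, a) for a in elems}
        # antisymmetry and transitivity (reflexivity holds by construction)
        if any((b, a) in rel for (a, b) in rel if a != b):
            continue
        if any((a, c) not in rel for (a, b) in rel for (b2, c) in rel if b == b2):
            continue
        # canonical form under relabeling identifies isomorphic posets
        canon = min(tuple(sorted((perm[a], perm[b]) for (a, b) in rel))
                    for perm in permutations(elems))
        seen.add(canon)
    return len(seen)

print([count_posets(n) for n in range(1, 5)])  # [1, 2, 5, 16]
```

Beyond a handful of elements this blows up quickly, which is one way to appreciate why the general counting problem is hard.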
One type of RAM, called dynamic random access memory, DRAM, is essentially responsible for the very fast performance of our computers. The way it works is that it stores information on electric capacitors, which makes it very fast to move small pieces of memory back and forth instead of saving them to the main memory unit. But since everything is stored electrically on capacitors, DRAM is very sensitive to background radiation and other external factors. And, as you can imagine, if you pull the plug, the RAM loses its contents; it's not like the main storage unit of your computer. One other interesting thing about DRAM that I recently learned while preparing this lecture is that there is a big industry around it: the market size for DRAM in 2023 was over 100 billion US dollars, so companies are investing quite a bit in DRAM production. Going back to sensitivity to background radiation, there is a very old report from IBM from the 1990s. It mentioned that RAM typically experiences one cosmic-ray-induced error per 256 megabytes of RAM per month. And that was in the 1990s; of course, with modern technology the chips are much smaller, so this error rate has actually increased, not decreased. Now, another fact is that radiation-related errors increase as you go to higher altitudes. In fact, compared to sea level, if you go up 10 to 12 kilometers, the average neutron flux increases about 300 times, which means that computers at high altitudes experience more errors than at sea level. And this is even worse for quantum computers, because, while there are many different ways of building quantum computers, one way is to use superconducting circuits to store qubits, and these circuits are even more sensitive to radiation. So even at sea level, quantum computers are more error-prone than regular DRAM. This motivates our discussion.
Now I would like to get into the error-correction business. I was looking for good analogies to explain what error correction is about, and while searching Math Educators Stack Exchange, I came across one person's comments, which are very nice. He suggested the following analogies for introducing error correction to students. First, teaching itself: as teachers, academicians, we try to explain things to our students, but we often give more examples than necessary, often redundant, thinking that some of the information doesn't reach the students. So there is a lesson to be learned from providing excess information. Another place is love letters: lovers often struggle to express their feelings, so they say more than necessary about themselves; again, this is repeating something. And finally, we see the main idea of error correction in mafia movies. The boss says: Fredo, go take care of Antonio Balucci. I want him iced. You know, accidents happen. Make him sleep with the fishes. The point here is that the message carries much more information than needed, in the hope that Fredo catches what is intended. And this is really the main idea of error correction: we add redundancy to the message and then transmit it, and the other end corrects errors by looking at the redundant information. More diagrammatically, this is how it is done. We have a source for our message; we encode it, send it through a communication channel, the decoder receives it and decodes it, and the receiver gets it. For example, say our message is xxxyxyxy. Our encoder encodes it into a sequence of bits and transmits it through a noisy channel; some of the bits are flipped, and we receive a wrong sequence. Then the receiver has to figure out what the original message was. All right. So let's assume that our receiver knows the set of all possible codewords that it may possibly receive. Here we have our lookup table.
We will call this a code. In our lookup table we see all these strings, and most of the received strings are in the lookup table; however, one of them is not. So the receiver quickly learns that an error occurred during transmission. This is called error detection. Now, although you can build codes that do not rely on any external structure, it is better to have some algebraic structure on your code so that you can manipulate it faster to detect and correct errors. So from now on we will focus on codes, lookup tables, that carry some additional algebraic structure. By the way, in case you forgot what vector spaces are, let me quickly refresh your memory. A vector space is a collection of vectors, all with the same number of coordinates. In calculus we learn vector spaces over the real numbers: we can add two vectors, subtract two vectors, or scale them by real numbers. In our error-correction business, our alphabet does not consist of real numbers but of discrete objects, so I will focus on discrete objects that behave like real numbers. They are called finite fields, and I'll discuss vector spaces over finite fields. Zero and one are the elements of the most basic finite field, which I will denote by F2. F2 as a set has two elements, zero and one, and we do modular arithmetic on this set: we add elements of F2 according to one table and multiply them according to another. More generally, we can consider a finite field with p elements, where p is a prime number. Fp consists of the numbers from zero to p minus one, with addition and multiplication operations similar to those two tables. And if we want to work with fields whose number of elements is not prime, we can only do it at the expense of taking a power of a prime number. This is a mathematical fact.
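Arithmetic in the prime field Fp is plain modular arithmetic, and a small sketch makes the tables concrete. This is my own illustration, not the talk's; I picked p = 5 arbitrarily, and note that prime-power fields like F4 need the polynomial-quotient construction mentioned next, which this sketch does not cover.

```python
p = 5  # any prime gives a finite field F_p = {0, 1, ..., p-1}

def add(a, b):
    return (a + b) % p

def mul(a, b):
    return (a * b) % p

def inv(a):
    """Multiplicative inverse in F_p via Fermat's little theorem: a^(p-2) mod p."""
    assert a % p != 0, "zero has no inverse"
    return pow(a, p - 2, p)

print(add(3, 4))       # 2, since 7 mod 5 = 2
print(mul(3, 4))       # 2, since 12 mod 5 = 2
print(mul(3, inv(3)))  # 1, so every non-zero element is invertible
```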
If you want to invert elements of a finite field multiplicatively, you have to stay in this range of prime-power sizes. Finite fields with q elements, where q is a prime power, are obtained as quotients of polynomial rings. Actually, this is not as abstract as it sounds. If you want to picture the non-zero elements of a finite field with q elements, you can look at the unit circle in the complex plane: the non-zero elements of the field behave essentially like complex roots of unity. For instance, if I want to study the finite field with 25 elements, its non-zero elements correspond precisely to the 24th roots of unity, scattered uniformly around the unit circle. And I can multiply these numbers using complex multiplication, but addition is more complicated; that's what makes it more technical. However, we can do the same arithmetic that we do with the real numbers: add, multiply, divide, subtract. All right. So what's a linear code? A linear code is simply a vector subspace of an n-dimensional vector space over our field, just like the vector spaces in calculus: in three-space we can take planes or lines, except that here our three-space might be a three-dimensional vector space over a finite field. Now, it's good to know linear algebra, because it tells us that you can translate everything into simple matrix algebra. Instead of working with geometric, perhaps counter-intuitive, higher-dimensional vector spaces, you can translate everything into matrices and use simple matrix arithmetic. And we learn in linear algebra that if you take a k-dimensional subspace of an n-dimensional vector space, you can write it as the zero set of a matrix multiplication.
So in particular, every code that is a k-dimensional subspace of an n-dimensional vector space can be written as the set of vectors that become zero upon multiplication by an (n minus k) by n matrix. For every code C there is such a matrix H, and C is uniquely determined by this matrix H. This matrix H is called the parity check matrix. Why is it called that? Because when you receive a message, you would like to check quickly whether the received vector is in your lookup list or not. The naive way of doing this is to take an element of your lookup list, compare it with the received message, and if they are not equal, move on to the next vector in the list, and so on; that's a very costly algorithm. Instead, you can just work with the single matrix H: multiply your received word by H, and if the result is zero, the word is in your code; if it is not zero, there is an error. So H gives a very quick way of checking whether a received word is in the lookup list or not. All right. This is very useful, and it is one point where linearity plays an important role: we can use matrices to check whether received messages are in our lookup list. But there is more. I wanted to give an example here, a typical code over a finite field with four elements. So I'm going to look at F4, which has two-to-the-two elements. As I mentioned before, the non-zero elements of our finite field live on the unit circle; here they are precisely the third roots of unity. You can write them in terms of cosines and sines, and you can multiply these primitive roots of unity: take the square of one and you get its complex conjugate. That's essentially the multiplication; to add them, you use this table. All right. So now our code; this is one code that you can build over the finite field with four elements.
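The syndrome check just described (multiply the received word by H and test for zero) can be sketched in code. As a simpler stand-in for the talk's F4 example, this sketch of mine uses the binary [7,4] Hamming code, whose parity check matrix is an assumption I bring in because its columns conveniently name the error position.

```python
# Parity-check matrix of the binary [7,4] Hamming code: column i (1-based)
# is the binary expansion of i, so a single-bit error's syndrome names its position.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    """H times word over F_2; the zero syndrome means word is in the code."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

codeword = [1, 1, 1, 0, 0, 0, 0]
print(syndrome(codeword))             # (0, 0, 0): the word is in the lookup table
received = codeword[:]
received[4] ^= 1                      # flip bit 5 in transit
s = syndrome(received)
print(s)                              # (1, 0, 1): non-zero, so an error is detected
print(int("".join(map(str, s)), 2))   # 5: the flipped position, read off the syndrome
```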
So I'm going to consider vectors of length six, and this code will have this particular matrix as its parity check matrix. Now, once you know the parity check matrix, you can quickly find the generators: the generators of our code C are given by the rows of this other matrix G. Once you know a generating set, you can determine how many elements this code, this lookup table, has. Since G has three linearly independent rows, our code has four to the three, that is 64, codewords. It turns out that with this particular code, you can detect and correct single-digit errors in the sent message, and I would like to explain how this is done. So I move on to the next notion; here metrics, or distances, will play an important role. What's a metric? A metric is a way of formalizing the notion of a distance, and we can use metrics to study convergence, continuity, boundedness, the topology of our underlying space. In this context the most important, well, I shouldn't say the most important; the whole point of my talk is that this particular metric is not the only important one, there are many other metrics to study. So, there is one basic distance in coding theory, called the Hamming distance. The Hamming distance, or Hamming metric, is defined as follows: take two vectors of equal length; the distance between these two vectors is the number of coordinates where their entries differ. Then we can define the weight of a single vector as simply the number of its non-zero coordinates; this is essentially the distance from the vector to the origin. Okay. Now that we have a notion of distance for codes over finite fields, we can talk about balls and spheres, some geometric objects. The packing radius of a code is the largest integer such that the balls of that radius centered at the codewords are all pairwise disjoint.
Remember, C lives in an n-dimensional vector space but is k-dimensional, meaning that C has far fewer vectors than the total number of vectors in the big vector space. I put balls of equal radius around each of the vectors in C, and I blow up those balls until they are about to touch each other; as soon as they touch, I stop, and the largest integer radius before they touch is my packing radius. I'm packing the space with these balls centered at the vectors of C. Let me give you an example. I'm looking at a two-dimensional vector space, Fq squared, and the brown nodes represent the vectors of my code. I put balls around my codewords; see, these are unit balls. If I take balls of radius two centered at the codewords, they are going to touch each other and have a common point, but I don't want that; I want them mutually disjoint. So one is the largest integer radius such that all balls of that radius around codewords are disjoint, and the packing radius for this particular code is one. Now let's say we send a message using this code, and we send three vectors. Two of them land right on codewords of C. But one of them lands inside a unit ball, not quite on a codeword, outside C. Then our decoder realizes that there is an error, because this vector is not in the code C. How do we fix this error? We simply send it to the nearest codeword, and our error is corrected by the nearest-neighbor decoding algorithm. All right. So, as I said, the packing radius is the largest integer radius such that all balls centered at codewords are disjoint, and it is closely related to the minimum of all distances between elements of C. If you stare at this picture, you will see that here the minimum distance is going to be one, two, three, four.
For this example, the minimum distance between codewords is four: you go this way four units, or that way four units. So this is the minimum distance between two codewords in C, and the packing radius is simply the floor of (minimum distance minus one) over two. Indeed, here we had four; four minus one is three, three over two is one and a half, and its floor is one. All right. So now we can state this theorem-definition. Take a code in an n-dimensional vector space; our code has dimension k, and say the minimum distance between codewords of C is d. Then C can detect and correct at least floor of (d minus one) over two errors, just as in this example, because if an erroneous message falls into a ball of packing radius, then we can fix the error by simply sending the wrong word to the center of that ball. If the minimum distance is d, then we say that our code is an [n, k, d] code. These parameters n, k, d are related to each other, and this is one of the most fundamental theorems of error correction, known as the Singleton bound. In fact, this theorem works for not necessarily linear codes as well: if you take a code in an n-dimensional vector space, then the relationship between n, k, and the minimum distance d is given by this inequality. In particular, for a linear [n, k, d] code, k is less than or equal to n minus d plus one. It's a very fundamental inequality, very easy to prove. And as an immediate corollary, once you have stared at this inequality for a while, you ask yourself: are the codes that make this inequality an equality important? That's a very basic question to ask, and the answer is yes. These codes are called maximum distance separable, meaning that once you fix n and k, d is maximized. And the bigger the minimum distance, the more errors you can correct. Right?
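The arithmetic just done (minimum distance four, packing radius floor of (4 − 1)/2 = 1) is easy to reproduce. A minimal sketch of my own, using a toy two-word binary code as a stand-in for the picture in the slides:

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of coordinates where the two vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    """Minimum of all pairwise distances between distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

# A toy code over F_2 whose two codewords are at distance 4, as in the example.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
d = minimum_distance(code)
packing_radius = (d - 1) // 2   # floor((d - 1) / 2)
print(d, packing_radius)        # 4 1
```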
So obviously, you want this inequality to be an equality, to get, so to speak, better codes. All right? We will call such codes MDS codes. The most basic example of an MDS code was discovered in 1960 by Reed and Solomon, hence the name: they are called Reed-Solomon codes, and they are MDS codes. They are essentially obtained by taking polynomials of degree at most k minus one and evaluating them at the non-zero elements of our underlying field. We simply pick a generator alpha of the non-zero elements of our field, take any polynomial of degree at most k minus one, and evaluate that polynomial at the powers of alpha. You get a vector here (I forgot to close this parenthesis), a vector of length q minus one. Then you get the so-called Reed-Solomon codes. I must mention that Reed-Solomon codes were used in deep-space communication; in particular, Voyager 1 used Reed-Solomon codes for correcting communication errors, and at the end I will show some pictures that Voyager sent, using these Reed-Solomon codes to correct errors during the information transfer. All right. So now we move on to our next metric. So far we talked about the Hamming metric; now I would like to tell you about poset metrics. This is a silent revolution, in my opinion. This business started around the early 90s, but in 1995 it was made precise by Brualdi, Graves, and Lawrence. So let me just get down to the theory instead of fooling around. To define a poset metric, I fix a poset. Then, given a vector in my vector space, I would like to define the weight of that vector with respect to the poset. How do I do this? Let's focus on this example. This is my poset, given by its Hasse diagram. Then I receive this codeword c. I look at the non-zero entries of the codeword; they occur in the fifth and sixth positions. So I look at my poset and pick the fifth and sixth vertices.
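The Reed-Solomon construction just described (evaluate every polynomial of degree less than k at the powers of a generator alpha) fits in a few lines. This is my own sketch under assumed parameters: I take the toy field F7, where alpha = 3 generates the non-zero elements, and k = 3; the talk works over a general Fq.

```python
from itertools import combinations, product

p = 7        # toy prime field F_7; the talk allows any finite field F_q
alpha = 3    # 3 generates the non-zero elements of F_7: 3, 2, 6, 4, 5, 1
points = [pow(alpha, i, p) for i in range(1, p)]   # alpha^1, ..., alpha^(p-1)
n, k = len(points), 3                              # length q - 1, dimension k

def rs_encode(coeffs):
    """Evaluate the polynomial with coefficients `coeffs` (degree < k) at the points."""
    return tuple(sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
                 for x in points)

code = [rs_encode(msg) for msg in product(range(p), repeat=k)]
d = min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2))
print(len(code), d, n - k + 1)   # 343 codewords; d = 4 = n - k + 1, so MDS
```

The brute-force check at the end confirms the Singleton bound is met with equality: two distinct polynomials of degree less than k agree on at most k − 1 points, so codewords differ in at least n − k + 1 positions.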
Then I take every element of the poset that is less than or equal to these two; this is the whole red-marked region. Okay. So this is the support of the codeword in this poset, generated by my non-zero coordinates, the fifth and sixth. Now, this is an order ideal, meaning that if I take anything in this set, everything below it is also in the set; the poset support is essentially an order ideal in my poset. Then I take its size as my weight: the weight of my vector c is simply the size of the order ideal generated, in the poset, by the indices of the non-zero entries of the vector. This is closely related to the Hamming weight, in the following sense: if I want to recover the Hamming weight from a poset, all I need to do is take an antichain, the poset where no two elements are comparable; then the poset support agrees with the ordinary support of a vector. So the Hamming weight is a special case of the poset weight. All right. So now I can define my poset metric: the distance between two vectors is the poset weight of their difference. This defines a metric. A linear code together with this new metric, determined by the poset P, will be called a P-code. Here's an example. Let's look at the extended binary Hamming code, H3 hat. It's the unique code with these parameters: the vectors have length eight, the dimension of H3 hat is four, and the minimum distance is four. Now you can work out the vectors in this code; you find these 16 vectors. If you use the Hamming weight, you find that there is only one codeword of weight zero, namely the zero vector. Then you just look at these vectors and calculate their Hamming weights: you find that 14 of them have weight four, and there is one vector of weight eight, the all-ones vector. All right.
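The poset weight computation just described (take the support, close it downward into an order ideal, count) can be sketched directly. This is my own illustration: the six-element poset below, given by hypothetical cover relations, is an assumption standing in for the Hasse diagram on the slide.

```python
def order_ideal(covers, generators):
    """All elements <= some generator, where covers[x] lists the elements directly below x."""
    ideal, stack = set(), list(generators)
    while stack:
        x = stack.pop()
        if x not in ideal:
            ideal.add(x)
            stack.extend(covers.get(x, []))
    return ideal

def poset_weight(covers, vector):
    """Size of the order ideal generated by the indices of the non-zero entries."""
    support = [i + 1 for i, c in enumerate(vector) if c != 0]
    return len(order_ideal(covers, support))

# A hypothetical 6-element poset (Hasse diagram as cover relations):
# 5 sits above 1 and 2; 6 sits above 3 and 4.
covers = {5: [1, 2], 6: [3, 4]}
v = (0, 0, 0, 0, 1, 1)          # non-zero in positions 5 and 6
print(poset_weight(covers, v))  # 6: the ideal generated by {5, 6} is everything

# On an antichain (no relations at all) the poset weight is the Hamming weight.
print(poset_weight({}, v))      # 2
```

The last line is the remark from the talk: an antichain recovers the ordinary Hamming weight as a special case.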
So this is the weight-generating function of my extended binary Hamming code with respect to the Hamming weight. What happens if I choose other posets? If I choose the star poset, the weight-generating function is this: there is one codeword of weight zero, again the zero vector, and then there are, sorry, seven codewords of weight four, seven codewords of weight five, and one codeword of weight eight. If I use this poset, a total order, then suddenly I find that there is one vector of weight five, two vectors of weight six, four vectors of weight seven, and eight vectors of weight eight. So in this particular example, the minimum distance has increased. In 2008, Hyun and Kim discovered that there is an analogue of the Singleton bound for poset metrics, and they proved this inequality: if you look at the minimum distance with respect to a poset P, it satisfies a similar inequality, and if this inequality is an equality, then we call the code an MDS P-code. What's striking here is that Hyun and Kim were able to show the following theorem: give me any old code C with parameters n and k, and I can give you a poset P for which your old crappy code becomes an MDS code with respect to the poset metric. This is quite remarkable, because if you care about MDS codes, this is the way to go; it tells you that you can produce a lot of MDS codes using posets. And indeed, our extended binary Hamming code, with parameters [8, 4, 4], is not an MDS code with respect to the Hamming weight: remember, the Singleton bound says d is at most n minus k plus 1; here d is 4, while 8 minus 4 plus 1 is 5, and 5 is not 4. But if I use the chain, the total order, I know the minimum distance is 5, and the code becomes MDS. Okay, now I would like to apply this theory to quantum error correction. I'm no physicist. I'm sorry.
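The Hamming weight distribution of the extended binary Hamming code quoted above (1 + 14 x^4 + x^8) can be checked by enumerating the 16 codewords. A sketch of my own; the particular generator matrix G below is one standard choice I'm assuming, though the [8,4,4] code itself is unique.

```python
from itertools import product
from collections import Counter

# A generator matrix for the extended binary Hamming code \hat{H}_3, an [8,4,4] code.
G = [(1, 0, 0, 0, 0, 1, 1, 1),
     (0, 1, 0, 0, 1, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 0, 1),
     (0, 0, 0, 1, 1, 1, 1, 0)]

# All 2^4 = 16 codewords: F_2-linear combinations of the rows of G.
codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
             for msg in product([0, 1], repeat=4)]

distribution = Counter(sum(c) for c in codewords)  # Hamming weight distribution
print(sorted(distribution.items()))   # [(0, 1), (4, 14), (8, 1)]
```

Replacing `sum(c)` with a poset weight function would reproduce the star-poset and total-order enumerators from the slide as well.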
But I'm working with these objects, and they seem natural from a mathematical viewpoint. So for me, the binary states 0 and 1 of classical computers are replaced by quantum states, which are simply complex column vectors. And then there's this funny bracket notation, which is very helpful: I denote the first unit basis vector by ket 0 and the other column vector by ket 1. Then when I write 1, 0, 1, 1, 0, 0, 1 inside a ket, I mean the tensor product of the corresponding column vectors. Okay, so what's happening here? What's happening is that I'm basically using my binary alphabet 0, 1 to label certain basis vectors of the complex plane and then encoding their tensor products efficiently. That's how I view it. Of course, there are better physical interpretations of this notation and how it is used. In particular, if I look at the n-fold tensor product of C to the q with itself, I can take a basis for C to the q indexed by a finite field: C to the q can be viewed as essentially the set of all functions from Fq to the complex numbers, and for every finite field element I get one basis vector. So I can create a basis for C to the q using finite fields, and then I can take tensor products. Okay, that's the big deal. So now I will define a quantum [[n, K]] code as simply a K-dimensional subspace of this huge vector space. This is not an ordinary product but a tensor product, so dimensions are not added but multiplied: the dimension of this big tensor-product vector space is q to the n. So I have a q-to-the-n-dimensional ambient vector space, and I'm taking a capital-K-dimensional subspace of it. A priori, K is going to be some power of q or something like that, so K is going to be big as well. And here, let me say it quickly: I care more about error operators than about errors themselves.
The reason is that I cannot repeat my classical methods in this setup, essentially because of the tensor product. There is a deeper reason why I cannot use classical coding theory here: the no-cloning theorem. I cannot simply copy information and repeat it across tensor factors; this very basic observation, the no-cloning theorem, prevents me from adding redundancy the way we did in the classical case. So instead of studying codewords themselves, we study the error operators that act on the codes. Okay, I'm butchering this subject, but this is the easiest way to explain the mathematics behind it. So now let's look at the error operators. For each coordinate of this tensor product, I have several error operators, which I denote by X's and Z's. The X error operators act by adding a finite field element to the index: X behaves like a translation of the indexing data. The Z operators behave like scaling in the indices, scaling by an appropriate root of unity. These are my two kinds of error operators on one coordinate, one tensor factor, of my big tensor product. Here's a worked-out example for my four-element field: I can define these particular error operators this way. And by the way, if you get too bored, you can start multiplying these things together, and you will realize that they actually generate a finite group. This can be extended to n tensor factors as well: what we do is simply take tensor products of our error operators, and then we can do the same multiplication tensor-factor-wise; we can multiply X_u with Z_u. But when we apply the X's and Z's, we don't need to operate with the same indexing vector: we can change our indexing vectors to a and b, and define this for tensor products of error operators. So we get these tensor products of error operators.
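For the smallest case q = 2 (qubits rather than the talk's four-element field), the X and Z operators on one tensor factor are concrete 2-by-2 matrices, and one can verify by hand that they commute up to a root of unity and generate a finite group. A sketch of my own for this special case:

```python
# Qubit (q = 2) error operators on one tensor factor:
# X translates the index (a bit flip), Z scales the |1> component by the
# 2nd root of unity, -1 (a phase flip).
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

# X and Z commute only up to the scalar -1: XZ = -ZX.
print(matmul(X, Z) == scale(-1, matmul(Z, X)))  # True
# Each has order 2, so together with the scalars they generate a finite group.
print(matmul(X, X) == I, matmul(Z, Z) == I)     # True True
```

For general q the same picture holds with q-by-q translation matrices and scalings by q-th roots of unity.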
This is typically called a nice error basis, and as I said, it generates a finite group, typically denoted by G sub n. If you forget the scaling factors, it's not a group; if you include the scaling, it becomes a group. We will call G_n an error group; it's a p-group, meaning that every element has order a power of p. It is not an abelian group, but it turns out that all the subgroups of G_n of interest to us are abelian. All right, stabilizer codes. Now we are ready to discuss quantum codes in more detail. So let's take a finite subgroup S of our error group, and look at the vectors fixed under this subgroup: all elements of this huge vector space that are fixed by every element of S. This fixed set has the structure of a complex vector space; call it Q, and Q is called a stabilizer code. By the way, it's very interesting: if this fixed space is to be non-trivial, then it turns out S must be an abelian group that meets the center of the full error group trivially. Okay, so these stabilizer codes do not capture all quantum codes; they are very special. But it turns out that for any quantum code, you can find a stabilizer code that contains it, at the expense of the minimum distance becoming slightly smaller. In fact, the stabilizer code attached to an arbitrary quantum code is obtained by taking the fixed-point set of the stabilizer subgroup of the original quantum code. Okay, so now, just as in ordinary error correction, there is a notion of distance, so to speak, or of weights of errors, and this is called the symplectic weight of the error. Take an error operator g, living in S; its symplectic weight is going to be n minus the number of coordinates of g that act as the identity on the transmitted message. So you can see how the poset metric will come in here. But first let me mention the quantum Singleton bound.
So we are going to say that a quantum code has minimum distance d if the code detects all error operators of symplectic weight less than d, but cannot detect some error of symplectic weight d. This is our notion of minimum distance for error operators. Once you determine d, we say that our quantum code has parameters [[n, K, d]]; actually, I should restrict my attention to stabilizer codes here, but it's okay. Now, early in the development of the theory — quantum coding theory was started by Peter Shor in 1995, who discovered that even with the no-cloning theorem you can still build these things by analyzing the errors — shortly after his discovery in '95, Knill and Laflamme wrote a nice article in which they proved the quantum version of the Singleton bound. They showed that the parameters of an [[n, K, d]] stabilizer code satisfy an almost-Singleton inequality: there is a factor of two in front of d, and instead of adding one, you add two. If this inequality is an equality, then your stabilizer code is called a quantum MDS stabilizer code. Now, I would like to do this with posets. I define the poset weight of an error operator g to be the cardinality of the ideal of P generated by the indices of the non-identity tensor components of g. At first I thought I had forgotten an "n minus" in front, but no, it's correct as stated; I was getting myself confused, and there are no typos. Remember, the symplectic weight is n minus the number of tensor coordinates that are identity operators: a coordinate where both entries a_j and b_j are zero, so that both the X and the Z parts are trivial, acts as the identity. This is almost the Hamming weight, and I'm defining the poset version in the same way.
I'm taking the cardinality of the ideal generated by the indices of the nonzero entries, that is, the support. So it's an almost straightforward modification of the poset metric. Then I can define my stabilizer poset codes the same way the ordinary stabilizer quantum codes are defined. So I say that a given quantum code Q is a stabilizer poset code, a P-code, with parameters [[n, k, d]] if Q detects all errors in G_n of P-weight less than d, but cannot detect some error of P-weight d. All right, then there's a technical definition I'm going to skip for a second. Then there is an important result due to Calderbank, Rains, Shor, and Sloane. Basically, in the binary case, they found a relationship between quantum stabilizer codes and additive codes. Additive codes are not necessarily linear codes, but they behave almost like linear codes; basically, an additive code is simply a subgroup of a product of copies of a given abelian group. So what I wrote here is a version of this result of Calderbank, Rains, Shor, and Sloane. It says the following. Take the stabilizer group S of a stabilizer code. Now we can map the error group; this is done in the classical setup too, I'm just using the same map. There is a map from the error group to a 2n-dimensional vector space over our finite field. The image of S turns out to be an additive code, in fact a self-orthogonal additive code with respect to the symplectic trace product. Now, okay, so far everything is standard; I didn't say anything new. But then what happens is that I can also define a poset metric on additive codes, and it turns out that the poset metric on the additive codes agrees with the error weight that I defined in the previous slide. This is the lemma that gets me going with this theory. Then I can prove statements that are versions of the classical results: for example, I show a version of the Singleton bound using poset metrics for quantum stabilizer codes.
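Here is a hedged sketch of the poset weight itself, again in my own notation: a poset on the index set {0, ..., n-1} is given as a transitively closed set of strict relations, and the weight is the size of the order ideal generated by the support of the error. With the empty relation set (an antichain) it reduces to the ordinary symplectic weight:

```python
# Sketch (my helper names): the poset P on indices {0, ..., n-1} is given
# as a set of pairs (i, j) meaning i < j in P, assumed transitively closed.
# The P-weight of an error (a, b) is the size of the order ideal generated
# by its support {j : (a_j, b_j) != (0, 0)}.

def poset_weight(less_than, a, b):
    support = {j for j, pair in enumerate(zip(a, b)) if pair != (0, 0)}
    # Order ideal: the support together with everything below it in P.
    ideal = set(support)
    for (i, j) in less_than:
        if j in support:
            ideal.add(i)
    return len(ideal)

# Antichain (no relations): the P-weight is just the symplectic weight.
print(poset_weight(set(), [1, 0, 2, 0], [0, 0, 1, 0]))  # prints 2
# Chain 0 < 1 < 2 < 3: the ideal generated by {0, 2} is {0, 1, 2}.
chain = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)}
print(poset_weight(chain, [1, 0, 2, 0], [0, 0, 1, 0]))  # prints 3
```

The chain example shows how a nontrivial poset can only increase the weight, which is what drives the MDS results later in the talk.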
And another statement that I can prove is that if I take an MDS stabilizer P-code, then the map that I mentioned previously gives me two MDS codes in the image, and vice versa, in fact. So I can characterize MDS stabilizer quantum codes by studying MDS additive P-codes. And finally, there is this result of Hyun and Kim about producing a poset that makes a given code an MDS code, and I can do it for additive codes. Hence, I can show that given a stabilizer quantum code with some parameters, whatever its minimum distance is, I can find a poset that makes this quantum code an MDS stabilizer P-code. And I have some other results; I put them in the appendix, but I wanted to mention one more theorem that addresses an issue that David raised in his wonderful talk last week. He was skeptical about what we can achieve using quantum; he didn't refer to error correction in his talk, but he raised the question: what can we do with the quantum setup that we cannot do with the classical setup? In response to that, I thought about the following. The question is: to what extent can we recover stabilizer codes from classical linear codes instead of using these additive codes? It looks like there is a precise answer that builds on a result of Huffman from 2013, who counted the F_q-linear F_{q^t}-additive codes. He found that if they are self-orthogonal with respect to the trace-alternating form, their number is given by the formula shown here. So if I now insist on F_q-linearity in my setup, I can count the number of F_q-rational points of flag varieties, and this number is well known. Then I can compare these two counts to see how many of these self-orthogonal F_q-linear additive F_{q^2} codes come from linear codes. And it looks like the following holds: there is a certain quadratic polynomial.
If this quadratic polynomial has a root in this range, then for every sufficiently large prime power q, almost surely every stabilizer code can be obtained from a nested pair of classical codes with parameters [n, k1] and [n, k2], where the difference of k1 and k2 gives this k. Otherwise, there exists a stabilizer code which cannot be obtained from linear codes. So it looks like you can do a lot with linear codes, but not everything: there are quantum codes that cannot be obtained from linear codes. And this builds on the well-known CSS construction. So I think I should stop for now. Thank you. I just wanted to click on this link; I mentioned that these Reed-Solomon codes were used. Yeah, it's opening. I don't need all of these. Okay, yeah. So this is the Wikipedia page about Voyager 1, and here are some of the pictures it took near Jupiter and Saturn. And I believe Reed-Solomon codes were used for protecting the transmitted images against errors. So okay, this is the real end. Thank you. Grab a microphone. Okay. So I don't think I really understood exactly. In the classical case, you have the received codewords. Yes. And then you're using this poset ordering to assign some kind of distance, like how far the received bit string is from a codeword of the code, and then you're using this directly in your decoding, right? But in the quantum case, the poset ordering is on the error operators. Yes. But I can't see the error operator; all I can see is the syndrome from the stabilizer measurements. Okay. So how does this poset on the error operators inform the decoding? Yeah. So I go to the corresponding additive codeword. Okay. And that lemma that I had, I couldn't explain it very well, but that lemma says that the poset weight I put on the error operators can be translated to a poset weight on the additive codewords. Okay. But I also can't see the codewords of the quantum code, right?
All I have is the stabilizer measurements, unless I want to destructively measure every qubit. Correct. But this is the same as with Hamming. Can you do this with the Hamming weight, the ordinary symplectic weight? Yeah. So this is why, when you consider a decoder for a quantum code, it will correct up to some certain Hamming weight, and you have to show this in the proof, right? Sure. So I don't see how just defining... Yeah. So... Do you have examples of how defining a poset on the error operators gives you better decoding in some specific case? Yeah. So I just follow the proofs for the Hamming case and I produce versions of them. Sorry, let's see. Did I put these proofs in? Yeah. I can explain this maybe after this talk is over; we can sit down and I can try to find it. But the way that I proved it: literally, I took the quantum coding papers, especially the Ketkar-Klappenecker paper, and I looked at their proofs. And if I make these poset definitions, I can modify their arguments appropriately. Sometimes I had to come up with a new argument, but it just worked. Okay. So at least I can logically say that whatever I did is correct, but I'm not sure if I'm answering your question. Okay. Okay. Maybe... Yeah. So if what these people are doing is correct, mine is also correct. I'm doing something, but I'm not sure if I can address... Yeah. We have to maybe talk. Yeah. Sure. Okay. Thanks. Sure. Yes. So I have a question about the self-orthogonality stuff. You have D and also the D perp A or something? Yes. But I couldn't get the definition of the D perp A; which dual is that exactly? Yeah. So this is the alternating trace form, right here. That perp is the orthogonal, the dual of the code, taken with respect to this form. So it's specific. Yeah. So this form exists on F_{q^2}^n, and then I take a code D and I take its dual with respect to this inner product, and I denote it by perp A. Okay. Yeah.
Thank you very much. Sure. I don't know if it's forwards or backwards now, but can you go to that last slide you had before your appendices? Before the appendices, sorry. Oh, before the appendices, sure. Your results about these MDS additive codes: the way you phrase them is just "there exists". Are your proofs constructive? Can you construct them? Yes, yes, I can construct. And there are many posets, in fact, not just one; there are many posets that make a given code an MDS code. Then you can ask which of those MDS codes are better than the others, so there are new hierarchies showing up now. Any other questions, or any questions online? If not, then let's thank Mahir once more. Thank you. And I'll be at TSVP.