What I want to do in this part is to show you how this kind of tool, the lattice theta series, can be very fruitful for analyzing many communication systems, and also for designing new ones, as we will see. So the first topic we will consider is the use of these theta functions. In fact, I will use the terms theta series and theta functions depending on what we are doing with them; I will show you the small difference, but roughly speaking we can consider these two objects as being the same. First of all, they can be used to analyze the performance of communication systems, in coding, shaping and secrecy, where secrecy means physical layer security. And they can be very interesting for designing some very good coding or shaping schemes as well. Then, the second part: I will not unveil too much there, because we are also working in this area, so parts of it are still a little bit secret right now. But I will show you how theta functions can also be used as waveforms to transmit information over nonlinear integrable channels, for example the fiber-optic communication channel.

Okay, let's start with the analysis part, which is more related to coding. Let's start with a very simple function, the Jacobi theta function, and then we will see how to construct the whole family of theta functions we are interested in. We have this function theta, depending on two variables, z and tau:

θ(z, τ) = Σ_{k ∈ Z} exp(iπτk² + 2iπkz).

Sometimes I will use the tau variable, and sometimes the q variable instead; the relationship between the two is q = exp(iπτ), and in the q variable this series is just the Fourier development of the function. The other variable is z, which here is a scalar but, as we will see, can more generally be a vector. So z is what we call the elliptic variable, and tau is the modular variable. And what we will see is that in many cases, all the theta series we will consider can be constructed from just this one function. So if we are able to tabulate this function sufficiently accurately, then we can derive all the other theta functions of interest to us (not all of them, of course, but all the ones of interest) from this single Jacobi theta function.

The first connection we can make is through the sum of Gaussian measures, which will be very important, for example, in physical layer security, but also in some other topics, mainly multi-user communication using lattices. This famous sum of Gaussian measures is just a sum of Gaussian densities shifted by every integer k. Develop the exponential and you get the relation between the sum of Gaussian measures and the Jacobi theta function. And there is an analogous picture in two dimensions, where x, instead of being a scalar, is a two-dimensional vector.
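To make this concrete, here is a minimal numerical sketch in Python of that identity (the function names and truncations are mine, not from the talk): the folded Gaussian density equals a Gaussian prefactor times the Jacobi theta function evaluated at τ = i/(2πσ²) and z = −ix/(2πσ²).

```python
import numpy as np

def jacobi_theta(z, tau, K=60):
    """Jacobi theta: sum over k in Z of exp(i*pi*tau*k^2 + 2i*pi*k*z),
    truncated to |k| <= K; converges fast when Im(tau) > 0."""
    k = np.arange(-K, K + 1)
    return np.sum(np.exp(1j * np.pi * tau * k**2 + 2j * np.pi * k * z))

def folded_gaussian(x, sigma2, K=60):
    """Sum of Gaussian measures: Gaussians of variance sigma2 shifted by every integer."""
    k = np.arange(-K, K + 1)
    return np.sum(np.exp(-(x - k)**2 / (2 * sigma2))) / np.sqrt(2 * np.pi * sigma2)

x, sigma2 = 0.3, 0.3
tau = 1j / (2 * np.pi * sigma2)        # modular variable
z = -1j * x / (2 * np.pi * sigma2)     # elliptic variable
via_theta = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2) \
            * jacobi_theta(z, tau)
print(folded_gaussian(x, sigma2), via_theta.real)  # the two values agree
```

Expanding exp(−(x−k)²/(2σ²)) = exp(−x²/(2σ²)) · exp(−k²/(2σ²) + kx/σ²) and matching terms against the theta series is exactly the "develop the exponential" step mentioned above.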
This is the sum of Gaussian measures when σ² = 0.3, and you can see that when σ² = 0.6 it is already almost flat. Of course, if the variance increases further, it tends to the uniform density.

Okay, then let's go to the theta series of a lattice, starting with something very simple. What is a lattice in general? Consider an n-dimensional lattice; we restrict ourselves to full-rank lattices, that is, lattices that span the whole space rather than living in a proper subspace. A lattice is just a discrete additive subgroup of R^n. It can be defined by a basis, meaning n linearly independent vectors v1, v2, ..., vn, from which we define a generator matrix G whose columns are these vectors. The lattice is then the set of points Gu, where u ranges over all integer-valued vectors of dimension n.

Many parameters of interest can characterize a lattice, and one of them is its theta series. We will distinguish the one-variable and the two-variable versions. The one-variable theta series of Λ is just the generating function of the squared Euclidean norms of the points x in Λ:

Θ_Λ(τ) = Σ_{x ∈ Λ} q^{‖x‖²}, with, still, q = exp(iπτ).

For the two-variable theta series we add the vector variable z:

Θ_Λ(z, τ) = Σ_{x ∈ Λ} q^{‖x‖²} exp(2iπ⟨z, x⟩),

where ⟨z, x⟩ is the inner product. In what follows we will only need the one-variable theta series; at the end, when we consider transmission over a nonlinear channel, we will need the two-variable one, but I will remind you of it then.

Now, the one-variable theta series of the integer lattice Z is just the Jacobi theta function evaluated at z = 0, and it is also called θ3(τ): if you take z = 0, the second term in the exponential disappears and you are left with Σ_k q^{k²}, with k ranging over Z. So this is the first ingredient we need to construct all the other theta series. The second ingredient is the theta series of Z + 1/2. What is Z + 1/2? Just add one half to every integer; you obtain not a lattice but a shifted, or translated, lattice. Thanks to the Jacobi theta function, the one-variable theta series of Z + 1/2 can be expressed as Σ_k q^{(k+1/2)²}, and this theta series is generally called θ2(τ). So we now have two theta series, θ2 and θ3, both deduced from the Jacobi theta function.
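As a sanity check on these definitions, here is a brute-force sketch (the function name and the box truncation are mine) that recovers the first coefficients of the one-variable theta series directly from a generator matrix, by enumerating x = Gu over a box of integer vectors:

```python
import numpy as np
from itertools import product
from collections import Counter

def theta_coeffs(G, radius=10, max_norm=8):
    """Count lattice points x = G u by squared Euclidean norm ||x||^2,
    i.e. the first coefficients of Theta_Lambda(tau) = sum_x q^{||x||^2}.
    The radius must be large enough to cover all points with norm <= max_norm."""
    n = G.shape[1]
    counts = Counter()
    for u in product(range(-radius, radius + 1), repeat=n):
        x = G @ np.array(u)
        norm2 = float(x @ x)
        if norm2 <= max_norm + 1e-9:
            counts[round(norm2, 6)] += 1
    return dict(sorted(counts.items()))

# Z^2: 1 point of norm 0, 4 of norm 1, 4 of norm 2, none of norm 3, 4 of norm 4, ...
print(theta_coeffs(np.eye(2)))
```

For Z² the counts begin 1, 4, 4, 0, 4, 8 for squared norms 0 through 5, i.e. Θ(τ) = 1 + 4q + 4q² + 4q⁴ + 8q⁵ + ...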
A very well-known example of a two-dimensional lattice is the A2, or hexagonal, lattice. These are its lattice points, and this is a basis of the lattice: the vectors (1, 0) and (1/2, √3/2). What we call the fundamental parallelotope is the body spanned by the basis vectors, and the Voronoi region is the set of points that are closer to a given lattice point than to any other one; in this case it is a regular hexagon, hence the name hexagonal. From this basis we get a generator matrix of the A2 lattice, although of course there are infinitely many possible generator matrices for a given lattice. The one-variable theta series of A2 can be computed using only θ2 and θ3, namely

Θ_A2(τ) = θ3(τ)θ3(3τ) + θ2(τ)θ2(3τ),

and I will show you shortly why it is given by this expression. If we develop it as a q-series, we obtain

Θ_A2 = 1 + 6q + 6q³ + 6q⁴ + ...,

which means that we have one point of norm zero, which is the zero point of course, then six points of norm one: if this is the zero point, these are its six nearest neighbors. Then six points of norm three, six points of norm four, et cetera. And of course it is an infinite series.

Now that we have seen some generalities, let's go to the multidimensional case before moving on to the applications. The first thing to see is how to compute the theta series of a multidimensional lattice purely as a function of the Jacobi theta function. We need some basic rules to do that; in fact it works like Lego: you put elements on top of each other and obtain what you need. We need just three rules.

First, what is the theta series of a lattice which is the direct sum of two lattices? Suppose we know the theta series of Λ1 and of Λ2, and Λ = Λ1 ⊕ Λ2. Then the theta series of Λ is simply the product of the theta series of Λ1 and the theta series of Λ2. Very simple.

Second, we need the theta series of a scaled lattice. If you multiply all points of Λ by some positive number α, the theta series of αΛ is simply the theta series of Λ evaluated at α²τ. You can see this directly from the expression of the theta series.

And finally, a very important one: the union of cosets. A lattice is a group, so you can define cosets in the usual group-theoretic sense. Suppose Λ is the union of two cosets C1 and C2. Then the theta series of Λ is simply the theta series of C1 plus the theta series of C2.

Let's see this on one example. The hexagonal lattice A2 can in fact be shown to be the union of two cosets: the lattice Z ⊕ √3Z, and the translated coset (Z + 1/2) ⊕ √3(Z + 1/2). This will explain the formula we got for its theta series: we just use the three rules, since we have here a scaled lattice, a direct sum, and a union of cosets.
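Here is a numerical check of that decomposition (helper names are mine; q is taken real in (0, 1), i.e. τ purely imaginary, and all sums are truncated): the three rules give Θ_A2(τ) = θ3(τ)θ3(3τ) + θ2(τ)θ2(3τ), which we can compare against brute-force enumeration of the lattice.

```python
import numpy as np

def theta3(q, K=40):
    k = np.arange(-K, K + 1)
    return np.sum(q ** (k**2))

def theta2(q, K=40):
    k = np.arange(-K, K + 1)
    return np.sum(q ** ((k + 0.5)**2))

def theta_A2_direct(q, R=40):
    """Brute force: sum of q^{||x||^2} over x = a*(1,0) + b*(1/2, sqrt(3)/2)."""
    a, b = np.meshgrid(np.arange(-R, R + 1), np.arange(-R, R + 1))
    norm2 = (a + b / 2)**2 + 3 * b**2 / 4
    return np.sum(q ** norm2)

q = 0.3   # any 0 < q < 1
# the three "Lego rules": a union of two cosets, each a direct sum of scaled 1-D pieces;
# scaling by sqrt(3) turns tau into 3*tau, i.e. q into q^3
via_rules = theta3(q) * theta3(q**3) + theta2(q) * theta2(q**3)
print(theta_A2_direct(q), via_rules)   # agree to machine precision
```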
So we have everything we need to compute the theta series of A2. The same machinery works for another example, in dimension eight: E8, the Gosset lattice. This is the densest lattice in dimension eight, and it has been proven very recently, a few months ago, that this lattice in fact gives the best sphere packing in dimension eight. We already knew it was the best lattice packing, but we now know it is the best packing in general.

So you understand now why, from the coset decomposition of A2, we obtain its theta series formula: we have the product of two theta series corresponding to Z and to √3Z, then the product of two theta series corresponding to Z + 1/2 and √3(Z + 1/2), and the union of the two cosets means the sum of the two products. Everything is very simple here.

A little bit more complex is this construction, what we call binary construction A. How does it work? You take a binary linear code of length n and dimension k. You consider one coset as being all even-valued n-dimensional integer vectors, that is, 2Z^n, and you add to all these points one given codeword from the binary code; the components of this codeword are just zeros and ones. Then you take the union over the entire code C of all these cosets,

Λ = ∪_{c ∈ C} (2Z^n + c),

and you obtain a new lattice Λ. If C is linear, it is very easy to prove that this union of cosets is indeed a lattice.

Now let's go on. We know that 2Z has theta series θ3(4τ). Why? Because θ3(τ) is the theta series of Z, and 2Z is just a scaled version of Z, so by the second rule the theta series of 2Z is θ3(4τ). Next, 2Z + 1: we will get something of the form 2Z + 1 in every component where the codeword c has a one. But 2Z + 1 is just 2(Z + 1/2), and since the theta series of Z + 1/2 is θ2(τ), the theta series of 2(Z + 1/2) is θ2(4τ). Then, using the first rule, 2Z^n has theta series θ3(4τ)^n. And finally, the coset 2Z^n + c has theta series

θ2(4τ)^{w_H(c)} · θ3(4τ)^{n − w_H(c)},

where the Hamming weight w_H(c) is the number of ones in the codeword c: θ2 corresponds to a component equal to one, and θ3 to a component equal to zero, so θ2 appears to the power w_H(c) and θ3 to the power n − w_H(c), the number of zeros. To obtain the theta series of Λ, which is the union of all these cosets, we just add all these terms, and what we obtain is the Hamming weight enumerator of the code.
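A sketch of this bookkeeping for a toy code (the [3,1] repetition code {000, 111}; names and truncations are mine): sum θ2(4τ)^{w_H(c)} θ3(4τ)^{3 − w_H(c)} over the codewords and compare with brute-force enumeration of Λ = 2Z³ + C. Note that q⁴ plays the role of evaluation at 4τ, since q = exp(iπτ).

```python
import numpy as np
from itertools import product

def theta3(q, K=40):
    k = np.arange(-K, K + 1); return np.sum(q**(k**2))

def theta2(q, K=40):
    k = np.arange(-K, K + 1); return np.sum(q**((k + 0.5)**2))

# a toy binary linear code: the [3,1] repetition code {000, 111}
G_code = np.array([[1, 1, 1]])
codewords = [tuple(np.mod(np.array(m) @ G_code, 2))
             for m in product([0, 1], repeat=G_code.shape[0])]

q = 0.25
n = G_code.shape[1]
t2, t3 = theta2(q**4), theta3(q**4)   # q^4 <=> evaluating at 4*tau
# rule-based theta series: sum over codewords of theta2^{w(c)} * theta3^{n-w(c)}
via_rules = sum(t2**sum(c) * t3**(n - sum(c)) for c in codewords)

# brute force over Lambda = union of cosets 2Z^n + c
R = 12
total = 0.0
for c in codewords:
    for u in product(range(-R, R + 1), repeat=n):
        x = 2 * np.array(u) + np.array(c)
        total += q ** float(x @ x)
print(via_rules, total)   # agree up to truncation error
```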
The Hamming weight enumerator of a binary code is a two-variable polynomial W(X, Y): for each codeword, the power of X is its number of zeros and the power of Y is its number of ones, that is, its Hamming weight. So just replace X by θ3(4τ) and Y by θ2(4τ) and you obtain the theta series of the lattice Λ produced by construction A.

Let's go back to our example, the famous lattice E8. I recall everything concerning θ3, θ2 and construction A: by the third rule, the theta series of Λ is the sum, over all cosets 2Z⁸ + c, of the theta series of that coset, which is what we saw on the previous slide. To construct E8, we need the extended Hamming code of length eight and dimension four, whose weight enumerator is

W(X, Y) = X⁸ + 14X⁴Y⁴ + Y⁸,

which means we have one codeword of weight zero (the all-zero codeword), one codeword of weight eight (the all-one codeword), and 14 codewords of weight four. Just replace X and Y by θ3(4τ) and θ2(4τ) respectively and you get the theta series, and these are the first terms of its q-series development. This can be generalized to a very large family of lattices, so we can compute theta series efficiently in most interesting cases. In fact, the lattice obtained here is not exactly E8 but a scaled version, √2·E8; to get the theta series of E8, just divide all the exponents by two.

So with this trick we have a very nice way of computing theta series when the lattice is obtained by binary construction A. Of course there are many, many other constructions, and especially when the dimension of the lattice increases, binary construction A is no longer enough; we then need other kinds of constructions, for which we have the same types of tools. I will not go deep in this direction, because it would take very long.

Another way of getting the theta series of E8 is to use the theory of modular forms, because the theta series of E8 is a very specific modular form: a modular form for the full modular group SL2(Z), the group of 2-by-2 integer-valued matrices with determinant equal to one. What we know is that the theta series of E8 is just the Eisenstein series E4,

Θ_E8(τ) = E4 = 1 + 240 Σ_{m ≥ 1} σ3(m) q^{2m},

where σ3(m) is the sum of the cubes of the divisors of m. So we have another way of computing the theta series of E8, and we can check that we get exactly the same coefficients for q², q⁴, q⁶, et cetera.
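A numerical check of the two routes to this theta series (truncations and names are mine): construction A with the weight enumerator X⁸ + 14X⁴Y⁴ + Y⁸ gives the theta series of √2·E8, which is Θ_E8 evaluated at 2τ, so it must agree with 1 + 240 Σ σ3(m) q^{4m}.

```python
import numpy as np

def theta3(q, K=60):
    k = np.arange(-K, K + 1); return np.sum(q**(k**2))

def theta2(q, K=60):
    k = np.arange(-K, K + 1); return np.sum(q**((k + 0.5)**2))

def sigma3(m):
    """Sum of the cubes of the divisors of m."""
    return sum(d**3 for d in range(1, m + 1) if m % d == 0)

q = 0.2
# construction A with the [8,4] extended Hamming code: theta series of sqrt(2)*E8
t2, t3 = theta2(q**4), theta3(q**4)
theta_sqrt2_E8 = t3**8 + 14 * t3**4 * t2**4 + t2**8

# Eisenstein series E4 = 1 + 240 * sum_m sigma3(m) q^{2m} is Theta_E8(tau);
# at 2*tau the exponents double, giving q^{4m}
eisenstein = 1 + 240 * sum(sigma3(m) * q**(4 * m) for m in range(1, 40))
print(theta_sqrt2_E8, eisenstein)   # agree to machine precision
```

The lowest-order check can even be done by hand: θ3(4τ)⁸ contributes 16q⁴, and 14·θ2(4τ)⁴ contributes 14·(2q)⁴ = 224q⁴, for a total of 240q⁴, matching 240·σ3(1)·q⁴.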
Okay, so now let's leave the constructions and computations of theta series and go to the applications. Let's start with the application that really needs these theta series developments the most: the Gaussian wiretap channel. Suppose Alice wants to transmit to Bob through a Gaussian channel, that is, a channel with additive Gaussian noise, of variance N0, and there is an eavesdropper, Eve, who can overhear the signal transmitted by Alice through another Gaussian channel, with noise variance N1. Information theory tells us that the secrecy capacity, the amount of information that Alice can reliably transmit to Bob such that Eve learns essentially nothing about it, is the difference between the capacity of the channel from Alice to Bob and the capacity of the channel from Alice to Eve. Of course, this secrecy capacity is non-zero only if N0 is smaller than N1, that is, if Bob has a channel advantage over Eve. And we know that by using lattice coset coding, this capacity can be achieved up to a small gap of half a bit, though I suspect this gap is just a technical artifact of an upper bound which is not very tight.

So what is lattice coset coding? Suppose we have a lattice Λb which is good for coding; this lattice may be shifted, that is, translated by some constant vector. And we have a sublattice Λe of Λb which is good for secrecy; we will see what good for secrecy means. As the subscripts suggest, Λb is the lattice for Bob and Λe is the lattice for Eve: Λe is the confusion lattice, while Λb is the good lattice for Bob.

Here is an example in two dimensions. All these points are lattice points of Λb, which we also call the fine lattice. For the sublattice, what we actually have is not exactly a sublattice of Λb but a sublattice of Λb shifted; either you consider that Λb is shifted or that Λe is, but one of the two lattices here is shifted. The sublattice is 2Z², while the fine lattice is Z². That means that if we quotient Λb by Λe we get a quotient group with four elements, because four is the index of Λe in Λb, and these four elements are represented by the star, circle, triangle and square. The big squares are the Voronoi regions of Λe. So any given point of Λb can be represented as one point of Λe, which says in which of those big squares we are, plus one element of Λb/Λe, which specifies whether, inside that square, we have a star, a circle, a triangle or a square.

The idea is to encode the data inside this quotient group. For example, if we have two bits to encode, then 00 will be encoded into a star, 01 into a circle, 10 into a triangle and 11 into a small square.
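As an illustration only, here is a toy sketch of this Z²/2Z² example (not the actual scheme from the talk: in a real scheme the point of the confusion lattice would be drawn from a discrete Gaussian over Λe rather than uniformly from a box, and all names here are mine). Two secret bits select a coset of Λe = 2Z² in Λb = Z², and the transmitted point is the coset representative plus a random point of Λe that carries no information.

```python
import numpy as np

rng = np.random.default_rng(0)

# fine lattice Z^2, coarse (confusion) lattice 2Z^2: the quotient has 4 cosets
coset_reps = {(0, 0): np.array([0, 0]),   # "star"
              (0, 1): np.array([0, 1]),   # "circle"
              (1, 0): np.array([1, 0]),   # "triangle"
              (1, 1): np.array([1, 1])}   # "square"

def encode(bits, spread=4):
    """Secret bits pick the coset; the random point of 2Z^2 is pure confusion."""
    dither = 2 * rng.integers(-spread, spread + 1, size=2)
    return coset_reps[bits] + dither

def decode(y):
    """Bob: quantize to Z^2, then read off the coset, i.e. the point mod 2."""
    return tuple(np.rint(y).astype(int) % 2)

bits = (1, 0)
x = encode(bits)
y = x + rng.normal(0, 0.1, size=2)   # Bob's low-noise channel
print(decode(y) == bits)             # True with high probability
```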
So now, what does Eve see? Eve sees what has been transmitted plus some noise. But because the data have been encoded inside this quotient group, the noise that matters for Eve, if she is interested in decoding the data, is not the Gaussian noise but a folded Gaussian noise, whose PDF is a sum of Gaussian measures. What happens is that if the variance of the initial noise is not too big, she can still decode in some way; but if the variance becomes big enough, what she sees is almost flat, and because it is almost flat she has no way to decode the information encoded in the quotient group.

It can be illustrated this way: suppose Alice sends this point, and suppose that all the points surrounded by pink areas correspond to the same coset, which means they all correspond to the same secret information. Because Eve sees a variable distributed as a sum of Gaussian measures, and we want the information leakage to Eve to be as close to zero as possible, Eve has to see a distribution that is as close as possible to the uniform law. So we have to define a distance between this distribution and the uniform distribution, and this is what has been done. We have this sum of Gaussian measures f_{Λe,σ}, associated with the lattice Λe and the noise parameter σ, and we define what we call the flatness factor of Λe: the L-infinity distance between f_{Λe,σ}(x) and the uniform density on the Voronoi region of Λe, whose PDF equals 1/V(Λe). The volume V(Λe) is just the absolute value of the determinant of the generator matrix, which is also the volume of the Voronoi region of the lattice. And it can be proved that this L-infinity distance can be computed exactly, and that it is expressed through the theta series of Λe. So it is very important to know the theta series of Λe: if we want the information leakage to Eve to be as close to zero as possible, we need to minimize this theta series over all lattices.

And now we can show that the mutual information between the message transmitted by Alice and what Eve sees can be upper-bounded by a value which is an increasing function of ε_{Λe}, this flatness factor. We have shown that if the volume-to-noise ratio γ is larger than one, then there exists a family of lattices Λe, indexed by the dimension n, such that ε goes to zero as n goes to infinity. This was the first step in proving that we have a family of lattices that achieves the secrecy capacity of the Gaussian wiretap channel.
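For reference, the closed form used in the flatness-factor literature (for example in the work of Ling, Luzzi, Belfiore and Stehlé on lattice codes for the Gaussian wiretap channel) is ε_Λ(σ) = V(Λ)/(2πσ²)^{n/2} · Θ_Λ − 1, with the theta series evaluated where q^{‖x‖²} = exp(−‖x‖²/(2σ²)), i.e. at τ = i/(2πσ²). A small sketch for Λ = Z^n (names and truncations mine):

```python
import numpy as np

def theta_Zn(sigma2, n, K=60):
    """Theta series of Z^n at tau = i/(2*pi*sigma2):
    (sum_k exp(-k^2 / (2*sigma2)))^n, truncated to |k| <= K."""
    k = np.arange(-K, K + 1)
    return np.sum(np.exp(-k**2 / (2 * sigma2)))**n

def flatness_factor(sigma2, n, volume=1.0):
    """epsilon = V/(2*pi*sigma2)^{n/2} * Theta - 1: the L-infinity distance
    between the folded Gaussian and the uniform density on the Voronoi region."""
    return volume / (2 * np.pi * sigma2)**(n / 2) * theta_Zn(sigma2, n) - 1

for sigma2 in [0.05, 0.1, 0.3, 0.6, 1.0]:
    print(sigma2, flatness_factor(sigma2, n=2))
# as sigma^2 grows the folded density flattens and epsilon -> 0,
# cf. the 2-D pictures with sigma^2 = 0.3 and 0.6 earlier in the talk
```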
It is very important here to have the full theta series of the lattice, and not only some of its terms. Why? Look at this lattice, Γ72, a 72-dimensional lattice discovered recently. This lattice is very special: it is the densest we know in dimension 72. These are the first terms of its theta series: the number of points of Euclidean norm 8 is already very big, and of course these numbers increase enormously as the norms we consider get bigger and bigger. This is the flatness factor computed with the full theta series: when the generalized SNR becomes small, the flatness factor goes to zero, and quite quickly in this case, because we are in dimension 72. That is as it should be: it means the distance between the distribution seen by Eve and the uniform one goes to zero. And this is the same flatness factor computed using only the terms up to norm 20: as you can see, it is meaningless, because it becomes negative, and the flatness factor is a distance, so of course it cannot be negative. That is why it is so important to be able to compute the whole theta series and not only the first terms.

Another example of the use of theta series: consider regular lattice coding on an additive white Gaussian noise channel. In this case the error probability can be upper-bounded by an expression in which, once again, the theta series of the lattice appears.

Yet another application of theta series is what we call probabilistic shaping. Even in a regular communication problem, we can only transmit a finite subset of a lattice, and in general we use a finite subset with small Euclidean norms, that is, with small energy or power. But there is another way of doing it: use the full lattice, but weight each point x of the lattice with a probability proportional to exp(−‖x‖²/(2σ²)). This is a Maxwell-Boltzmann distribution on the lattice points, in other words a discrete Gaussian distribution. The problem is then: how can we define the power and the rate of such a signal set? These are the two most important parameters we need to know in order to transmit. The power per dimension is the sum over all lattice points of the squared Euclidean norm of the point times its probability, and we can prove that it is just the logarithmic derivative of the theta series of the lattice Λ, evaluated at τ = i/(2πσ²). So we have a very direct relation between the power of the constellation we are using and the theta series of the lattice. Now the rate: how to define the rate of such a constellation? Just the entropy, normalized by the dimension to get a rate per dimension; it is a definition, of course. And we can show in the same way that it is given by P/(2σ²), with P the power above, plus (1/n) times the log of the theta series. So once again we have a very direct characterization of power and rate, through the theta series of the lattice, if we want to use probabilistic shaping. It is a very, very important tool even in this case.
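A small sketch of both identities for Λ = Z, n = 1 (names and truncations are mine; writing s = 1/(2σ²) so that q^{‖x‖²} = exp(−s‖x‖²) turns the logarithmic derivative in τ into a derivative in s):

```python
import numpy as np

K = 25
x = np.arange(-K, K + 1).astype(float)   # the lattice Z, truncated
sigma2 = 0.8
s = 1 / (2 * sigma2)

theta = np.sum(np.exp(-s * x**2))        # Theta_Z at tau = i/(2*pi*sigma2)
p = np.exp(-s * x**2) / theta            # Maxwell-Boltzmann / discrete Gaussian weights

# power per dimension: directly, and as minus the log-derivative of the theta series
P_direct = np.sum(x**2 * p)
ds = 1e-6
P_theta = -(np.log(np.sum(np.exp(-(s + ds) * x**2))) - np.log(theta)) / ds
print(P_direct, P_theta)

# rate per dimension: the entropy, and the closed form P/(2*sigma^2)*log2(e) + log2(Theta)
R_direct = -np.sum(p * np.log2(p))
R_theta = P_direct / (2 * sigma2) * np.log2(np.e) + np.log2(theta)
print(R_direct, R_theta)
```

The rate identity follows in one line: since −log2 p(x) = s‖x‖² log2 e + log2 Θ, averaging over p gives H = sP·n·log2 e + log2 Θ, i.e. R = P/(2σ²)·log2 e + (1/n) log2 Θ per dimension.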
There are many, many other ways of using theta series, but I will stop the first part here and go to the second part. As I told you, I will not unveil too many things in this second part; it is related to waveform design. Here too we can show that lattice theta series can be very important, but in this case we will have to use the two-variable theta series, not the one-variable one as in the analysis part.

First of all, let me recall some general points concerning communication over a nonlinear integrable channel. The one we have to consider at Huawei, and in many telecommunication companies, is the nonlinear optical channel. Maybe in some years we will have to consider other, more exotic channels, but for now it is the only one of interest for us. This plot shows what has been achieved so far; these are fairly old results, the newest from 2012, and there are somewhat better ones now, but still. This is optical communication: the SNR on one axis and the achievable spectral efficiency on the other. This curve is the channel limit, log(1 + SNR), which we would have if the channel were linear. But it is not: we are on a nonlinear channel, and if you treat the effect of the nonlinearities simply as noise, this is what you get: the rate reaches a maximum, and beyond it the nonlinearities become too important, everything becomes a mess, and nothing works correctly.

In a single-mode fiber, light propagation is governed by the nonlinear Schrödinger equation. It is an approximation, but a very good one for what we have to do. Commercial communication systems up to now operate in the quasi-linear regime and treat the nonlinear effects as noise, exactly as I just described. But there is another way to break this paradigm, proposed quite recently: use the nonlinear Fourier transform. For that, the equation has to be integrable, which is the case for the nonlinear Schrödinger equation; I will not go into the details of this integrable-equation setting, because I do not know too much about it myself. With the nonlinear Fourier transform we are able not only to solve the nonlinear equation, but also to design communication systems in the same way as has been done for many years in wireless communications using OFDM. We will have a kind of nonlinear OFDM system. Why? Because in the nonlinear Fourier spectrum we have a superposition principle: we can add components, which is of course not possible in the time domain, because the equation is nonlinear. So what we can do is take the nonlinear Fourier transform, superpose all the signals we want to transmit, and then go back to the time domain.

Here is an example of the nonlinear Fourier transform of a rectangular pulse. The nonlinear Fourier transform has two components. There is the continuous part, which is more or less equivalent to what we get with the linear Fourier transform, and it is continuous in the sense that it is defined on all real frequencies. And then there is a discrete part, which corresponds to soliton solutions.
This discrete part appears when the nonlinear regime becomes important. For example, take the rectangular pulse, with amplitude A. If A is not too big, we get the famous sinc function in the continuous part, because for small A this is almost the ordinary Fourier transform, and there is nothing in the discrete spectrum. If A becomes bigger, the continuous spectrum starts to differ from the sinc function, and a component appears in the discrete part of the spectrum. If A becomes even bigger, then, as you can see, even the central peak almost disappears, and we have two components in the discrete spectrum, which become bigger, et cetera.

So what is behind the nonlinear Fourier transform? First of all, the nonlinear Fourier transform depends on the nonlinear equation: if we change the nonlinear equation, we have to change the transform; it is not the same one. Then, as I told you, the nonlinear spectrum contains a continuous part and a discrete part. Next, the nonlinear Fourier transform diagonalizes the nonlinear channel: using it, the channel becomes diagonal in the same sense as when you use OFDM on a linear channel, exactly the same sense, except that here the channel gains, instead of being scalars, can be matrices. So it is a little bit different, but it is the same kind of phenomenon. And we have a superposition principle in the nonlinear Fourier domain: we can add components, which of course we cannot do in the time domain. All of this has been extensively investigated in the context of water waves, because some ocean waves are governed by exactly the same equation as the propagation of light inside the fiber, the nonlinear Schrödinger equation; just exchange the space and time variables and you have exactly the same kind of equation.

Now, I will not detail this here, but in order to construct these solutions based on the nonlinear Fourier transform, one had to assume boundary conditions at minus and plus infinity. If you want something that is, let's say, easier to implement, actually implementable, it is better to consider the equivalent of Fourier series instead of the Fourier transform, because in our field everything is done digitally. In this case the equivalent objects are what are called generalized Fourier series, and what are these generalized Fourier series? They are the Riemann theta functions. So, once again, theta functions appear.

So, just to finish, and this will be short: Riemann theta functions as waveforms, not only to analyze, but as signals to transmit. In this setting, with these generalized Fourier series, we only have discrete spectral components, and any finite-band solution can be considered, where a finite-band solution is just the equivalent of a finite-bandwidth signal.
These solutions can be given in terms of Riemann theta functions, which are these series: a sum over vectors A in Z^D of exp(iπAᵀΓA + 2iπAᵀz), where Γ is a symmetric matrix whose imaginary part is positive definite, so that the series converges. And as you can see, in our case, if we choose Γ with zero real part, so that we keep only a positive definite imaginary part, then this Γ, because it is symmetric, defines the Gram matrix of a lattice. The Gram matrix, for those who do not know, is just the product of the transpose of the generator matrix with the generator matrix itself, GᵀG. In this case we can express the Riemann theta function as the two-variable theta series of that lattice. And we know that the solution is then given as a ratio of Riemann theta functions associated with some matrix Γ, with two shift vectors, theta-plus and theta-minus, that depend on many things related to the propagation, which I will not detail either. So, up to some easy factor, the solution is just the ratio between the two-variable theta series of a shifted lattice and of the same lattice shifted in another way. This is still under exploration, let's say, and I will stop here.
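A numerical sketch of that specialization (helper names and the box truncation are mine): with Γ = τ·GᵀG and τ purely imaginary, the Riemann theta function evaluated at Gᵀz coincides with the two-variable theta series Θ_Λ(z, τ) of the lattice generated by G, checked here on the hexagonal lattice A2.

```python
import numpy as np
from itertools import product

def riemann_theta(z, Omega, K=10):
    """Riemann theta: sum over A in Z^D of exp(i*pi*A^T Omega A + 2i*pi*A^T z),
    truncated to the box |A_i| <= K; Im(Omega) must be positive definite."""
    D = Omega.shape[0]
    total = 0.0 + 0.0j
    for A in product(range(-K, K + 1), repeat=D):
        A = np.array(A)
        total += np.exp(1j * np.pi * A @ Omega @ A + 2j * np.pi * A @ z)
    return total

# hexagonal lattice A2 again: Omega = tau * (G^T G), tau purely imaginary
G = np.array([[1.0, 0.5], [0.0, np.sqrt(3) / 2]])
tau = 0.5j
Omega = tau * (G.T @ G)                  # i times a positive definite matrix

z = np.array([0.1, -0.2])                # elliptic (vector) variable
# Riemann theta at G^T z equals the two-variable theta series of the lattice
lhs = riemann_theta(G.T @ z, Omega)
rhs = sum(np.exp(1j * np.pi * tau * float(x @ x) + 2j * np.pi * float(z @ x))
          for u in product(range(-10, 11), repeat=2)
          for x in [G @ np.array(u)])
print(lhs, rhs)   # agree up to truncation error
```

The identity is just a reindexing: for x = GA we have ‖x‖² = AᵀGᵀGA and ⟨z, x⟩ = Aᵀ(Gᵀz), so the two sums run over exactly the same terms.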
Let's go to the conclusion. As we have seen, theta series, or theta functions, are a powerful tool for the analysis of lattice coding. We know many results about the theta series of unimodular lattices; I did not introduce what unimodular or n-modular lattices are, but you should know that they are among the best lattices, so they are the lattices of interest for us, and even when the dimension becomes very high we know many things about their theta series. In particular, we have some concentration results: lattices in the same genus, which would take a little while to define, behave essentially the same way in terms of their theta series when the dimension becomes high enough. And theta functions may provide an elegant solution to the problem of communication over the nonlinear channel. I will stop my presentation here. Thank you.

Q: Are there questions concerning the talk? Very nice talk, first. A first question: you showed two expressions involving the probability of error in a given dimension. Can one derive error exponents, à la Gallager, from this type of representation?

A: Yes, there is a paper about that by Poltyrev, from the 80s; he also derived error exponents, first for infinite constellations in general, and then specialized to lattices.

Q: A second question: once you have your lattice, you can randomize its position in the ambient space so as to make it a stationary point process. Can this be leveraged to compute some of its characteristics, like the mean number of points in a volume, things like that?

A: I know that there are some works about random lattices, but I do not know too much about them.

Q: I thought the most important thing about lattices was the structure; you do not want to randomize.

A: For what we have to do, yes, you are right: you stay inside the lattice. When I say random lattice, it is the choice of the lattice which is random; maybe that is not exactly what you mean. So, for your question, I do not know. Other questions?

Q: I have two questions. One is related to, let's say, the engineering of lattices. You get these very precise numbers, like 0.98 and so on, but when you implement things on a chip you have a resolution of 12 bits, basically, so you will never get those numbers. How robust is all this if we decide to implement lattices with, say, 8-bit resolution?

A: It is the same problem as when you implement big constellations, for example 256-QAM or larger: you have exactly the same issue, because QAM is just a finite part of the Z² lattice. Of course you can lose something, and if you implement lattices in new-generation equipment, the ADC resolution of the implementation should also go higher and higher. But as I said, it is exactly the same problem as with QAM; using a lattice has no more impact than using a regular QAM constellation, because the lattices we construct for communication purposes can be built from these QAM constellations. So the precision problems coming from the ADC, et cetera, will have exactly the same impact whether you use lattice codes or QAM plus bit-interleaved coded modulation, which is the solution implemented right now.

Q: And the second question is about the application. You showed the application to optical communication, which is nonlinear. A lot of people are working on molecular nanotechnologies, using molecules as a means to communicate, where you have diffusion and things like that. Do you think these techniques can also be used there?

A: The only thing you have to check is whether the equation governing this diffusion, in the nonlinear regime of course, is integrable. If it is, then you can use the same kind of tools, but you first have to work out many things: what the Fourier transform is in this case, et cetera. There is a lot of work to do, but maybe; I do not know. In general, if you want your model to be very precise, the equation is no longer integrable. But I would guess there is always some integrable equation not too far from the system you want to consider, if you are in some nonlinear medium.

Q: There is a third thing I wanted to ask, and I will finish with that: you did not talk about the decoding aspects.

A: Yes, no time to do that. (Laughter)
Q: The point is that, except when you take off-the-shelf LDPC codes, the biggest difficulty when you code is the decoding part, where the complexity is crazy. Can you give us some hints on how difficult the processing at the decoding end is?

A: In fact there has been a lot of progress recently. Let's say that until maybe a couple of years ago, we knew how to construct some low- or medium-dimensional lattices, and the decoding was really painful; it was a problem. But recently, with high-dimensional, or even medium-dimensional, lattices built using what is called construction D, which is a multi-level construction, we know that multistage decoding works: decode the code at the first level, remove the corresponding component, go to the next level, et cetera. It can be proved that you can achieve the capacity of the Gaussian channel by doing that, and even in practice it works very well. The decoding complexity in this case is very reasonable: no more than what you have with, for example, bit-interleaved coded modulation; the same order.

Q: Okay, one more question. Generally, when we use a theta function, it is because it is related to a modular form or an automorphic form, and we do not quite see whether that matters for what you aim at. So I would like to know: even for constructing new objects, the fact that automorphic forms exist, and that a set of automorphic forms can be combined to produce new objects, means there is a very theoretical part that could help you obtain new objects, and there have been a lot of recent developments, for example around integrable systems.

A: Yes. We would be very happy to have some help from that point of view, because, first of all, I only know how to use modular forms through some software, that's it; I am not really a specialist in automorphic forms. But I think we could have some collaboration on that, because we have some needs which are not fulfilled. In particular, for some other problems, for example MIMO communication, we need to understand the behavior of some coding schemes we are using, and in that case some objects appear that are, for us, fancy objects we cannot analyze. For example, instead of having a theta series, in the MIMO case we can have lattices defined over some non-commutative order, and then, instead of the Euclidean weight, we have to consider some determinant, a sum of determinants to the power of something, I do not remember the exact expression. I would guess it should be related to some automorphic form, but I have no idea; it is an object I do not know how to manipulate. So yes, maybe we could, sure.

Other questions? I think we got everything; let's thank the speaker. Thank you.