Hello, and welcome to the webinar series of the ITU Journal on Future and Evolving Technologies. My name is Alessia Magliarditi, from ITU, the International Telecommunication Union. ITU is the United Nations specialized agency for information and communication technologies. ITU allocates frequencies to the services that make use of the radio communication spectrum, develops standards, and assists developing countries in setting up their information and communication infrastructure. ITU and academia share a commitment to the public interest, and this commitment is embodied by the ITU Journal, which offers complete coverage of communication and networking paradigms free of charge for both readers and authors. Our journal welcomes submissions at any time on any topic within its scope, and we believe that this webinar series, launched this year in March, will inspire more contributions from researchers around the world. It is my pleasure to open today our 10th webinar with Professor Muriel Médard from MIT, USA. We count on your support to make this webinar an interesting experience, so please submit your questions via the Q&A channel, and we will address them to the speaker during the Q&A session. After the talk and the Q&A, please stay online: we have something special for you, the Wisdom Corner, with life lessons. Professor Médard agreed to a very personal chat, so she will share with us some lessons learned over the years that might perhaps be useful for some of you. It is my pleasure now to introduce the moderator of this webinar, Professor Ian Akyildiz, editor-in-chief of the ITU Journal and founder and president of Truva, from the United States. Professor Akyildiz is Ken Byers Chair Professor in Telecommunications, emeritus, at the Georgia Institute of Technology. Over the last two decades, he established many research centers worldwide, including in South Africa, Spain, Finland, and Saudi Arabia. He is editor-in-chief emeritus of high-impact journals, highly cited, and at the top of the most prestigious international rankings. He is a visiting distinguished professor at several universities around the world, and his current research interests include 6G/7G wireless communication systems, holographic communication, molecular communication, bio-nano things, intelligent surfaces, nanonetworks, and many other subjects. So it is my pleasure to give the floor to Professor Akyildiz to introduce the speaker and to moderate the Q&A session. Thank you, thank you. Good morning. Thank you, Alessia. Good morning, good afternoon, and good evening worldwide. I welcome you all to the second season and the final episode, the final presentation, of our ITU Journal Future and Evolving Technologies webinar series. I have the immense pleasure to introduce to you one of the leading researchers of our era, Professor Muriel Médard. Muriel is the NEC Professor of Software Science and Engineering in the Electrical Engineering and Computer Science Department at MIT, where she leads the Network Coding and Reliable Communications Group in the Research Laboratory of Electronics. She obtained three bachelor of science degrees, in EECS in 1989, mathematics in 1989, and humanities in 1991, respectively, a master's degree in 1991, and a doctor of science degree in 1995, all from MIT. Muriel is extremely active on all fronts: in research and in service, both internally at MIT and externally in the IEEE and other research communities.
She served as technical program committee chair of ISIT, the flagship conference of the Information Theory Society — she did it twice — as well as of CoNEXT, WiOpt, and many other workshops. She has chaired the IEEE Medals Committee and served as a member and chair of many committees, including as inaugural chair of the IEEE Mildred Dresselhaus Medal committee. She was editor-in-chief of the IEEE Journal on Selected Areas in Communications (JSAC) and also served as editor and guest editor for many, many IEEE publications, all of them leading journals. She was also a member of the inaugural steering committees of the IEEE Transactions on Network Science and Engineering and the IEEE Journal on Selected Areas in Information Theory. She currently serves as editor-in-chief of the IEEE Transactions on Information Theory. She was elected president of the IEEE Information Theory Society in 2012 and served on its Board of Governors for almost 11 years. She has received many, many awards — it's really a long list, but why not read all of them? She received the 2013 MIT EECS Graduate Student Association Mentor Award. She set up the Women in the Information Theory Society group as well as the Information Theory Society Mentoring Program, for which she was recognized with the 2017 Aaron Wyner Distinguished Service Award. She also received many awards within MIT, was recognized as a Siemens Outstanding Mentor in 2004 for mentoring high school students, and serves on the Board of Trustees of the International School of Boston. Muriel is a member of the US National Academy of Engineering, elected 2020, a member of the German National Academy of Sciences Leopoldina, elected 2022, a fellow of the US National Academy of Inventors, elected 2018, of the American Academy of Arts and Sciences, elected 2021, and also a fellow of the IEEE. She holds honorary degrees from the Technical University of Munich and from Aalborg University, awarded in 2020 and 2022. She has received many best paper awards, as well as the IEEE Koji Kobayashi Computers and Communications Award, the ACM SIGCOMM Test of Time Paper Award, the IEEE William R. Bennett Prize in 2009, the IEEE Leon K. Kirchmayer Prize Paper Award in 2002, and about a dozen other conference paper awards. She has over 60 US and international patents, many of them licensed or acquired, and she has co-founded CodeOn as well as Steinwurf, for which she is currently Chief Scientist. Muriel has supervised over 40 master's students, 20 doctoral students, and over 25 postdoctoral fellows. In addition to all these professional activities, Muriel is also a fantastic family person. She raised — I cannot remember now — four or five children, and she is already a grandma at this young age. It's unbelievable, right? So apparently you can balance everything, family and professional life, and she is a very nice role model for that. So let me express my sincere thanks to Muriel for accepting our invitation and giving this webinar, entitled Deviation from the Standard: Toward the Opening of 5G Telecom. Again, thanks a lot, Muriel. It's yours. Thank you so much for such a kind introduction, and thank you very much to the entire team for inviting me. This is a fantastic opportunity and a great honor, and I am really looking forward to having a conversation with colleagues in this field on some of our most recent research. So the title here is Deviation from the Standard, and what I'm really hoping, as I said, is to start a conversation on a new area of research. This is all work with Ken Duffy, who is the head of the Hamilton Institute at Maynooth University in Ireland.
And let me start from scratch. These are, again, things we all know, but sometimes it's good to take a step back, particularly because in this area what we did was a very deliberate exercise of taking a step back to reconsider what would generally be viewed as classical, well-resolved, or at least well-studied areas. So what do we do in communications? We have to correct errors. And how do we correct these errors? Well, we use error-correcting codes. So suppose I have a string of bits, like the one on the left here. I have these eight bits; the last one is a zero, and that zero somehow is going to flip to a one. We generally have two broad ways of considering the reconstruction of the original bits. One is what we generally call hard detection, where we have no information about the reliability of the bits. This is particularly the case if you have something like storage. So here the last bit was flipped, but there is no indication that that bit is more unreliable than the rest. And of course there is a lot of comms, including things such as cable comms, where this is also the case. On the other hand, we have soft detection. In soft detection, we also have some information about the reliability of the bits. So here, for instance, maybe the bits in warmer colors are ones we are less certain about, and the ones in cooler colors are ones we are more certain about. Now, in order to be able to do a reconstruction, we have to add what we call redundant bits. Often I don't like the word redundant; it makes it sound like they are somehow superfluous or annoying or counterproductive. Maybe a better word would be to call them repair bits, because that's really what they are. But we will have to have some additional information to aid in the reconstruction; that's what these redundant bits are. How much of this redundant, or repair, information do we need? Well, we would like to put in as little as we can while of course still maintaining a reliable reconstruction. Typically the nomenclature we use is that we have K information bits — this is the actual payload — and those K information bits are extended to a length of N. The rate, which is the ratio of payload to overall bits transmitted, is denoted R and is the fraction K over N. Okay. So these redundant, or repair, bits: how do we construct them, and then, of course, how are they matched to the reconstruction? Well, currently these are two tightly intertwined processes. The encoding, the construction of these redundant bits, and the decoding, the reconstruction based on the received signal which includes these redundant bits — those two are co-designed. And that leads to what we see right now in 5G and what we have seen, of course, in previous generations: decoders are matched to codes. Now, one code may have several decoders, but generally, except in a couple of small special cases, one really cannot use the same decoder for a different code. So what kinds of codes do we have? Well, we have, say, cyclic redundancy checks (CRCs), which are pretty much everywhere, so omnipresent; those are generally used mostly just for error detection. Occasionally they can do a little bit of error correction, just a bit or so. Reed-Muller codes go with majority-logic decoding.
Bose-Chaudhuri-Hocquenghem (BCH) codes go with Berlekamp-Massey decoders. CRC-aided (CA) polar codes have been included in the 5G standard because they are so much shorter and therefore good for transmitting information that needs to be delivered in a timely fashion, such as control information; those codes go with CRC-aided successive cancellation list decoders. And codes such as random linear codes (RLCs) are very easy to decode, of course, on the erasure correction side, because all that requires is Gaussian elimination at the receiver; for error correction, those up until now had not been associated with any decoder. Indeed, it was supposed that they were not decodable. So this has led to this proliferation of distinct hardware, which is what I'm trying to represent here with this pile of chips of different decoders for different codes. Sometimes even just the same type of code but at different rates, for different Rs, may actually require a different piece of hardware. And this, particularly, has been the root of the need for standardization. So recall that our title is deviation from the standard — going away from standards. Why do we have standards? I mentioned CA polar codes, but, for instance, LDPCs, which have been around since the early 60s — they were actually the doctoral thesis of my doctoral advisor, Bob Gallager — were chosen for the data channels. Why do we need standardization? And by the way, the requirement for standardization, as this audience knows well, really comes with a lot of cost: a lot of cost in terms of inefficiencies, but also a lot of cost in terms of delays and in terms of very contentious discussions regarding these choices. So let's again take a step back and see: what are we really doing when we're doing this reconstruction? When we talk about coding on an academic level, we usually mean two different types of actions, two different types of processing of data, which a priori would seem to be entirely separate from each other. One is source coding, also called compression; that would be like zipping, using gzip. You have data — say here maybe you have four blocks of a hundred bits each — and from that data you're going to create a more succinct representation, hence the name compression. So maybe here I go from 400 bits to 200 bits, say two blocks of a hundred bits. We then have a different kind of coding, which rather than compressing the data, expands the data. So here I go from 200 bits to, say, 300 bits; in this example my R, my rate, is two thirds. And the reason I expand is because I'm going to be transmitting on a channel, and, as we saw at the beginning of the presentation, some of the bits might get flipped, in the same way that we saw that last bit get flipped. There might be some soft information, again as we saw in our first slide. And this reconstruction, the channel decoding, is really what we'll talk about today. After channel decoding there is source decoding — for instance, unzipping a file — and we get back the original data. There is always a probability of error, some possibility of loss, as we go through this cycle, but by and large the design is such that there is as little loss as possible. Okay, so let's first examine the first aspect of coding, which is the source coding, or compression. How was I able to go from 400 to 200 bits? We have H(S), the entropy of the source S. What this means is that somehow I was able, for every two bits, to come up with a representation which only required one bit. It means that the entropy was somehow below a half — that the rate of compressibility was a half or lower. That's how we were able to rewrite our data in a more compressed form.
Now, when we go to the expansion in order to transmit, we would like that expansion to be as small as possible, or, equivalently, the rate R to be as large as possible, while still allowing the channel decoding to be, with high probability, an accurate reconstruction of what was transmitted. Okay. The channel transmission we usually denote by X, because we communications engineers are brimming with imagination — sorry — we call it X, and the receiver sees something which we call Y. The effect of the channel we generally represent as an additive effect, often denoted by N. And really, it doesn't matter whether it's additive or not; it just means that it's invertible, okay? And it doesn't matter what the actual noise is; to some extent, all we really care about is the effect of the noise. This will be important later on when we look at soft information: we don't really care about the noise itself, we just care about its effect. And the idea that it's invertible means that if a genie came and told me what the noise effect was, I could invert that noise effect on Y and recover the X. Okay, so let's look at this noise effect. What if the noise was itself a source? Well, the most polite, unobtrusive, considerate noise that you could ever hope for would be a noise that would actually compress itself and place itself obligingly at the end of our transmission, so as not to bother anyone. The real problem with the noise, of course, is that that's not what occurs. The noise just happens wherever it happens; it's generally well represented in a stochastic manner, and we don't know where it happens. If we knew that it was always placed at the end, we wouldn't have any problem around channel coding. What we would do is just send the bits and leave a little space at the end: we'd send our k bits and we would leave n minus k bits at the end to accommodate the noise. In that case, as long as n minus k exceeded n times H(N), where H(N) is the entropy of the noise, then we would basically have left enough space for the compressed version of the noise to coexist with the transmitted signal without any interference between the two. But unfortunately, of course, that's not what the noise does. So why is R less than one minus H(N)? Well, it is really because the best that I could ever do would be for the noise to compress itself and just go to the end. So if you think of this as just a counting argument: we went from two to the 400 possibilities in theory to really only two to the 200 possibilities for actual data — that's why we were able to represent it by strings of length 200. We then took those strings of length 200 and mapped them to length-300 bit strings. And in terms of the noise, if we were to compress it, we have, with high probability, about two to the nH(N) possibilities for those noise strings. The outputs are two to the n possible strings. In order to be able to recover the data, it means that I must have enough possibilities to cover both the noise strings and the original data strings; even though the data has been mapped to length-300 strings, there are only two to the 200 possibilities.
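[Editorial note: to make the bookkeeping in this example concrete, here is a minimal sketch in Python. The numbers 400, 200, and 300 are simply the ones used above; the per-bit entropy value is an assumption chosen to match the two-to-one compression in the example.]

```python
# Source coding then channel coding, with the illustrative numbers from the talk.
raw_bits = 400                      # original data: four blocks of 100 bits
source_entropy_per_bit = 0.5        # assumed: entropy of 1/2 bit per raw bit
compressed_bits = int(raw_bits * source_entropy_per_bit)   # ~200 payload bits

k = compressed_bits                 # information (payload) bits
n = 300                             # coded bits sent over the channel
rate = k / n                        # R = K/N = 2/3

print(f"compressed: {raw_bits} -> {compressed_bits} bits")
print(f"channel code: k = {k}, n = {n}, rate R = {rate:.3f}")
```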
So I need that the total number of possibilities, which is two to the nR times — just from simple combinatorics — two to the nH(N), for all the noise strings that occur with high probability, be less than two to the n, which is the total number of possibilities for Y. And that is where R being less than one minus H(N) comes in. That right-hand side, that one minus H(N), many of us will recognize as being capacity: the capacity of a binary symmetric channel, where H(N) is the entropy of the noise effect. So this is really where capacity comes from. Okay, so do we know how to code for these systems? We can tell that we cannot get more than one minus H(N) from basic pigeonhole principles, or just from thinking of the best channel, which would be a channel that compresses its noise and places it at the end of the transmission in a predictable fashion. Okay, well, Shannon in 1948 did tell us that this one minus H(N) is actually achievable, and this is how he suggested, at a high level, that we proceed. He said: okay, remember that you have, say, these length-300 strings to transmit — those are my X's, so there are two to the 300 possible X strings — but you only had length-200 original data to transmit, so you only have two to the 200, or equivalently two to the nR, messages, where n is 300 in the case that we're talking about and R is two thirds, so nR is 200. And this is what he suggested: make a bag. In this bag, put all the length-300 strings, and make a very long shelf, and on that shelf put all your two to the 200 possible messages to transmit. Pick the first one from the shelf, go into the bag, pick out uniformly at random the first length-300 string, match those two together, record your matching, then put aside the first length-200 message, put your length-300 string back in the bag, take the second message, and repeat the operation — so, with replacement, independently match and record — two to the 200 times. Once you have finished doing that, you will give your interlocutor that record with its two to the 200 entries. You will then transmit to your receiver. The receiver will receive Y, as we saw before. The receiver will then consult that huge dictionary with two to the 200 entries and observe which of those two to the 200 entries is the most likely to explain the Y that they have seen, okay? So it basically is going to do a two to the 200 search and find the best explanation for the observed Y: that's maximum likelihood. You can see that maximum likelihood is the same as maximum a posteriori here; the reason is that these two to the 200 length-200 messages are all basically roughly equiprobable after the compression process has taken place. Okay, so we know the best we could hope for — that one minus H(N) — and we know how to do it; we have known this since just after World War II. So why are we not doing this? There are two reasons. First was the storage: that recording that I mentioned is actually very onerous. Second was the complexity at the receiver in having to check that massive book with two to the 200 entries in order to find the best explanation, the most likely explanation, which is what we mean by the best explanation for the Y that was observed, okay? So this is why this is not done.
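[Editorial note: the counting argument above can be written down in a few lines. The sketch below checks whether a rate R fits within the capacity 1 - H(N) of a binary symmetric channel; the flip probabilities are illustrative assumptions, not values from the talk.]

```python
import math

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

# Counting argument: 2^(nR) messages times ~2^(nH(N)) typical noise effects
# must fit inside the 2^n possible received strings, hence R <= 1 - H(N).
R = 2.0 / 3.0
for p in (1e-2, 1e-3, 1e-4):
    C = bsc_capacity(p)
    print(f"p = {p:.0e}: capacity = {C:.4f} bits/use, rate {R:.3f} feasible: {R <= C}")
```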
Now, the first difficulty has been obviated since the late 60s, early 70s, again by Bob Gallager. He said: you don't need to do that whole recording. Instead, what you can do is take a matrix of size K by N over a finite field. You select the entries of that matrix in that finite field randomly and uniformly, and by doing that, you will create a code which, with high probability, will be capacity-achieving. So, just to be clear: when we talk about the field of coding, it's not that we don't know how to construct codes. We have known how to construct, with high probability, optimal codes for a long time. It's really the field of decoding: we don't know how to construct appropriate decoders. So the second difficulty, the computational difficulty of consulting the book, is really what has driven the field of coding. Now, this is what explains this proliferation of different codes and these difficulties around standardization that I mentioned before. And what I'm presenting to you today is an alternate view. Instead of having this pile of chips, we have a single decoder. What you see there is a picture of our actual first GRAND chip; this first one is for hard detection, and we are able to decode all of these different codes. This removes the need for multiple decoders and, of course, the need for standardization. This means that, to a large extent — particularly as we are mostly concerned with much shorter codes; again, let's go back to the case study of CA polar codes, which I mentioned earlier as being the standard adopted for ultra-reliable low-latency communication — you can have very good decoding of ultra-reliable, low-latency short codes. Okay, how does this work? Well, let's go back to what I said before, where you had this Y, which needed to have enough possibilities to accommodate both the possibilities for the X, of which there are two to the nR, and the possibilities for the noise, of which, with high probability, there are two to the nH(N). I'm dropping here the N in H(N); we're calling it just H, the entropy of the noise. Here is how our algorithm works. We are going to take in the Y, and we're going to take in the code, but the code is just a codebook membership test. So think that, rather than making a very clever code which is co-designed with a decoder, all we're doing here is using the error-correcting code as a mere hash. It's just a hash, just a checking mechanism. We first guess the most likely noise effect — again, not necessarily the noise; we're not interested in the noise, we're interested in the noise effect. After guessing the most likely noise effect, we subtract it, that is to say, invert it; that's what is shown there with that circle with a minus. It's just an inversion of the noise effect — going back to what I said at the beginning, we really just mean that we can invert the noise, that if somebody came and told me what the noise effect is, we would be able to get back the original information. We then check whether the result is a member of the codebook. If it's not a member of the codebook, then we repeat this: we go to the second most likely noise effect, and repeat until we find a member of the codebook, or until we decide that we have made so many guesses that the noise was atypical and it is not worth continuing to try to decode — because as we go through more and more guesses, we're actually increasing the likelihood that we will decode in error. So this is a universal decoder suitable for all moderate-redundancy codes. The complexity here is not in terms of the code; it's just in terms of the noise.
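[Editorial note: as a rough illustration of the procedure just described, here is a toy hard-detection GRAND loop in Python. It is a sketch only: the (7,4) Hamming parity-check matrix, the plain Hamming-weight guessing order (which is the maximum-likelihood order for a binary symmetric channel with flip probability below one half), and the abandonment threshold are all assumptions made for the example, not the hardware implementation described in the talk.]

```python
import itertools
import numpy as np

def grand_decode(y, H, max_queries=10_000):
    """Guess putative noise effects from most to least likely (here: increasing
    Hamming weight), invert each guess, and return the first candidate that
    passes the codebook membership test H @ c = 0 (mod 2)."""
    n = len(y)
    queries = 0
    for weight in range(n + 1):                  # 0 flips, then 1 flip, 2 flips, ...
        for positions in itertools.combinations(range(n), weight):
            queries += 1
            if queries > max_queries:
                return None, queries             # noise deemed atypical; abandon
            candidate = y.copy()
            candidate[list(positions)] ^= 1      # invert the guessed noise effect
            if not np.any((H @ candidate) % 2):  # codebook membership (syndrome) check
                return candidate, queries
    return None, queries

# Toy example: the (7,4) Hamming code used purely as a membership test (a hash).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
transmitted = np.zeros(7, dtype=int)             # the all-zero word is in any linear code
received = transmitted.copy()
received[5] ^= 1                                 # channel flips one bit
decoded, queries = grand_decode(received, H)
print("decoded:", decoded, "after", queries, "queries")
```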
So with this, the idea, in terms of philosophy, is that up until now decoding was really centered around the code — that's the two to the nR possibilities, just as, in effect, was suggested many decades ago by Shannon. Instead of doing that, we are actually going after the smaller problem, the two to the nH. To give you some idea of dimensioning: say that n was around a thousand. You may say, well, this can be very large, but actually H, which reflects the effect of the noise, in a typical terrestrial channel would correspond to a bit flip probability of, say, ten to the minus four. In that case, if ten to the minus four is the bit flip probability, H is also of the order of ten to the minus four, which means that nH is less than one. So the typical number of noise patterns to investigate is actually very small, unlike the number of possible transmissions, possible X's, which is very large — and indeed we want it to be very large, because we want our rate to be as large as possible. When we consider, for instance, the current state of the art, say LDPCs, it is often stated that LDPCs are capacity-achieving, which indeed they are. But if you look at the capacity of, say, current 5G channels — because of the argument I just gave, remember, it's one minus H(N), and the H(N) I just mentioned is around ten to the minus three or ten to the minus four — capacity is about 0.999. If you instead look at the rates that are being used in 5G for LDPCs, they're 0.6, something of that order, sometimes lower, seldom higher. Which means that you're using capacity-achieving codes, but you're not using them in a capacity-achieving manner. So the fact that you're using capacity-achieving codes, given that you're very, very far away from capacity, means that you're actually doing something quite wasteful, and you're not using the codes in the way in which it's generally understood that they are beneficial. Here is just an overview of our hardware setup for the hard-detection chip; I just want to give you a little bit of an idea of what this looks like. This appeared at a solid-state circuits venue, and it won the Best Demo Award when we demonstrated it at COMSNETS earlier this year. What you see here is a chip where we're using the syndrome, which is in effect a multiplication based on that K-by-N matrix that I mentioned before as the construction of the code — it could be random. You can take that matrix and modify it to use it just as a checker, so the checking mechanism is very straightforward: we're only using the syndrome as a check. And you can see here, in this example, some of the possibilities that open up with GRAND, where GRAND stands for Guessing Random Additive Noise Decoding. It really should be guessing random invertible noise decoding, but GRAND certainly sounds more pleasing than GRIND. What you see here is a primary circuit where we're using a very low frequency, because most of the time we have to do very few guesses; only in the rare cases where we have to do more guesses, because the noise realization is more complicated, do we bump up the frequency, and by doing so we're able to be very energy efficient. And to give you some idea, we are able to support all code families: BCHs, Reed-Muller codes, which we mentioned before, CRCs, which we're able to use as error-correcting codes, RLCs, which up until now were not known to be decodable for error correction — again, they are decodable for erasure correction — and polar codes, CA polar codes. It's universal.
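[Editorial note: to put rough numbers on the dimensioning discussed above — n around a thousand and a bit flip probability around ten to the minus four are the figures from the talk; the rate below is an illustrative assumption — a few lines suffice. The point is the one just made: the guesswork over noise effects involves a handful of patterns, while the codebook is astronomically large.]

```python
import math

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p, R = 1000, 1e-4, 0.9           # illustrative block length, flip probability, rate

expected_flips = n * p              # average number of bit flips per block
noise_exponent = n * h2(p)          # log2 of the typical number of noise effects
message_exponent = n * R            # log2 of the number of possible transmissions

print(f"expected bit flips per block: {expected_flips:.2f}")
print(f"typical noise effects:        about 2^{noise_exponent:.1f}")
print(f"possible transmitted words:   about 2^{message_exponent:.0f}")
```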
In comparison to the state of the art, which for hard detection was BCH, what you can see is that our average latency in microseconds — if you look at the third row from the bottom — is much lower. In the last column, you see some work which was done by the group of Warren Gross at McGill University; that is a synthesis of our algorithm, which is why there is no reported latency — it has not been constructed. In terms of picojoules per bit, we are very good, but we're not the top one, although I think we shall have some very pleasing news to report there in about a week; so please stay tuned and go to the webpage that I will mention at the end for some possible news there. But you can see also that the average power is the lowest of all of these — very, very low power, around three milliwatts. So with this, we're able to decode: on the left, you see codes for which there already existed hard-detection decoders; on the right, you see codes for which there were no existing hard-detection decoders. There might have been soft-detection decoders, for instance for CA polar codes; for RLCs, there might have been no decoders available at all. In dashed lines you see the theory, in full lines you see our measurements, and you see that they perfectly coincide. This means that we can do any code. So rather than being restricted to classic codes, such as Reed-Muller, Reed-Solomon, or CRC-aided polar codes, which are restricted in terms of code length, shown here on the abscissa, and in terms of rate, shown here on the ordinate, we can actually use any code that we want — a random code, any old code that we wish; GRAND is able to do them all. The benefits of being able to move to arbitrary codes are quite significant. Let me show you here some theoretical bounds. Here P is the probability of a bit flip: ten to the minus two would be a very high probability of a bit flip — a quite challenged, but not unseen, setting in a terrestrial network; ten to the minus three would be a somewhat challenged, but more typical, setting. And the block error rate here is the probability that there was an error in the decoding. On the top dashed red line, what you see is capacity, that one minus H(N) that we mentioned before, and that capacity would be reachable if the code length, which is n, given there on the abscissa, grew. Please note that we're not starting at zero here, we're starting at one half. So keep in mind the kinds of rates at which people generally operate right now: I mentioned that LDPCs in 5G will be operating typically at around two thirds or one half; you can operate at three quarters or somewhat higher, or, say, maybe in some very rare cases, 0.9. What you can see here, if we look at 0.6 as being pretty typical, is that with LDPCs you're taking the disadvantage of very long lengths — multiple thousands of bits — and coming in at rates which are, say, somewhere around that 0.6, which is really below what you could theoretically do with shorter codes. The full red line is a converse; it means that for those finite lengths you cannot get quite to capacity. You get much closer, of course, at 0.8, 0.9 — you're getting closer to capacity, but you're not going to be able to get quite to it. Those blue lines are basically achievability bounds, but I shall not go into exactly what those achievability bounds are.
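[Editorial note: for readers who want to reproduce curves of this flavor, the finite-blocklength "normal approximation" for the binary symmetric channel is a convenient stand-in for the converse and achievability bounds being described. It is not the exact bounds plotted in the talk; the formula below follows the standard normal-approximation form, and the parameter values are my own illustrative choices.]

```python
import math
from statistics import NormalDist

def h2(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_normal_approx_rate(n: int, p: float, block_error: float) -> float:
    """Approximate best achievable rate R ~ C - sqrt(V/n) * Q^{-1}(eps)
    for a BSC with crossover p, block length n, block error probability eps."""
    C = 1.0 - h2(p)
    V = p * (1 - p) * (math.log2((1 - p) / p)) ** 2   # channel dispersion
    q_inv = NormalDist().inv_cdf(1.0 - block_error)    # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

for n in (128, 256, 512, 1024, 4096):
    print(n, round(bsc_normal_approx_rate(n, p=1e-3, block_error=1e-4), 4))
```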
These achievability bounds were taken from a nice repository maintained by a colleague — he keeps a repository of nice bounds called SPECTRE — so we gratefully acknowledge using those bounds. But what we bring here are those dots, those purple dots. What you can see is that those purple dots are just below the red line and are above even these theoretical feasibility lines. Just to be clear: those blue feasibility lines are not actual constructive mechanisms such that people can construct encoders and decoders; they just say there should exist an encoder and decoder that does that, but it might be far too complex — using the traditional code-centered approaches, far too complex to construct. So here we get those lines outperformed by these dots. And the question is: what are those dots? Well, actually, those dots are just random codes. And when I say random, I really mean random: we just picked them out of a hat, that's what we did. You can see here random codes, CA polar codes, BCHs, with different block error rates for different rates. We can do 0.95, 0.98, 0.9, anything. And the GRAND decoding region — it's not that the codes must be short, and it's not that the codes must be low rate; it is that the overall redundancy, that n minus k that I mentioned at the beginning, not be too high. So here is a very comfortable GRAND decoding region, and we have actually since made it larger. So you can do long codes as long as they're high rate. We can get beyond that by using concatenated versions, which we're not going to talk about today. But if you have, for whatever reason, a need for long or particularly low-rate codes — which, again, we are used to working at low rates because of these LDPCs — again, this is not a need. It is in effect something we have chosen to do, but we didn't have to; we just chose to basically give up a lot of our throughput. And I think it's very interesting in the context of ITU's remit, which was so nicely introduced by Alessia at the beginning: ITU is in charge of assigning spectrum. By using codes at rates which are much, much lower than what capacity would allow, what we're really doing is wasting spectrum in a very severe fashion, with these rate one-half codes and so on. Okay, let's go now to soft detection, because everything I showed you so far is hard detection. Remember, in soft detection we now have some color around these bits; we have some idea of their reliability. Let me show you here, in the IQ domain, a simple constellation: it's just an 8-PSK, eight-ary phase-shift keying. So we shift the phase to go around a circle here. And what we do when we're doing the detection is we have decision regions, which are basically slicing the pie of this 8-PSK. If the received points are far away from the regions in between these slices — those orange regions here — then we are more certain, and the blue regions, which are basically close to the edges of the slices, are the more uncertain ones. How do we take that into account? Well, we're not going to take it into account by trying to describe the noise in a necessarily more finely sliced way. Really, the noise effect is still discrete: either the symbol is what was sent, or it's another symbol, of which there are only a discrete number. The way that we deal with it is that we're going to put that color, that subtlety, that reliability description, into the probabilistic description of the flipping.
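[Editorial note: as a toy illustration of those decision regions, here is a generic 8-PSK slicer written for this writeup — not anything from the talk's hardware. The reliability measure used here, the angular distance to the nearest decision boundary, is just one simple choice.]

```python
import cmath
import math

M = 8                                    # 8-PSK: symbols at angles 2*pi*k/M
HALF_SECTOR = math.pi / M                # half-width of each pie-slice decision region

def detect_8psk(y: complex):
    """Hard decision for 8-PSK plus a crude reliability: the angular distance
    (in radians) from the received point to the nearest decision boundary."""
    angle = cmath.phase(y) % (2 * math.pi)
    k = int(round(angle / (2 * HALF_SECTOR))) % M          # nearest symbol index
    offset = abs(((angle - 2 * HALF_SECTOR * k + math.pi) % (2 * math.pi)) - math.pi)
    reliability = HALF_SECTOR - offset   # small when close to a slice boundary
    return k, reliability

print(detect_8psk(0.90 + 0.10j))   # well inside symbol 0's slice: high reliability
print(detect_8psk(0.92 + 0.38j))   # almost on the boundary between symbols 0 and 1
```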
And therefore that is going to affect how we select the most likely noise effect. So that is where the issue is: in selecting that. You could do it directly from the full soft-value information, but that is mostly good for looking at limits of performance; we don't recommend doing that in hardware. ORBGRAND is a way of using soft information which I'm going to introduce to you, because it's a very interesting, very statistical way of thinking of the noise, but there are other variants. You can also look at just a single bit of noise reliability, or at symbol reliability, which is what symbol reliability GRAND does. Okay. So let's look at the often-used log-likelihood ratio, the LLR of the Y, which is basically the log of the ratio of probabilities if I'm just sending a zero or a one — say BPSK, binary phase-shift keying, sending a plus one for a one and a minus one for a zero. We're just looking at this ratio, which indicates the reliability of a call of a plus one or a minus one, i.e., whether it was a one or a zero. So here, just in additive white Gaussian noise, you can see the different bit positions for a length-500 code, and these are just realizations of that reliability. If we were to look at the full soft information, which is what SGRAND does, and we were to look at how we guess with the full information, this is what would happen. On the abscissa here, I'm just looking at the most unreliable bits — never mind the really reliable bits, it's unlikely I'm going to flip them — let's just look at the first few unreliable bits; here we're just looking at the first dozen. The first line is all white, which means that none of them are flipped. On the second line there is a black rectangle on the first bit: that means that the most unreliable bit was flipped. The next line has a black rectangle on the second bit, meaning the second most unreliable bit was flipped; then the third most unreliable; then the fourth. And then let's look at the next line, where something interesting happens: it is where the most unreliable and the second most unreliable bits are flipped together. What this is saying is that it's more likely that those two unreliable bits were flipped than that there was a single flip of the fifth most unreliable, okay? So sometimes it's more likely that two unreliable bits are flipped than that one more reliable bit is flipped. What that means is that Hamming weight — which is related to the concepts of distance that have dominated the construction of deterministic algebraic codes — is actually not a very good metric. It's not a very good proxy for how you should decode; weight is not that important. And if you look at it again here statistically — if you look at the query numbers of these different possibilities of flips and you look at the correlation, the Spearman's rho, a statistical measure of correlation, between the optimal query number with full soft information and the Hamming weight; so, for instance, two means that there were two bit flips, one would mean there was one bit flip — you can see that the rho is not very good. There is some correlation, it's not zero, but at 0.36 it's not a good proxy. If you go back and look at this and instead take these bits and rearrange them from the least reliable to the most reliable, and look at it from a more statistical perspective, we can see here different realizations — different rearrangements of the sort of picture I've just shown you for one realization — so we can do this for many realizations.
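[Editorial note: to make the point about Hamming weight concrete, here is a small, self-contained sketch; the noise level, the block of twelve bits, and the bitwise independence assumption are all illustrative choices of mine. It computes BPSK log-likelihood ratios over an AWGN channel, ranks the bits by reliability, and compares the probability of flipping the two least reliable bits against the probability of flipping only the fifth least reliable one. Depending on the realization, the weight-two pattern can indeed be the more likely, which is exactly the observation above.]

```python
import math
import random

random.seed(1)

# BPSK over AWGN: transmit +1, receive y = x + noise with standard deviation sigma.
# For each bit, LLR = 2*y/sigma^2, and the probability that the hard decision on
# that bit is wrong is 1 / (1 + exp(|LLR|)).
sigma, n = 0.8, 12
y = [1.0 + random.gauss(0.0, sigma) for _ in range(n)]
llr = [2.0 * yi / sigma**2 for yi in y]

# Rank bits from least to most reliable (smallest |LLR| first).
order = sorted(range(n), key=lambda i: abs(llr[i]))
p_flip = [1.0 / (1.0 + math.exp(abs(llr[i]))) for i in order]

def pattern_probability(flipped_ranks):
    """Probability of the noise effect that flips exactly the given ranked bits,
    treating the bits as independent given the received reliabilities."""
    prob = 1.0
    for rank, p in enumerate(p_flip):
        prob *= p if rank in flipped_ranks else (1.0 - p)
    return prob

print("P(flip the two least reliable bits) =", pattern_probability({0, 1}))
print("P(flip only the 5th least reliable) =", pattern_probability({4}))
```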
So we sort the bit positions from most unreliable to most reliable, and we look at the reliability in terms of the log-likelihood ratio, and we see a very consistent picture emerging. Without worrying necessarily about the probability behind it, but just looking from a phenomenological point of view at what's happening here, we know that the more reliable bits are somewhat less interesting: they're less likely to ever show up as being flipped. Not that it doesn't happen, but that's certainly not my main problem; my main problems are around the unreliable bits towards the left. So how do statisticians think? Well, the way statisticians think — and this is really Ken Duffy's insight — is that they would map this to a line. They would say a good model for this is a line, that is to say that the reliability is somehow linearly related to the sorted bit position: if I order my bits from less reliable to more reliable, the reliability is likely to be linear in the bit position. And in effect, if you look at the log-likelihood ratio — the a posteriori bit-flip probabilities, of course, we know are related to this LLR — and if I look at the probability that the noise effect, some vector N^n, is a particular realization z^n, what we have is that that probability is proportional to an exponential with, in the exponent, minus the sum of the absolute values of the LLRs over the flipped positions. And what's going to happen is that these LLRs are going to be roughly proportional to the bit position — that's what it means to have this linear relationship, if we go through the origin. So the LLR is roughly going to be some proportionality constant, beta, which gives me the slope of the line, times i, the bit position. And if we do that, then rather than having the Hamming weight, which as we saw was not very highly correlated, we are instead going to have a different weight: rather than just the number of bit flips, we have the number of bit flips weighted by their position in the list of reliability of the bits. This is what we call the logistic weight, which is what Ken Duffy introduced in his ICASSP paper last year. This logistic weight then allows us to decode, so we can again have different block lengths and different rates with different block error probabilities. So here, imagine a desired block error probability: it is usually around ten to the minus three, say ten to the minus four, so let's think of this turquoise light green as sort of the desired region. You can see that we can actually achieve this; here it's with an SNR of 9.8 dB for BPSK, so a bit flip probability corresponding to about ten to the minus three. So we see the advantage of going from hard to soft detection — the sort of normal 2 dB advantage. And what we can see here is that, in the same way as in hard detection we saw that there wasn't much difference between different codes, that is actually also the case in soft detection; even more so — in soft detection the difference between different codes becomes less and less.
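[Editorial note: a minimal sketch of that logistic-weight ordering, written only to illustrate the idea — the pattern generator below is a simple integer-partition enumeration, not the scheduling used in an actual ORBGRAND implementation. Candidate noise effects are described by the set of reliability ranks they flip (rank 1 = least reliable bit) and are produced in increasing logistic weight, i.e., increasing sum of the flipped ranks. The first few patterns it produces are: no flips, then the single least reliable bit, the second least reliable, then, at weight three, either the third least reliable alone or the two least reliable together — mirroring the behavior seen in the soft-information pictures above.]

```python
def distinct_partitions(target, max_part):
    """All ways of writing target as a sum of distinct positive parts <= max_part."""
    if target == 0:
        yield []
        return
    for part in range(min(target, max_part), 0, -1):
        for rest in distinct_partitions(target - part, part - 1):
            yield [part] + rest

def patterns_by_logistic_weight(n, max_weight):
    """Yield candidate noise effects as sorted lists of reliability ranks
    (1 = least reliable bit), in order of increasing logistic weight."""
    for w in range(max_weight + 1):
        for parts in distinct_partitions(w, n):
            yield w, sorted(parts)

for w, ranks in patterns_by_logistic_weight(n=8, max_weight=5):
    print(f"logistic weight {w}: flip bits ranked {ranks}")
```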
This goes back to the issue of not needing, therefore, to have standards. One of the reasons for standards was of course to be able to have these dedicated decoders, so that transmitters and receivers can talk to each other; if you have a universal decoder, that need is obviated. The second possible reason for having standards is that there might be some codes which are particularly good, so that even though there is a universal decoder — so that there is no reason at the decoder to have a standard — maybe there would still have been a reason, because some codes were particularly superior, and therefore we could just transmit those codes and not worry about the others. That is simply not the case: codes are pretty much all the same. As a matter of fact, if you had to pick a code, you would pick a CRC — the humble, IP-free CRC. A well-picked CRC has, in our experiments, always outperformed the other codes — just by a hair, but outperformed them. What you can see is that RLCs are not that bad, CA polar codes are okay, but, again, nothing seems to be as good as a CRC. We even did something a little fun here. Remember, as I mentioned, that the best possible channel would be one where the noise placed itself at the end and did not bother anybody, so that you could just leave a bunch of zeros at the end. This is what we did here: we just left a bunch of zeros at the end, and then we scrambled the whole thing by putting it through AES, which is of course not an error-correcting scheme — it is an encryption scheme — but it does a good job scrambling. And by leaving a bunch of zeros at the end and just using AES as an error-correcting code, we again get a performance which is actually very close to designed codes. So, in effect, going back to the lack of reason for having a standard: we don't need the standard to fix the decoder, so there's no reason to pick one so that you have a dedicated decoder; and the second reason, which would be that some codes are naturally much better than others, is also not the case. I should say that this was a surprise to us. We were pleased, when we had our own decoder, to think that maybe we could do code mining and find better codes — I'll talk about that in a minute — but it turns out that there is pretty much no benefit to be had from a really extensive search for codes, which is why doing machine learning for encoding and decoding is also not useful; it's also a waste of resources. To give you some example: here you can see different CA polar codes that follow the 5G standard. What you can see in dashed lines are existing schemes; here we use the CA-SCL scheme, with a list size of 16, that was proposed by Tal and our late, wonderful colleague Alex Vardy, who tragically passed away a few months ago. And you can also see polar codes — interestingly, our performance matches the original capacity-achieving polar codes. Again, as I was mentioning before with LDPCs: just because something is capacity-achieving doesn't mean you want to use it, particularly if you're not using it to achieve capacity. Polar codes are the first provably capacity-achieving codes, and as you can see they don't do very well, which is why they have been replaced by CA polar codes. What we did here is we replaced all the polar bits with just CRC bits, and you can see that that does the best: relative to CA polar codes with CA-SCL decoders, we either do the same, for instance in the case of CRC-11, or we massively outperform, by multiple dB, as in the case of CRC-24. We can actually decide how many bits we have for the soft information — we haven't gone into much detail here, but we can change the number of bits — and I'd just like to point out that even with a single bit of soft information we can outperform CA-SCL with the more typical list size of eight deployed in most chips. So we can do extremely well. Okay, going back to what I mentioned before: sometimes people ask, if really all codes are pretty much as good, what if I use machine learning for constructing the codes, now that you have
this universal decoder? This is not work that we did; this is work which was done by a group in the Netherlands. They went ahead and compared just using a random code versus a state-of-the-art machine learning scheme for building codes, and you can see that they do basically the same, so there's really no point. Okay, so I've shown you a new scheme for decoding. Again, remember: we talk about coding, but we have known how to code since the late 60s, early 70s, from Gallager; what we call the field of coding should really have been the field of decoding. We have shown you backward compatibility with existing error-correcting codes — any moderate-redundancy codes, whether standardized or not — and future-proofing of devices against the introduction of new codes. Again, the introduction of new codes would be far less useful given that pretty much all codes do well; except for a few, like polar codes, very, very few codes don't do well. And this is really going to enable, we believe, URLLC. I actually should have changed this slide — it says code available soon, but we actually did a code drop yesterday of MATLAB code, if you want to play with some of these things. So I encourage you to go to the website granticoder.mit.edu; there's some MATLAB code for GRAND and ORBGRAND, the ordered reliability bits version — that explanation of how to use soft information that I gave you. And with this I look forward to questions. Thanks a lot, Muriel, very nice talk, as always. I'm trying to get some questions from the audience here; unfortunately there's nothing so far, but I will start the Q&A session, and then I hope some people will enter their questions. So, again, thanks a lot for the nice presentation. Now the issue is the following. As you know, we are getting all these use cases with very high ultra-reliability and also low-latency requirements, meaning the computational time is a problem, especially when you do the decoding at the end. And also, some of these devices — when you look at the IoT, for example — are very resource-limited machines, and most of the FECs are heavyweight, as you know. Like 20 years ago, we worked on some LDPC and also BCH codes to show they work for the IoT, and now we have use cases with more strict latency requirements. How do you see it: shall we continue with the classical FEC approaches, or shall we reset our minds and come up with something in a totally different direction? Because we have all these new contributions, like intelligent surfaces, that may help us to somehow tackle some of the wireless channel problems. So what do you think about this?
Great, great questions. I think that certainly considering much simpler codes is useful — CRCs are about the simplest thing you can do to code. I think that for the low energy and low latency that you mentioned, Ian, this is a particularly good choice. If you see here, our power is very, very low — it's about half — but most importantly our latency... let me see if I can maybe just share; I'm sorry, I was going to annotate and I am having trouble finding the annotation button. That's okay. But if you look here at the third row from the bottom, you can see our latency is one microsecond, which is far better than the state of the art in hard detection, and the power, which is the seventh row from the bottom, the milliwatts row, is also the best. The decoding is very, very fast and is very, very simple when it is implemented correctly. That's really good to see, but you also have this family of codes, right, like BCH? That's right, all of these here. Right — it doesn't really matter which code we use; basically, all that really matters for the code is the amount of redundancy, n minus k. This is really too good to be true, in my opinion. It's really good — I mean, if the results are like these, it's fantastic — but I assume these are only the delays, the latency added by the FEC codes, right? Like, for example, the 1.04 is just the decoding time, right? So this is the decoding time. Correct — it's a decoding chip, so that's all you can measure, right? You measure it going in and you measure it coming out; it adds to the overall end-to-end latency. But it's really good, you know, fantastic. Now, here's another question that I would like to ask — I'm still waiting for some questions... I think we have some questions in the Q&A. Yeah, there is one, but I think we already discussed this, from Madhu Sathana. That's right, yes: is it possible to reduce latency using an Arduino? Oh, Arduino, yeah, okay. So that's a good question. We have not implemented this over an Arduino; we went directly for the chip. And I think this is a good question. First of all, the code is fairly simple, right? Obviously, even simple things require care and cleverness in how you implement them, to be done in an energy-efficient and rapid fashion. It's also very parallelizable. So I think that the latency reduction would be a very interesting aspect, and doing it over, in effect, an SDR or an FPGA... The reason we went for the chip is that, in a way, that is the ultimate test of a technology, right? I'd also like to point out that we did this over 40-nanometer technology; we did not push the technology at all, so the results that we're getting are really because of advantages in our algorithmic thinking, not because of advantages in the underlying manufacturing processes that we're using. Here's another question, by Marzia Hashemi-Poor — I told you in the beginning we have worldwide participants; I'm not sure where he or she is from — and the question is: you mentioned URLLC (actually I mentioned it, but then you talked about it, right). Actually, RM codes under the RPA decoder have shown very good performance and can be a candidate for MTC and URLLC. I'm wondering how GRAND for RM codes for such applications compares to RPA. Thanks for the question, by the way; if you have more questions, please enter them in the Q&A. Thank you. Yeah, so I think pretty much all codes do the same — I think that's the bottom line; I don't think we should be fixing encoders and decoders, is the bottom line. I didn't talk about
issues of fading or clumping of errors. Okay, we have work where we show that if you have correlated errors, we can actually do even better, because we can take the correlation into account in the guessing process for the noise. And why am I bringing that up? You know, when you have just random bit flips, or you've done interleaving, pretty much all codes are the same. When you no longer have that randomness of bit flips, or you don't do interleaving — and by the way, interleaving is very costly from a latency point of view, so one should avoid interleaving down the road; we actually have a recent paper whose title is "ditch the interleaver", so I'll let you guess what the message of the paper is — and the reason I'm bringing this up is that Reed-Muller codes are pretty fragile on bursty noise. So I don't know about the RPA decoder — remember, our decoder is optimum, so I don't know what the RPA decoder does, but since we're optimum, presumably we do no worse; remember, we're provably optimum — but I would be hesitant about going to Reed-Muller codes, particularly for anything short; they don't do well on bursts, that's what we've seen. Does that answer your question, Marzia? I hope so — he or she can answer. Yes. Yeah, so, you know, all codes are the same under sort of white noise; when you get to bursty noise, CRCs and BCHs are very robust; Reed-Muller codes have issues. There was a question, or some opinion, by Hashem al-Bakoury; he has already... Yes, Hashem, lovely, yeah. Yeah, he says the link doesn't exist; I don't know which... The GRAND decoder, the GRAND decoder link. Oh, maybe, yeah, you mentioned that link. Yeah, it should exist — we went on it recently — but I will check right after this talk, and apologies if you're having trouble finding it; it does exist, and I will double-check to see that it's not down, but I'll ask you to give me a second there. Here's one more question from me, Muriel. As you know, in the last couple of years the direction of communication research has been going more and more towards semantic communication — especially semantic errors, as I call them. You know, our classical way of error control addresses syntactic errors, right, ones and zeros; now we are going towards semantic meaning, using, you know, logic or first-order logic, etc., and also ideas from linguistics. What do you think — do we have to go in that direction? Because for machine-to-machine communications, these classical error control schemes will not be enough. So what do you think, how can we combine all of that, like channel and source coding — they already do that, right; by the way, this is not new, the combination of channel and source coding was done many years ago, but people are rehashing it from the semantic perspective. So what is your opinion about the direction of the research? Yeah, I think it's a very interesting direction. I think that it's a little difficult to generalize, just because, again, once it becomes semantic, it becomes very context dependent. So, in a way, the reason why it's useful is one of the reasons why it's hard to speak about it in generalities. I think that the general philosophy of being more noise-centric is, if you will, inherently useful — exactly how that philosophy plays out, and looking at things from a more statistical perspective, rather than... you know, what we've done up until now is we've taken very, very
simple models of noise and then over-designed to them — in a way, we've designed with more care than the models deserved. I think that if you look at this approach here, just fitting with a line — and by the way, we have work where we show that you can fit with multiple lines to get better performance; you can fit to two lines, as you can imagine, right, you can do piecewise linear, and of course you don't have to go through the origin, you can shift, you can have a piecewise linear approximation — I think that this is philosophically a little closer to the kinds of statistical models that are required for semantic communications. So I think that the philosophy, in a way, is somewhat similar; the exact realizations then really become very, very specific to the context. You know, it would be good to come up with something mixed, like syntactic and semantic, and try to address the problem from both ends. That would be especially for machine-to-machine communications: as humans we can always try to figure out what's going on, but machines, robots, and such things need to not only check these classical ones and zeros, but also understand the semantics. Yes, absolutely. No, no, I was agreeing with you. You know, the thing is, for the time being, what I see in the literature is, again, this boxing — they say, oh, we just look at the semantic, but then they somehow skip the syntactic one. So somehow they should mix them and try to come up with something much more powerful. Yeah, I mean, I think that if you look at it from just, let's say, a pure mathematical perspective, then what happens here — going back to this picture, okay — is that your maximum likelihood is no longer your maximum a posteriori. That's what happens; that's what semantic means, right? You no longer have that ML and MAP are the same, so you can still take the noise into account in computing the MAP, but then you have a different a priori. That's really what it is; that's the difference. Okay, Muriel, thanks a lot. Unfortunately no more questions, and I will ask Alessia to take over. Thank you — we'll talk about certain things again; thanks a lot, really appreciate it. Thank you, we'll be in touch. Thank you a lot, Ian, for moderating the questions, and thanks a lot, Professor Médard, for this very informative talk. So now we move to the Wisdom Corner: Life Lessons, which is based upon the idea of giving a unique, special angle to this webinar series, adding a personal touch, so that successful researchers like you can guide students and young scholars in the field of current ICT research. So I would like to ask you a first question: which hard-earned life lessons or failures would you like to share with us today that might help somebody attending this webinar? That's — I think that's such a good question. I'll point to something which was started by a set of students at MIT, a wonderful, wonderful project named FAIL! — it's F-A-I-L, you know, with an exclamation mark — and they ask people whom other people deem as successful to come and give some of their most cringe-worthy failure stories. And it was funny, because I was a speaker at the first FAIL!, and I started by saying, you know, I was asked — I was very happy the organizers asked me — and then they said I had been highly
And then I started getting a bit worried: how do you get to be highly recommended for that? That's great. I think that very often younger researchers get discouraged because there's a lot of bias in reporting, in the sense that people report their successes. So you see the successes and you extrapolate; you say, well, whatever I didn't see in between must have been fantastic too. And the answer is, it wasn't. First of all, there were a lot of failures. Second of all, we have this vision that, because very good work often does eventually get recognized — though not always — somehow the recognition was fairly rapid, that people were immediately pleased to see something new and different that challenges the status quo. That is not at all the case. The more new and challenging it is, the more people are going to tell you that you're wrong, that you didn't think of this, that you didn't think of that, that it's too complex. You really have to persist. GRAND is a good example: I had people tell me, "oh, but it won't work for that," and I show them a curve — yes, it does. "Hmm, but it won't work for something else." I'm like, try it out. So definitely don't get discouraged, and if you really think that something works, just keep at it. I had very much the same experience with RLNC, random linear network coding, where, as I mentioned, the decoding is very easy because it's just Gaussian elimination. The number of people who told me, "I'm sure you can do better with routing," or "I'm sure..." — even when I had proofs that it was optimal, I would still get a lot of pushback. So what's the lesson to come out of this, other than persistence? You're not going to be able to convince people who are against something; you can't. All you can do is make sure that they're not listened to. I used to spend a lot of time trying to convince people, and then I realized it's really a lot of work and most of the time it doesn't work; occasionally it does, but it's not worth it. Just keep doing your thing, show what you did, do it well, and eventually the people who are open-minded will convince themselves, and the people who are not open-minded you could not have convinced anyway. So be less worried about that. Like I said, the people who are smart enough and intellectually honest enough to come around will come around eventually; they may come to you with questions, but that's different — of course you should answer questions. But the people who are just too closed-minded, or not sufficiently intellectually flexible, are not going to get it, so don't worry about it.

Yeah, very clear, thank you. Well, the second question is linked to what you have already said: which strengths and capabilities in particular do you think students and young scholars should be most focused on developing, and how would you suggest that they accomplish this?
Yes, I think that one of the things that's really useful for young scholars is to spend some time figuring out where their strengths are — it's what we sometimes call the sweet spot. Some people can be really good at going very deeply into something very narrow and making a big contribution there; some people are better at doing synthesis across areas and being cross-disciplinary. It's really part of how one is. I think it's the same as if somebody were considering, say, becoming an athlete: you're going to have to work hard, you're going to have to train, but you should look a bit at your own preferences and your own body type to figure out what kind of sport you may be good at. It doesn't mean that you can't be an excellent athlete at a particular sport; it just means you really have to choose. We understand that when it comes to being a very high-performing athlete, but when we think of being a very high-performing engineer, maybe we don't spend as much time on it — we give more generic advice. So spend time figuring out your research type: where do you find that you can really do very well, what things can you understand, what connections come to you naturally, what things do you naturally become very interested in doing?

So, sure, discovering your passion, your talents, and also your style. Your style, right: you could be a runner, but you could be running the 100 meters or you could be running a marathon — it's not the same type. Absolutely, sure enough. And in which fields, and on which topics, would you recommend students to study nowadays?

I think going in with a very open mind, and actually just gathering a lot of different tools. So I encourage students — maybe not particular topics, but classes to take: take math classes, take computer science classes, understand how algorithms work, feel like you have a lot of tools, because it's really hard to tell what's going to come. Look, when I was doing my PhD, my advisor told me, "okay, you want to do wireless, but wireless is dead." And I was like, yeah, that's okay, I don't care. My mother taught ancient history; "dead" doesn't scare me. So it's dead — do I have to disappear? It's okay; I find it interesting. If it works, it works; if it doesn't, I'll find a job doing something else. I'll have a PhD in engineering from MIT, I'm a hard worker, somebody's going to hire me; I'll do something else, it's okay. And, you know, that was not the case. I mean, the number of times I heard that coding was dead — dead, dead, dead, dead as a dodo. To some extent that's a bit what I'm saying here, but it's different, because decoding is alive. So don't box yourself in, look around, get a lot of different skills. And when Ian was giving his very kind introduction: I have a degree in math as well as a degree in engineering, and I also have a degree in literature — like, who uses that? And you do, you do, because you use it in how you write, how you think. Learn to think, and then most likely good things will happen. Yeah, I fully agree, because I
have a degree in humanities as well, and you don't see that a lot, even in the IT environment, which is pretty technical, I must admit — it really exercises your thinking. Yes, yes, I agree. I would like to ask you whether you can tell us one of the most tangible contributions that you have made in your career — one that had a direct impact on your life or on others' lives — that you are most proud of.

That's a good question. On the technical side, I'm very proud of the work I've done with random linear network coding; it's now being deployed. Companies such as Barracuda, one of the main software-defined wide area network providers, have deployed it, and there was recently an announcement from Cradlepoint, which is the SD-WAN branch of Ericsson, and many others. That was a huge amount of work, and to see that tech transfer — Ian mentioned in his introduction my being chief scientist there — that was a really tough slog. And again, talking about people telling you that you can't do it and having to keep doing it: that was a really, really tough slog. So I'm really proud of that. On the general side, the mentoring is something where I think everybody can contribute, no matter what the area. You mentioned the mentoring award I received for mentoring high school students and the one from our graduate student association; I also recently received, from the MIT postdoctoral fellows association, their inaugural mentoring award for postdocs, and that meant a huge amount. Those changes that you can make in somebody's career, particularly when you're working with junior people and you're able to guide them — we're engineers; it's like giving a small correction in direction early on, and it makes a huge difference in the trajectory. So I think those are very important.

Excellent. Actually, I was impressed by that when I read your bio. And not everybody can mentor — by the way, we also have a mentorship program at the ITU, and I have had some experience as a mentor; not everyone can do it. Wonderful. As I was saying, I was very impressed reading your bio, but your professional path also includes outstanding and dedicated work with students and young scholars — those are the words that Ian mentioned as well — for instance, the MIT graduate student association mentor award in 2013, which you were awarded by the students. That impressed me; it sends a strong message. I'm sure the students learned a lot from you, but I was wondering: is there anything that you think you learned from them?

Oh yes, absolutely — I learned a ton from them. I think that very often, particularly when people have big questions, it forces you to ask yourself as well: what is the essence of what you're trying to get at? And then you say, wait, did I think of it that clearly myself when I was making my own choices? So it definitely is very, very helpful. And particularly when people are making professional choices, it helps to realize that successful professional choices are the ones that are going to make them happy and successful, and that is very different from person to person. It also, I
think, makes you feel more empowered to decide to do certain things, or not do them, because you think they're important or not important, regardless of what other people think — because when you're giving that advice, you sort of feel that maybe you should follow it yourself as well. So absolutely, you learn a lot.

Wonderful, thank you. I have a last question: is there a motto, an aphorism, a book, a movie, a piece of art or music that you believe describes you and that you would like to share with us? I think, if you asked my family, they would tell you what I always say, and I usually say it in French: "on fait ce qu'on peut," which is "one does what one can." And I really feel that that's it: we do what we can, and then we just try to do a good job with it. That's probably my most-used aphorism. Nice, very nice. Oh, thank you so much, really, thank you — and if you want to come back with us on stage? I'd love to do that again.

I would like to express my sincere thanks to you, Muriel, and I think we should also mention again, as I already did, that one of the best parts of your life is that you raised so many kids and you have grandkids; it's really a fantastic role model for many young ladies, so it can be done. Again, thank you for your excellent talk and conversation; you always deliver what we expect — like a couple of months ago, when you gave another talk, remember, for BalkanCom. Yes, yes. Hopefully we'll meet this coming year. Have a nice time, merry Christmas, and happy holidays. Thank you. Thank you, everyone; it was a pleasure to have you at the ITU Journal webinars, and thank you, Ian, for your outstanding contribution. Thank you, everybody. This is the last webinar of the series that we have this year, so I look forward to seeing you all online again next year for our next series. Thank you again, everybody, and thank you to my team — thank you, Erika. Thanks, everyone, take care, bye.