introduction. So I'm going to talk about continuous non-malleable codes for a change. It's joint work with Divesh, Nico, Jesper, and Erick; I'm the youngest one on the paper. We'll start with some introduction to NMCs. We have to go through it because the setting is slightly stronger than in the previous talk. It's a game-based definition: the adversary picks two messages, M0 and M1, one of them gets selected at random and encoded, and the adversary gets to tamper with the codeword. He then receives one of three possible outputs from his tampering. He gets "error" if the tampered codeword is not valid; that's exactly the same as in the previous talk, but now there will be some differences. He gets the special symbol "same" if and only if the codeword he's tampering with happened to be a fixed point of the tampering function, so he gets "same" only if the tampering did not change anything. In any other case, and this is where it's stronger, he gets the whole tampered codeword. So if the tampering is valid and did change something, then the adversary gets the entire codeword. This is a super-strong experiment; it's the strongest possible experiment. And the goal of the adversary is, obviously, to guess the message.

Okay. This notion has a continuous variant, which is quite similar. It looks very much the same: the message gets encoded, the adversary tampers, he gets one of the three outputs, and then he gets to tamper again and again and again. He can continue tampering as long as he's not getting "error"; once he tampers to an invalid codeword, that's the end of the game, and he has to make his guess.

Right. Two notions are important in the context of continuous NMCs: one is persistent, the other is resettable, or non-persistent. The first tampering looks pretty much the same in both, but then there's a difference.
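The output rule of the super-strong experiment can be sketched in a few lines. This is my own toy illustration, not the paper's scheme: the "code" here is just a parity check, standing in for an arbitrary coding scheme.

```python
def decode(codeword):
    """Toy validity check: the last symbol must be the parity of the rest."""
    *body, check = codeword
    if check != sum(body) % 2:
        return None                  # invalid codeword
    return tuple(body)               # the "message"

def tamper_oracle(codeword, f):
    """One round of the super-strong experiment, given tampering function f."""
    tampered = f(codeword)
    if tampered == codeword:
        return "same"                # fixed point of the tampering function
    if decode(tampered) is None:
        return "error"               # invalid: the game ends here
    return tampered                  # valid and changed: the WHOLE codeword leaks

c = (1, 0, 1, 0)                     # a valid toy codeword (parity checks out)
assert tamper_oracle(c, lambda x: x) == "same"
assert tamper_oracle(c, lambda x: x[:-1] + (1 - x[-1],)) == "error"
assert tamper_oracle(c, lambda x: (0, 1) + x[2:]) == (0, 1, 1, 0)
```

The three assertions are exactly the three cases: identity tampering gives "same", breaking the parity gives "error", and any valid change hands the adversary the full tampered codeword.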
In the persistent model, the second tampering is applied on top of whatever the first one produced. That's not a problem if F1 was a bijection; if we're only tampering with bijections, it doesn't matter. But if F1 was a constant function, then you're done: you've lost all information about the codeword, and every subsequent tampering is a deterministic function of whatever you did with F1. So this notion is kind of weird as a continuous model. The resettable notion is, I would say, slightly more natural: every time you tamper, you tamper with the original codeword again. Obviously, the first model is strictly contained in the second one; the second one is strictly stronger.

In the context of the 2-split-state model, a persistent CNMC was given two years ago at TCC. But it's a much weaker model. What we showed is that, in the 2-split-state model, all you basically need is one super-strong experiment, and persistent continuous security follows immediately. That's a somewhat painful and actually quite long proof, but it should show that the models are strictly different.

Now, what's split-state? I mentioned it a little already: the codeword is split into two or more pieces, in this case two, and each part of the codeword is tampered with separately.

Okay. So, some impossibilities; then I'll talk about learning, and then the construction. First of all, it's impossible to construct a CNMC in the 2-split-state model. And here, again, comes the difference between persistent and resettable: in the persistent model it is indeed possible to give a construction in the 2-split-state model; in the resettable model, it is not. The model is that much stronger that you get an impossibility. So we are not able to construct CNMCs in the 2-split-state model; we need more than two states. That's why we use eight.

The impossibility goes as follows. We've got two states, L and R; say the first bit of L is b1.
Now, because it's the 2-split-state model, the code has to be a secret sharing. And if it has to be a secret sharing, it's possible to pick three vectors, L0', L1', and R', such that (L1', R') decodes to 1 and (L0', R') decodes to 0. The tampering is split-state, so when we're tampering with R, we only see R. But that's not a problem, because we just overwrite it with R'. And when we see L, we overwrite the whole of L with L1' or L0', depending on b1. What do we get? First of all, a valid codeword, because we picked the vectors that way. Second, it's a valid codeword that decodes to 1 if and only if the first bit was 1, and decodes to 0 if the bit was 0. So we can learn the first bit. We will not get the answer "error", and we will not get the answer "same"; if we did get "same", we'd be done immediately anyway. So we learn the first bit, then we proceed to learn the second bit, the third bit, and so on. And then we swap roles and start learning the bits of R.

Right? This impossibility illustrates a general problem we have with continuous non-malleable codes: the adversary can learn in very tiny increments, and suddenly he has learned the whole codeword and breaks everything. And learning is unavoidable. Think of it this way: at the beginning you've got the set of all possible codewords. You don't know anything; every codeword is possible. Every time you tamper, you get an answer. Say in the first tampering you got the output "same"; that means the codeword that is actually encoded is a fixed point of your tampering. That restricts your set. The next answer restricts your set even more, and so on. And that's exactly what happened in the impossibility result: we started with the set of all possible codewords, and with every step we halved it and halved it and halved it.

Okay. So now I'll give the construction. We are encoding a message M.
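The bit-by-bit attack above can be simulated end to end. This is a toy model of my own, not the paper's setting: XOR secret sharing stands in for an arbitrary 2-split-state scheme, and in this toy code every pair (L, R) is valid, so "error" never occurs; the point is only that each split-state tampering leaks one bit through the decoded output.

```python
# Toy simulation of the 2-split-state impossibility.
import random

K = 16                                      # message / state length in bits

def encode(msg):
    L = [random.randrange(2) for _ in range(K)]
    R = [l ^ m for l, m in zip(L, msg)]
    return L, R

def decode(L, R):
    return [l ^ r for l, r in zip(L, R)]

MSG0, MSG1 = [0] * K, [0] * (K - 1) + [1]   # two fixed reference messages

def learn_left_bit(L, R, i):
    """Split-state tampering that leaks bit i of L via the decoded output."""
    R_p = [random.randrange(2) for _ in range(K)]
    L0_p = [r ^ m for r, m in zip(R_p, MSG0)]   # (L0', R') decodes to MSG0
    L1_p = [r ^ m for r, m in zip(R_p, MSG1)]   # (L1', R') decodes to MSG1
    # fL sees only L, fR sees only R: the tampering really is split-state.
    tampered = (L1_p if L[i] == 1 else L0_p, R_p)
    # The super-strong oracle hands over the whole (valid, changed) codeword,
    # and its decoding reveals the bit.
    return 1 if decode(*tampered) == MSG1 else 0

def learn_right_bit(L, R, i):
    """Symmetric tampering that leaks bit i of R."""
    L_p = [random.randrange(2) for _ in range(K)]
    R0_p = [l ^ m for l, m in zip(L_p, MSG0)]
    R1_p = [l ^ m for l, m in zip(L_p, MSG1)]
    tampered = (L_p, R1_p if R[i] == 1 else R0_p)
    return 1 if decode(*tampered) == MSG1 else 0

secret = [random.randrange(2) for _ in range(K)]
L, R = encode(secret)
learned_L = [learn_left_bit(L, R, i) for i in range(K)]
learned_R = [learn_right_bit(L, R, i) for i in range(K)]
recovered = decode(learned_L, learned_R)
assert recovered == secret                  # the adversary learned everything
```

Each query learns one bit, so after 2K rounds of the resettable game the adversary holds the entire codeword, which is exactly the learning problem the talk describes.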
We add some 0^k padding, and we use the standard trick: the inverted non-malleable extractor. We obtain three states, uniformly random subject to the condition that if you plug them into the non-malleable extractor, you get this. Okay, that's three states; five more to go, if I can count correctly. The next three are obtained from any valid encoding: we plug in zero and something uniform, and again use the inverted non-malleable extractor. Two more states to go. This is going to be the crucial state: it's basically the inner product of the X vector and the S vector. And the last state is this, which is basically a trace function, so it's an inner product over a smaller subfield. It's important that the last state is a function of the previous one, but that's only for technical reasons, which I think one could overcome at some point. But whatever; this state will be crucial, so keep an eye on this one.

I will talk about two games. We reduce the security of our CNMC to a certain game: we show that if you could break the CNMC, then you could win the game, and then we show that you can't win the game. Not surprisingly. I will not talk about the reduction from the CNMC to the game itself; it should be somewhat intuitive if you look at this kind of expression, but I will not give the concrete reduction. If you're interested, we can talk offline. So, to the games. First, I'll introduce a toy game, which is a significantly simpler version of the real game, but it conveys the idea. So let's get to it.

The toy game is as follows. We start with a square, and the player can cut the square: one horizontal cut, one vertical cut. Then he has to pick two fields that oppose each other, so either this pair or this pair. Say he picked those two. Now the game, not adversarially, but completely uniformly at random, picks a point.
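The "inverted extractor" trick alone can be sketched as follows. This is a toy of my own: `toy_ext` (an inner product over a small prime field) merely stands in for a genuine non-malleable extractor, and the rejection-sampling inversion is only viable at toy scale; the real construction inverts a real nmExt and produces eight states.

```python
# Toy sketch of sampling states "backwards" through an extractor.
import random

P = 101                                   # small prime field, toy-sized
N = 4                                     # vector length

def toy_ext(x, s):
    """Stand-in extractor: inner product over GF(P)."""
    return sum(a * b for a, b in zip(x, s)) % P

def invert_ext(target):
    """Sample (x, s) uniform *conditioned on* toy_ext(x, s) == target.
    Rejection sampling: fine for a toy field, not how you'd do it at scale."""
    while True:
        x = [random.randrange(P) for _ in range(N)]
        s = [random.randrange(P) for _ in range(N)]
        if toy_ext(x, s) == target:
            return x, s

msg = 42 % P
x, s = invert_ext(msg)                    # states encoding the message
assert toy_ext(x, s) == msg               # plugging them back in recovers it
```

The point of the trick is exactly this conditioning: the states look uniform individually, yet jointly they determine the message through the extractor.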
If the point is outside the player's fields, that's the end of the game. If the point is inside one of the player's fields, we continue the game within that smaller field: you get to cut it again, pick two fields, we again select a point, and so on. The goal of the game is to end up in a small enough area: there is some threshold ρ, and the player should not be able to end up in an area smaller than ρ. It's somewhat intuitive that you cannot continue this game for very long, but stay with me; I'll give a maybe slightly too complicated proof of it, but it's kind of cool, and it's important for the later game.

Because the choice of the point is not adversarial, the player can reveal his whole strategy immediately: if the point ends up in this zone, I'll cut this way; if the point ends up in that zone, I'll cut that way; and so on. Indeed, you can look at the resulting checkerboard and mark the player's fields: you select, for example, this field; obviously, you cannot pick anything else in the same column or the same row; you pick another field, and so on. Now I'll show that you indeed cannot win this game. Let us denote the fields A_1 through A_n, and think of the a_i as fractions less than one: a_1 is the ratio of this field's area to the area of everything, a_3 is the ratio of that field's area to the area of everything, and so on. And again, the goal is to get an a_i below ρ. The key lemma is this, and in this case you can get it from Cauchy-Schwarz: the sum of the square roots of the a_i is at most 1. How do we use it? Assume you actually want to win this game, so you cut in such a way that each a_i is less than ρ. What's the probability of winning, or surviving, this game? Well, it's the probability that the randomly selected point ends up in one of your fields, and that's exactly the total size of the fields.
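The structural fact just described, that no two of the player's fields share a row or a column, is what drives the key lemma: field i then has its own column width w_i and row height h_i, with the widths summing to at most 1 and likewise the heights, so Cauchy-Schwarz gives Σ√(w_i·h_i) ≤ √(Σw_i)·√(Σh_i) ≤ 1. A quick numeric sanity check of that inequality (the random parametrization is my own):

```python
# Numeric check of the key lemma for fields occupying disjoint rows/columns.
import math
import random

def random_fractions(n):
    """n nonnegative values summing to at most 1 (random cut points of [0,1])."""
    cuts = sorted(random.random() for _ in range(n))
    return [b - a for a, b in zip([0] + cuts, cuts)]

for _ in range(1000):
    n = random.randrange(1, 8)
    w = random_fractions(n)               # column widths of the player's fields
    h = random_fractions(n)               # row heights of the player's fields
    areas = [wi * hi for wi, hi in zip(w, h)]
    # key lemma: the sum of square roots of the area fractions is at most 1
    assert sum(math.sqrt(a) for a in areas) <= 1 + 1e-9
```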
So the sum of the a_i is the probability that you survive this round. Or we can write each a_i as √a_i · √a_i, nothing clever; now substitute one √a_i by √ρ, and use our key lemma, getting √ρ. So the cool thing is that if the threshold is small enough, and in particular if it's negligible, then the probability of winning this game is negligible.

Okay. So, now to the real game. The real game is a little more complicated. It's not played on a square; it's played in a six-dimensional cuboid, although we prove it for any number of dimensions. The six dimensions are split into groups: three dimensions are, say, left, and three are right, and all the dimensions are ordered. It will be played in six-dimensional cuboids, but it's kind of hard to draw those, so we'll make do with three dimensions. Say we are looking at the three left dimensions, nicely ordered: first dimension, second dimension, third dimension. What the player can do is make cuts. In the first dimension, say, he makes two cuts; it's not a point cut, he puts a hyperplane through each cut point, so this cuts it into three cuboids. Then he gets to label each of the pieces with some nonzero element of the field from the construction. Okay. So, once he did that: say in the second dimension he decides not to cut at all and just labels everything 42, and in the third dimension he makes one cut and labels the pieces 3 and 2. Now, if you run the hyperplanes through all of the cuts, you've got six cuboids, and mind that each cuboid has a three-dimensional vector assigned to it: for example, this cuboid has the vector (3, 42, 3), while that cuboid has the vector (7, 42, 2), and so on. So what the player has done is assign a vector to each cuboid he created.
Now, if he does the same with the three right dimensions, then in the six-dimensional cuboid, every piece he has created has two three-dimensional vectors assigned. Right? Okay. So that's what the player does; that's how he cuts and labels. Now the question is how he selects his fields. He selects them by picking one field element c, and all the cuboids whose assigned vectors have inner product equal to c are his. We have to have some mechanism to prevent him from picking the whole thing, okay? And the game goes the same way as the toy game: a point gets selected; if it falls into a cuboid of the player, then we continue within that cuboid; again, you can cut, label, and pick cuboids, and so on. The goal is again to end up in a small enough cuboid.

So, before we get to why this game is hard, a small intermission and a few remarks. First of all, the toy game is a special case of this game: we get the toy game if we limit ourselves to two dimensions instead of six, one cut per dimension, and only the labels 1 and −1. Why? Because if you've got labels 1 and −1 in each dimension, then the inner product, in this case just the product, is +1 on one diagonal and −1 on the other. That corresponds to picking fields that oppose each other. Right? So that's how you recover the toy game. But the other direction is not quite true, because if you remember the toy game, once you picked a field, there was nothing else of yours in its column or its row. In this version, if you look at one of the player's fields and take the hyperplane cut through it, the player is indeed able to own multiple fields within that hyperplane. Not all of them, but multiple.
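The selection rule can be made concrete on a small example. This is my own toy parametrization (specific labels, a tiny field, and I count cuboids rather than weight them by volume), purely to show how label vectors and the chosen field element c carve out the player's cuboids.

```python
# Toy sketch of the cuboid game's inner-product selection rule over GF(7).
from itertools import product

P = 7                                       # toy-sized prime field
c = 3                                       # the player's chosen field element

# Nonzero labels for the pieces along each of the three left / right dims,
# e.g. the first left dimension is cut into two pieces labeled 3 and 5.
left_labels  = [[3, 5], [4], [3, 2]]
right_labels = [[1, 6], [2, 5], [4]]

def label_vectors(labels_per_dim):
    """One label vector per cuboid: a choice of piece in every dimension."""
    return list(product(*labels_per_dim))

owned, total = 0, 0
for vL in label_vectors(left_labels):       # left 3-dimensional vector
    for vR in label_vectors(right_labels):  # right 3-dimensional vector
        total += 1
        if sum(a * b for a, b in zip(vL, vR)) % P == c:
            owned += 1                      # this cuboid belongs to the player

assert 0 < owned < total                    # some cuboids, but not all of them
```

In this instance the player owns 2 of the 16 six-dimensional cuboids; changing c moves which cuboids are his, which is exactly the extra freedom that makes this game denser than the toy game.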
So this is a much denser version of the toy game, and that will be reflected in the next slide: it's actually easier to win this game than the toy game, but it's still impossible if the threshold is very low. So, the key lemma is this: instead of 1/2 we've got 5/6 in the exponent, which comes from the dimension. And the proof goes very much the same way. The survival probability is the probability that the point lands in one of your cuboids; you split each a_i the same way, substitute using ρ, use the key lemma, and you get ρ^{1/6}. Right? So it's easier to win this game. And in T dimensions it goes the same way, and you get ρ^{1/T}.

Okay. So, a few final remarks. First of all, nothing here is really a cuboid; it's only an epsilon-cuboid. Nothing is independent; it's only epsilon-independent. And suddenly, in this setting, that's a problem: if you've got an epsilon-cuboid, not a cuboid but an epsilon-cuboid, and you don't cut it but epsilon-cut it, then what you get is not an epsilon-cuboid but some epsilon'-cuboids, and those epsilons want to accumulate very fast. So you have to control this on top of everything. So you've got the reduction to the game, and on top of it a non-trivial part of playing with the epsilons.

And then we use this reduction to the game to show that the adversary cannot learn too much about the codeword. But that's not the whole story. Because if I've got a codeword and I know that the first bit of the encoding is zero, I know only one bit of information, but it may be exactly the bit that completely kills my application. Right? So, on top of proving that the adversary cannot learn too much, we have to take care of the quality, or the shape, of his knowledge. I say shape because we are talking about the set of possible codewords: we control its shape, and we show that the adversary only ever learns sets of certain shapes.
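Written out, the survival bound the slide computes, assuming the key-lemma exponent in T dimensions is (T−1)/T (matching the stated 1/2 and 5/6 cases for T = 2 and T = 6):

```latex
\Pr[\text{survive a round}]
  \;=\; \sum_i a_i
  \;=\; \sum_i a_i^{1/T}\, a_i^{(T-1)/T}
  \;\le\; \rho^{1/T} \sum_i a_i^{(T-1)/T}
  \;\le\; \rho^{1/T}
```

using $a_i < \rho$ in the middle step and the key lemma $\sum_i a_i^{(T-1)/T} \le 1$ in the last one. For $T = 2$ this is the $\sqrt{\rho}$ bound of the toy game; for $T = 6$ it gives $\rho^{1/6}$.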
That gives the reduction, via a leakage argument. Okay, and we can skip that. Ah, yes. Thank you. Any questions?

"What do you use the field trace for?" The problem is this: if the tampering is close to bijective, it's taken care of immediately by the non-malleable extractor, because of its properties. If it's close to constant, then you've got a nice reduction to the game. But in between, you have what is really a technical problem, and you have to kill that case cleanly. The trace function gives you a very clean argument there, because the trace of the inner product is a nice extractor with a very nice leakage property, and you have both of them in the construction. Yes, sorry. Both of them are extractors, but they've got different leakage properties: when you look at the leakage property of an extractor, you pay a penalty in the size of the field over which you're taking the inner product. In the case of the big inner product state, the penalty is the whole field, so basically one third of the length of the vector, and that's a little too much to run the clean argument. You can run an existential argument, but to run a clean explicit argument we need an extractor with a better leakage property. That's why we shrink the size, and that's all: we shrink the output of the extractor to get better leakage.

Thank you. Any other questions? "Do you know if your construction is constant rate?" It's the same rate as Xin Li's extractor. We did not actually think about it; there's a paper on constant-rate 2-split-state NMC, and probably if you plug it in, you should get constant rate. We haven't checked it. "I'm sorry, so in your construction, the states are of uneven size, right? Each state has a different encoding length? I mean, the first six or seven states have the same size, right?"
And the last one you can pad with zeros; that's not a problem, as long as we check consistency, that it is indeed padded with enough zeros. Sure. "Any chance you think you can improve eight to seven?" To seven, I think yes, at least existentially. That's a quite good question. There's a problem with existential constructions in general: we don't have any existential proofs. Well, now we know that one construction exists, so it's a matter of getting there. I think by playing with this construction and with existential results on non-malleable extractors, we could get it down to five. "To five? Okay. So two is impossible?" Two is impossible, and then we can get to five. Five, six: I won't bet my head on it, but I'm fairly sure that five or six should be possible. Three and four are open; it would be super cool if you could show that three is not possible. Okay, thanks. Thank you. Let's thank the speaker again.