Anyway, thanks for the introduction, Ron, and thanks to all of you for coming. I'm excited to tell you about non-malleable codes for decision trees and more. I'll tell you what the decision trees are about halfway through the talk. So what is a non-malleable code? A non-malleable code is a randomized encoding scheme, and I want to remind you of, or introduce you to, the notion. Unlike most cryptography, this coding scheme is completely public: there are no secrets whatsoever. And for this talk, everything is information theoretic; there is no computational indistinguishability or anything like that.

So a non-malleable code is a coding scheme, and we're going to think about the following experiment. Take a message, encode it, and put it through a tampering function f. You decode the tampered codeword and output what you get. We want two properties. First, correctness: if no tampering occurs, you recover the message you sent, so this is a non-trivial storage of information. Second, a security property: if you do tamper with the codeword, you either recover exactly what you started with or something completely unrelated. What we don't want is that the tampered output, x-tilde here, is something like x plus 1. What we would like instead is that all the attacker can do is delete what you put through the channel and replace it with something that has nothing to do with the message.

So let's formalize this a bit. Again, we have the same experiment. How could we formalize this notion of unrelatedness? Imagine we have a simulator that depends only on the tampering function. It flips some coins and outputs either a special symbol "same" or some message. It need not output a fixed message:
it can output a distribution over many messages, but it is independent of the input; it depends only on the tampering function. And we want the following: if we wrap the simulator so that whenever we see the symbol "same" we replace it with x, and otherwise we just pass the output through, then this distribution should be statistically close to the output of the experiment above. This is the ideal/real-world paradigm familiar from cryptography, and in this talk we want the notion of closeness to be statistical.

Another way to frame this: the experiment is statistically close to some distribution over identity and constant functions. That is, the function defined by randomly encoding, tampering, and decoding should be close to some distribution over the identity function and constant functions, and that distribution depends only on the tampering function. We also need some parameters: going forward, epsilon is the statistical distance, k is the message length, and n is the codeword length.

Some initial observations. You can't hope to achieve non-malleability against arbitrary tampering functions: the attacker can always just decode, tamper, and re-encode, and this attack always works. So you have to limit the tampering class somewhat. A natural thing to consider is fixing some complexity class; in cryptography we often have polynomial-time adversaries. But if you think about it a little, a non-malleable code against a complexity class implies a very strong average-case hardness bound against that class. So if you want an explicit, unconditional non-malleable code, this limits where you can hope to achieve such an object, because it basically implies really strong circuit lower bounds.
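To make the real/ideal definition concrete, here is a toy sketch in Python. Everything in it is illustrative and not a construction from the talk: the "code" is a deliberately malleable one-bit encoding, and we check by brute force that no simulator over {"same", 0, 1} can explain the bit-flip tampering, so the best achievable error is 1/2.

```python
import random
from collections import Counter

# A deliberately malleable toy "code" for one bit: the message bit in the
# clear plus a random pad bit.
def enc(x):
    return (x, random.randrange(2))

def dec(c):
    return c[0]

def flip_first(c):
    # Tampering function: flip the message-carrying bit, i.e. x -> 1 - x.
    return (1 - c[0], c[1])

def tamper_dist(x, f, trials=5000):
    """Empirical distribution of Dec(f(Enc(x))): the real experiment."""
    counts = Counter(dec(f(enc(x))) for _ in range(trials))
    return {v: c / trials for v, c in counts.items()}

def apply_sim(sim, x):
    """Ideal experiment: replace the special symbol 'same' with x."""
    out = Counter()
    for v, p in sim.items():
        out[x if v == 'same' else v] += p
    return dict(out)

def stat_dist(p, q):
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))

# No simulator (a distribution over {'same', 0, 1}) matches the real
# experiment for both messages at once; prints errors 1.0, 1.0, 0.5.
for sim in [{'same': 1.0}, {0: 1.0}, {0: 0.5, 1: 0.5}]:
    err = max(stat_dist(tamper_dist(x, flip_first), apply_sim(sim, x))
              for x in (0, 1))
    print(sim, err)
```

This is exactly the failure the definition is designed to rule out: the tampered output x+1 (here, 1-x) is correlated with the message, so it cannot be produced by any input-independent simulator.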
So what codes of this sort do we have? There's a ton of work on other tampering classes; I'm just going to focus on classes that correspond to natural complexity classes. In 2016, with Dana Dachman-Soled, Mukul Kulkarni, and Tal Malkin, we constructed a non-malleable code for local functions. A local function is a function where each output bit depends on only a few input bits; in this example, you can write each output bit as a function of at most three inputs. We constructed these codes for local functions where the locality parameter is n to the 1 minus epsilon, some small polynomial, for any constant epsilon between 0 and 1. This class contains NC0; if you don't know what that is, that's fine.

The following year, Chattopadhyay and Li constructed a non-malleable code for small-depth circuits. What is a small-depth circuit? It's a circuit of, here, depth d, with unbounded fan-in AND and OR gates and NOT gates; you can always arrange such a circuit in alternating layers if you're willing to pay a little, but not much, in the depth. Their construction is great, and it uses something called non-malleable extractors, but the codeword length is almost exponential in the message length.

The year after, with Dana Dachman-Soled, Siyao Guo, Tal Malkin, and Li-Yang Tan, we gave a new construction of non-malleable codes for small-depth circuits where the codeword length is almost linear in the message length, and it supports depth d up to c log n over log log n. Surprisingly, if you could change this little c to a big O, you would already separate P from NC1. But there's a problem with both of these constructions: the error is not very good, just barely negligible, basically. The size is also limited; they do not support circuits of the sizes where we actually know circuit lower bounds. So it's not quite as good as we would like.
In particular, if you fix the error epsilon to be 2 to the minus lambda for some security parameter lambda, then the codeword length is exponential in the security parameter (not the message size, sorry, the security parameter). This is what we were hoping to address with this work.

In this work we prove three results. First, we construct a non-malleable code with very good error, in contrast to what was previously known, for small-depth circuits of the same depth as before and of substantially larger size. Along the way, we construct two other codes, one for decision trees and one for leakage-resilient split-state tampering; I'll tell you more about those in a couple of minutes. But first I want to zoom in on this first theorem. How would you prove something like this? How would you construct such a non-malleable code?

Going back, recall the definition of a non-malleable code: the tampering experiment is close to some distribution over identity and constant functions. Another way of viewing this, introduced in earlier work, is as taking a complicated channel and reducing it to a simple channel, one we know how to handle; the encoding and decoding are the reduction. Formally, (E, D) non-malleably reduces a tampering class F to a class G if E composed with any function f in F, composed with D (maybe I'm saying that backwards), is statistically close to some distribution over functions in G. G is the nice class here. You also need a non-triviality requirement as well: that you're actually encoding something. So why is this nice? Say we have such a reduction from some horrible class F to some nice class G, where we know how to deal with G.
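The plumbing of such a composition can be sketched in a few lines of Python. Everything here is a placeholder: the identity "reduction" and the repetition inner code just stand in for real (randomized, far more involved) constructions, to show how the outer reduction wraps the inner code.

```python
# If (enc_red, dec_red) non-malleably reduces a class F to a class G, and
# (enc_g, dec_g) is a non-malleable code for G, then encoding with the inner
# code first and the reduction's code second gives a code for F: for any f
# in F, dec_red . f . enc_red behaves like a distribution over functions in
# G, which the inner code already handles.

def compose_codes(enc_red, dec_red, enc_g, dec_g):
    def enc(msg):
        return enc_red(enc_g(msg))   # inner code first, then the reduction's code

    def dec(word):
        return dec_g(dec_red(word))  # undo the reduction, then the inner code

    return enc, dec

# Placeholder instantiation, just to exercise the plumbing.
enc, dec = compose_codes(
    enc_red=lambda c: list(c),                 # identity "reduction"
    dec_red=lambda c: list(c),
    enc_g=lambda m: [m] * 3,                   # toy repetition inner code
    dec_g=lambda c: max(set(c), key=c.count),  # majority decode
)
assert dec(enc(1)) == 1
```

The point of the abstraction is exactly this modularity: once the reduction is proven, any non-malleable code for G can be dropped in as the inner code.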
If we have such a reduction, and we also have a non-malleable code for G, then we can just compose the two codes: in pictures, the composed code applied to a function in F behaves like the inner code applied to functions in G, so the composed code is a non-malleable code for F. This gives us a nice way of constructing codes. And all of our results follow from such non-malleable reductions; that's the reason we have these intermediate results.

So here's a bird's-eye view: we're going to have a whole tower of non-malleable reductions, from small-depth circuits to decision trees, to leakage-resilient (or "leaky") split-state tampering functions, to split-state tampering, and split-state non-malleable codes exist.

Before I tell you what all these other things even are, let's zoom in on the small-depth circuit step. This is a reduction from the prior work. The main idea is that in the circuit lower bound literature there is machinery, switching lemmas, for reducing small-depth circuits to decision trees, and the thrust of that work was turning this machinery into a non-malleable reduction: in the non-malleable setting, you can reduce small-depth circuits to decision trees.

OK, so what is a decision tree? Recall the local functions from before: each output depends on a few input bits, but the choice of which input bits it depends on is made statically. In a decision tree, this choice is dynamic. On the right, we have a picture of a decision tree.
To evaluate a decision tree, you probe the input bits adaptively: read a bit, decide which branch to follow, and continue until you reach a leaf, then output whatever the leaf is labeled with. All of the output bits behave like this; they can make adaptive queries to the input. The depth is the length of the longest root-to-leaf path. If you think about it for a minute, it's easy to notice that decision trees of depth t have locality 2 to the t, and can also be computed by DNFs of size exponential in t.

In the earlier reduction, we used the fact I just mentioned, that local functions capture very small decision trees. But this relationship only holds for depth t at most log n. And the problem is that the quality of the reduction depends on the decision tree depth you reduce to: if you want to go to depth t, the error epsilon you expect to pay is essentially t to the minus t. In that work, because we didn't have codes for decision trees, we reduced to local functions, and viewed as decision trees, the best local functions give you is depth log n. This is where the bad error bound comes from: it's like log n to the minus log n.

But in this work, we can construct non-malleable codes for decision trees of small polynomial depth, in particular depth approximately n to the one fourth minus epsilon. And that's exactly what we do: construct these non-malleable codes. I should mention that we think these codes are independently interesting. Decision trees of this depth are not simply a strict subset of small-depth circuits (well, they are if the circuits are large enough), so in this parameter regime the result is interesting in its own right.
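The two facts just mentioned, that a depth-t tree has locality at most 2 to the t (it has at most 2^t - 1 internal nodes) and that it converts to a DNF with at most 2^t terms of width at most t, are easy to see in code. This minimal representation is purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    var: int      # which input bit to probe
    zero: object  # subtree followed when x[var] == 0
    one: object   # subtree followed when x[var] == 1

def evaluate(tree, x):
    """Adaptively probe input bits until a leaf is reached."""
    while isinstance(tree, Node):
        tree = tree.one if x[tree.var] else tree.zero
    return tree.value

def variables(tree):
    """Every variable the tree could ever query: its locality."""
    if isinstance(tree, Leaf):
        return set()
    return {tree.var} | variables(tree.zero) | variables(tree.one)

def to_dnf(tree, path=()):
    """One DNF term (a list of (var, required value) literals) per 1-leaf."""
    if isinstance(tree, Leaf):
        return [list(path)] if tree.value else []
    return (to_dnf(tree.zero, path + ((tree.var, 0),)) +
            to_dnf(tree.one, path + ((tree.var, 1),)))

# A depth-2 tree computing x0 XOR x1.
xor = Node(0, Node(1, Leaf(0), Leaf(1)), Node(1, Leaf(1), Leaf(0)))
```

Here `variables(xor)` is {0, 1}, within the 2^2 - 1 = 3 bound, and `to_dnf(xor)` yields the two width-2 terms of the XOR DNF.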
The best previous result was depth log squared n, which may not sound consistent with what I just told you, but that actually comes from converting your decision tree to a DNF. And the only prior code with comparable error is the code for local functions we were using before, which only works for depth t up to log n.

So, zooming in again: how are we going to construct these codes for decision trees? Looking ahead, we're going to have some more reductions, obviously. Our starting point was the 2016 work with the code for small locality. The key lemma there is again a non-malleable reduction, from local tampering to split-state tampering. I should tell you what split-state tampering is, since I've mentioned it a couple of times. In split-state tampering, the encoding gives you two codewords, and the adversary tampers with them completely independently, and then you decode. For this independent tampering there are very good codes; this is an example from last year, and in subsequent work there's a recent paper that achieves constant rate, but for our purposes this is fine.

Now, in that earlier work it wasn't a direct reduction initially; it was a two-piece reduction. We reduced from local tampering to something called leaky input-output local tampering, where we had a dependence graph for the local function; the details don't really matter. We had this two-step way of getting where we wanted to go. When we looked at this again, we decided this was maybe the wrong way of abstracting what was going on. Instead, we introduced a new abstraction, viewing this code as the composition of two reductions: you go from local tampering to what we'll call weakly leaky split-state tampering,
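The split-state experiment is simple enough to write out directly. This is just the model in miniature; the toy tampering functions are illustrative.

```python
# Split-state tampering: the codeword is cut in half and each half is
# tampered with by its own function, with no information about the other
# half. The decoder (not shown) would then run on the two tampered halves.

def split_state_tamper(codeword, f_left, f_right):
    half = len(codeword) // 2
    left, right = codeword[:half], codeword[half:]
    return f_left(left) + f_right(right)  # each half tampered independently

cw = [1, 0, 1, 1, 0, 0]
tampered = split_state_tamper(
    cw,
    f_left=lambda h: [1 - b for b in h],  # left tamperer flips its bits
    f_right=lambda h: list(h),            # right tamperer leaves its half alone
)
```

The crucial restriction is structural, not computational: `f_left` and `f_right` can be arbitrary functions, but neither ever sees the other half.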
and then from there to split-state tampering. So what is this weak leakage tampering? Basically, the two functions are allowed to read a couple of bits from the other side at the beginning, and then they have to tamper independently from then on. The reduction from this weakly leaky split-state class to split-state is very simple; it just uses some secret sharing techniques. The meat is in the left-hand portion of the reduction.

But why was this useful? Because we noticed that, thinking about it this way, if we used similar techniques (not the same, but similar), we could push things a lot further via this abstraction: if instead of the super-weak version of leakage we had before, we use a stronger version of leakage, then we can handle a much stronger tampering class in terms of decision trees. So we constructed two new reductions. Unfortunately, I won't be able to tell you much about either of these reductions, but I will tell you what this leakage-resilient split-state class is.

Think of Alice and Bob as each getting a codeword of size n over 2 as input, and they're going to tamper with these codewords. The leakage is that they're allowed to communicate before having to output their tampered codewords: they can communicate delta times n over 2 bits, and then each has to output a tampered codeword, whatever they want. They're completely computationally unbounded; they're only bounded in communication. This class has actually been studied in the past: Aggarwal et al. constructed these objects from split-state codes with special properties, and Chattopadhyay and Li constructed them via non-malleable extractors.
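The leakage-resilient variant of the model can be sketched the same way. The single-round, simultaneous-message interface below is a simplification for illustration (the model allows interactive communication), and none of the names come from the paper; the point is only that each tampered half may depend on its own half plus a bounded number of leaked bits from the other side.

```python
# Alice and Bob each hold one half (n/2 bits) of the codeword, may leak at
# most `budget` (= delta * n/2) bits to the other side, and then must tamper
# their own halves independently. They are computationally unbounded and
# only bounded in communication.

def lr_split_state_tamper(left, right, leak_l, leak_r, tamper_l, tamper_r, budget):
    from_left, from_right = leak_l(left), leak_r(right)
    assert len(from_left) <= budget and len(from_right) <= budget  # bounded leakage
    # Each tampered half sees its own half plus the other side's leakage,
    # but never the other half in full.
    return tamper_l(left, from_right), tamper_r(right, from_left)

left, right = [1, 0, 1, 1], [0, 1, 1, 0]
parity = lambda half: [sum(half) % 2]                     # leak one parity bit
flip_if = lambda half, leaked: [b ^ leaked[0] for b in half]
new_left, new_right = lr_split_state_tamper(
    left, right, parity, parity, flip_if, flip_if, budget=1)
```

Setting `budget` to 0 recovers plain split-state tampering, which is why the class sits between decision trees and split-state in the tower of reductions.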
In this work, we give a new construction that is, in our view, much simpler. It's also a black-box reduction from leakage-resilient split-state to split-state, so unlike the former example it works with whatever split-state code you start with. And it gives you better, or at least explicit, parameters in terms of leakage.

OK, so these are our results. Unfortunately, I'll have to point you to the paper if you'd like the details, because I don't have a ton of time. But let me leave you with some open problems; I have a ton of them if you're interested. First, our non-malleable code only works for decision trees of depth up to slightly less than n to the one fourth. We don't really understand why; this bound seems artificial. Getting a non-malleable code against larger-depth decision trees would, we think, be very interesting.

Second, in this work we achieve an error epsilon of roughly 2 to the minus n to the 1 over d. For constant depth, this gives you something polynomial in the error parameter and the security parameter. For very large circuits this is more or less consistent with the best known circuit lower bounds, but for small circuits it's different: the strongest average-case lower bound known for small-depth circuits says that a size-s, depth-d circuit has correlation at most 2 to the minus n over polylog of s with parity. So for small, say polynomial-size, circuits, it seems intuitive that it should be possible to get a much better dependence on the security parameter at much higher depths. And I'll just leave you with this picture.
So, right: we construct these two inner reductions and give new codes for the top three classes. Thank you. [Host:] So let's thank the speaker again.