Thanks for checking out our recording. I'm Jessica, and today I'll be talking about joint work with my advisor Daniele Micciancio on simpler statistically sender-private oblivious transfer from cyclotomic integers.

So let's start by defining at least a few of those words. Oblivious transfer is a core building block for secure multi-party computation, but in this case we have only two parties: a sender and a receiver. The sender is given as input two messages, m0 and m1, while the receiver has a single bit as input. Oblivious transfer allows the receiver to obtain from the sender the message corresponding to the receiver's bit, without revealing its bit to the sender and without learning anything about the other message. An oblivious transfer protocol should satisfy a basic correctness property, which just says that if an honest receiver and sender engage in the protocol, the receiver ends up with the right message with all but negligible probability. We'll also have security properties for both the sender and the receiver, and, perhaps unsurprisingly given the title of this talk, we'll be interested specifically in achieving statistical privacy for the sender. This means that no matter what the receiver does in the protocol, the sender's response is statistically close to a distribution that is independent of one of the messages; in other words, that message is statistically hidden. We also want computational privacy for the receiver, which ensures that the receiver's bit, the bit it uses to select m0 or m1, is hidden as well. Both parties' inputs can't be statistically hidden at once, so we settle for computational privacy on the receiver's side.
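For concreteness, here is one common way to write the sender-privacy condition down formally; the notation here is mine, not necessarily the paper's. For every first message μ that a possibly malicious, computationally unbounded receiver might send, there should exist a bit b such that the sender's response distribution S(μ, m0, m1) reveals only m_b, where Δ denotes statistical distance:

```latex
% One common formalization of statistical sender privacy (notation mine):
% for every receiver message \mu there is a bit b such that the sender's
% response depends (up to negligible statistical distance) only on m_b.
\exists\, b \in \{0,1\}:\quad
\Delta\bigl(\mathsf{S}(\mu, m_0, m_1),\ \mathsf{S}(\mu, m_0', m_1')\bigr)
\le \mathrm{negl}(\lambda)
\quad \text{whenever } m_b = m_b'.
```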
Statistically sender-private oblivious transfer protocols have already been given from a number of assumptions, like Decisional Diffie-Hellman and Quadratic Residuosity. As far as constructions from lattices are concerned, Peikert, Vaikuntanathan, and Waters gave the first lattice-based oblivious transfer protocol satisfying universal composability; by way of brief background, that's a stronger definition of security than statistical sender privacy, but one that also provably requires a trusted setup procedure to be invoked before the first execution of the protocol. Brakerski and Döttling gave the first lattice-based statistically sender-private oblivious transfer protocol, and we take their work as a starting point for our own. Subsequent statistically sender-private oblivious transfer constructions have been given from fully homomorphic encryption schemes. So one might wonder what remains to be done on the lattice-based statistically sender-private oblivious transfer front. The answer, of course, is to improve efficiency, both in terms of computation and in terms of communication between the parties. This table is a brief side-by-side comparison of the protocols I mentioned above, and you can see that in this work we're able to bring down the total communication of the protocol by at least a factor of n log n compared to other works, and to improve computational efficiency at the same time.

We follow the lossy encryption approach to constructing oblivious transfer that's been described, among other places, in Peikert, Vaikuntanathan, and Waters, and by Brakerski and Döttling for their own protocol. Lossy encryption, at least in our context, requires generating public keys in one of two modes: lossy and lossless. In the lossless mode the public key functions essentially as expected: messages encrypted under it can be decrypted with the corresponding secret key. In lossy mode, however, encryption with respect to the lossy public key loses information about the message, statistically hiding it. It should also be the case that lossy and lossless keys are computationally indistinguishable and that one can be efficiently derived from the other. So in our little cartoon here, we can efficiently obtain a lossy key from a lossless one by vertically reflecting it, and get back to the original by another vertical reflection, though in this cartoon I guess they're really only computationally indistinguishable if you're very tired. Just a cartoon.

Armed with such a lossy encryption scheme, we can design statistically sender-private oblivious transfer as follows. If the receiver's bit is zero, it generates a lossless public key; otherwise, a lossy key. The sender encrypts its message m0 with respect to the public key, then transforms the key into the alternate mode and encrypts the other message m1 with respect to that key, and both encryptions are sent to the receiver. This way the appropriate message will be decryptable, the unselected message will be statistically hidden, and the receiver's privacy follows from the computational indistinguishability of the two kinds of keys.

Before we get into the construction of a lossy encryption scheme, let's introduce some important lattice definitions that we'll be using. A lattice is a discrete additive subgroup of R^m: given a basis B, we define a lattice Λ to be the set of all integer linear combinations of the basis vectors, where here we let the rows of B define the lattice. Every lattice has a unique dual, which is the set of all vectors whose inner products with the primal lattice vectors are all integers. The geometric relationship between the primal and the dual lattice is what we really lean on in this work, so I want to harp on it a bit with a nice 2D example borrowed from Oded Regev's course notes. Here our primal lattice is dense in one dimension, the x direction, but sparse in the other, the y direction, and the dual lattice, as you can see, has the reciprocal geometry: it's dense in the directions where the primal is sparse, and vice versa. It should also be noted that a basis for the dual lattice can be efficiently computed from a basis for the primal, so given our little cartoon of lossy encryption, you can maybe see where we're going with this whole duality thing.

With these definitions in hand, let's talk about how we'll approach lossy encryption. Given a basis for Λ, we interpret our message as a vector and use it to select a lattice point. We then perturb this lattice vector with discrete Gaussian noise of parameter σ, and this perturbed vector is what we return as our encoding. For a fixed noise parameter σ, a sparse lattice allows recovery of the message m, at least assuming you have some auxiliary information like a short basis for the dual. But now consider the effect of noise on denser lattices; equivalently, as in this picture, you can keep the same lattice and increase the error rate, which amounts to the same thing.
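To make this encoding concrete, here's a minimal numerical sketch. This is my own toy illustration, not code from the paper: it works over plain real lattices, skips the mod-q reduction, and uses a continuous Gaussian where the real scheme uses a discrete one.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(B, x, sigma):
    """Select the lattice point x @ B (the rows of B generate the
    lattice) and perturb it with Gaussian noise of parameter sigma."""
    return x @ B + rng.normal(0.0, sigma, size=B.shape[1])

def decode(B, y):
    """Babai round-off decoding: recovers x when the noise is small
    relative to the lattice spacing, i.e. in the sparse regime."""
    return np.rint(y @ np.linalg.inv(B)).astype(int)

x = np.array([3, -2])
sparse = 40 * np.eye(2)   # long shortest vectors: decoding succeeds
dense = np.eye(2)         # short vectors in every direction: lossy
print(decode(sparse, encode(sparse, x, sigma=3.0)))  # [ 3 -2]
print(decode(dense,  encode(dense,  x, sigma=3.0)))  # garbage w.h.p.
```

The point of the two print lines is exactly the sparse/dense dichotomy from the slides: the same noise level is harmless on the sparse lattice and destroys the message on the dense one.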
So, right: if we consider the effect of noise on denser lattices, lattices with many short vectors, you can see that the same amount of noise smooths out the discrete structure of the lattice, until eventually you have something that's very close to uniform over the space. In this case even maximum-likelihood decoding isn't going to recover the message m, at least not with any useful probability.

With this encoding approach in mind, here's the skeleton of the protocol. At this level it's actually the same as both our protocol and the protocol of Brakerski and Döttling; we'll flesh out the differences later on. So here's the pseudocode: our receiver starts by sending a basis for either a sparse or a dense lattice, depending on what its bit is. The sender then encodes its first message with respect to the primal lattice and its second message with respect to the dual, and returns both of these encodings. The receiver decodes with respect to the appropriate lattice and recovers the correct message: if we sent a sparse lattice, then the encoding with respect to the primal is the decodable one, while the dual will be very dense because of the reciprocal geometry, so its encoding can't be decoded. So this is the idealized sketch of our protocol.

So the question is: does this actually work? This is very much a sketch, and is the lossiness we're guaranteed actually enough? What happens if we have a cheating receiver that sends some weird lattice that isn't completely sparse or completely dense — notably, our 2D example from before, which was sparse in one direction and dense in another, so that neither the primal nor the dual is completely dense or completely sparse? This is where our work departs from that of Brakerski and Döttling. We turn to algebraically structured lattices to guarantee some amount of polarization in the number of short vectors, and therefore also in the density of the lattices used for lossy encryption.

So we work over rings of integers of cyclotomic number fields; for simplicity we can restrict ourselves to power-of-two cyclotomics. Recall that these rings embed as lattices in Z^n under the coefficient embedding, where we take an element of the ring and embed it as a vector by writing down its polynomial coefficients. Given a matrix B of elements of R, we can define the q-ary module lattice as the lattice embedding of the module generated by the rows of B, taken modulo q; this is a lattice that's periodic modulo q. Crucially for us, these q-ary module lattices over R are guaranteed, if they have a short vector at all, to have n of them. Specifically, the lengths of the n shortest linearly independent vectors are all the same, where n is the dimension of the ring the module is defined over. This means we can't fall into a case where the lattice has only one or two short vectors and is therefore dense in only a couple of directions. There will either be no short vectors in our lattice, or there will be at least n of them.
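Here's the idealized skeleton as runnable toy code, building on the sketch above. Again, this is my own illustration: scaled-identity bases stand in for real lattice bases, and all the actual cryptographic care (discrete Gaussians, mod-q arithmetic, message packing) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 3.0

def dual(B):
    """Rows of the inverse transpose generate the dual lattice."""
    return np.linalg.inv(B).T

def encode(B, x):
    return x @ B + rng.normal(0.0, SIGMA, size=B.shape[1])

def decode(B, y):
    return np.rint(y @ np.linalg.inv(B)).astype(int)

def receiver_round1(b):
    # bit 0: sparse lattice (decodable primal); bit 1: dense lattice
    # (decodable dual) -- toy stand-ins for the real basis sampling
    return 40.0 * np.eye(2) if b == 0 else (1 / 40.0) * np.eye(2)

def sender_round2(B, m0, m1):
    # m0 goes against the primal, m1 against the dual; the reciprocal
    # geometry makes exactly one of the two encodings decodable
    return encode(B, m0), encode(dual(B), m1)

def receiver_output(b, B, c0, c1):
    return decode(B, c0) if b == 0 else decode(dual(B), c1)

m0, m1 = np.array([1, 2]), np.array([5, 7])
for b in (0, 1):
    B = receiver_round1(b)
    c0, c1 = sender_round2(B, m0, m1)
    print(b, receiver_output(b, B, c0, c1))  # recovers m0, then m1
```

The cheating-receiver problem discussed next is visible even here: nothing in `sender_round2` stops a receiver from sending a basis that is sparse in one direction and dense in another, in which case neither encoding is fully lossy.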
We're not quite guaranteed a full rank of short vectors here, so we will have to take some additional steps to ensure one of the two messages is completely statistically hidden in this setting. Now that we know a little more about module lattices, let's fill out some more of the details of our protocol. Here, again, the receiver is going to send a basis for either a sparse or a dense lattice, now a module lattice, but now we have to do something a little fancier: we don't want to rely on the encoding directly for lossiness in the case where the input lattice might be dense. So instead of encoding the message itself, we encode a random vector, which we then use as input to a randomness extractor, and we use the output of that randomness extractor as a random mask for our message. This still allows recovery of the message whenever decoding is feasible, because we reveal the encoding, so the receiver can decode the input to the randomness extractor and then just rerun the extractor itself. And in the case where decoding isn't possible, we don't need the message to be completely statistically hidden by the encoding alone: a good min-entropy guarantee for the encoded vector is enough for our purposes, because the extractor does the rest. So we're good in this case, and we haven't really changed what we're doing with the other message; we just have this not-yet-formalized encoding algorithm that we run on it with respect to the dual lattice.

So, to show statistical sender privacy, we need to prove that either m0 is statistically hidden, which requires that the conditional min-entropy of x given its encoding is large, or m1 is statistically hidden by our still-underspecified encoding procedure. To formalize the lossiness of the encoding, we restate the procedure in a way that lets us use known regularity lemmas for module lattices. We denote by ρ(Λ) the total mass of the Gaussian function that falls on our lattice Λ: you just sum up the Gaussian mass at every point of the lattice. We'll want to make use of this somewhat odd quantity, the smoothing parameter of a lattice, which is the smallest real number s such that the Gaussian with parameter 1/s places total mass at most 1 + ε on the dual lattice. As intuitive as I'm sure that definition is, it can more usefully be thought of as the minimum Gaussian parameter that smooths the discrete structure of Λ to within ε of uniform. We saw this earlier with our progression of increasingly large error on a lattice: at some point you end up with something that's essentially uniform over the whole space, and that's exactly what this parameter captures. So a large smoothing parameter means the lattice is sparse in at least one direction, and a small smoothing parameter means the lattice is dense in all directions; that's a useful thing to observe.

In their Ring-LWE toolkit paper, Lyubashevsky, Peikert, and Regev show that for generators B of a module lattice, if a vector x is drawn from a discrete Gaussian of parameter σ greater than q times the smoothing parameter of the dual lattice, then the matrix-vector product of these generators with the Gaussian vector x is ε-close to uniform.
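To pin down the notation (mine; the talk's slides may differ), the Gaussian mass, the smoothing parameter, and the shape of the regularity statement look like this:

```latex
% Gaussian mass of a lattice and the smoothing parameter:
\rho_s(\Lambda) = \sum_{v \in \Lambda} e^{-\pi \|v\|^2 / s^2},
\qquad
\eta_\varepsilon(\Lambda) = \min\bigl\{\, s > 0 : \rho_{1/s}(\Lambda^*) \le 1 + \varepsilon \,\bigr\}.

% Shape of the regularity statement as invoked in the talk: for
% generators B of the module lattice and x drawn from a discrete
% Gaussian of parameter \sigma \ge q \cdot \eta_\varepsilon(\Lambda^*),
% the product B x \bmod q is within statistical distance \varepsilon
% of uniform.
```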
So this is saying that if we have some fixed error distribution and our dual lattice has a small smoothing parameter, then we can sample from that fixed error distribution and be guaranteed an output that's close to uniformly random, just by computing this matrix-vector product. This is what we use as our encoding method, taking the result as an approximately uniformly random mask for the message. If Λ has no short vectors and is sparse everywhere, its dual is dense everywhere, so it has a small smoothing parameter and the regularity lemma can be applied. On the other hand, if Λ has any short vectors, then we know from the structure of our module lattices that it has n of them, and this gives us a conditional min-entropy guarantee for the random vector x even in the presence of its encoding.

So let's sketch a proof of that; first, let me remind you of the encoding method. We sample a random vector x as well as a discrete Gaussian error e; we use x to select a lattice point, perturb it by e, and take the result mod q. The input x is what we feed to our randomness extractor, and we also send the receiver the encoding y. What we want to show is that x, even in the presence of y, still has high min-entropy. A proof technique similar to that of BD18 applies, and I'll sketch it briefly. Imagine maximum-likelihood decoding of x given y: because e is drawn from a Gaussian, the most likely x given y is the one that minimizes e, so the most likely x corresponds to the lattice point closest to the encoding y, which in turn means the error vector must have fallen in the Voronoi cell of the lattice. However, if our lattice has many linearly independent short vectors, then the probability that the error vector fell in the Voronoi cell can't be too large: there are many densely packed shifts of the Voronoi cell along some subspace that also capture significant amounts of Gaussian mass, so not too much mass can be concentrated in the Voronoi cell itself. Quantitatively, we can bound the probability of e falling in the Voronoi cell by an inverse exponential in n, and this suffices to show that x has high conditional min-entropy given y. That's statistical sender privacy right there. Computational receiver privacy ultimately follows from the pseudorandomness of LWE, because we can use LWE samples to generate the keys for our lossy encryption scheme. So that's it for privacy, for both the sender and the receiver.

To summarize, we present an efficient statistically sender-private oblivious transfer protocol from module lattices. We use the structure of these lattices to get efficiency improvements that exceed what you would immediately expect from moving to more structured lattices. We do still pay a communication overhead, though: a factor of about log λ, where λ is the security parameter, for the communication between sender and receiver compared to the actual number of bits of information being exchanged. So an open question is: is there any way we can drive this down?
Is there any way to actually get constant overhead? That would be a really appealing next goal. So, thank you very much again for sticking around, and I hope to see you in the comments.