Okay, now we have the last talk of today. Fabrice will tell us about adaptive oblivious transfer with access control from lattice assumptions, please.

Thank you. So let me first start by giving you an example. Let us imagine that we are a research center and we are working on a DNA database. There is a little problem: DNA databases are quite huge data sets, so we want to outsource them, for instance to the cloud. But then we have a lot of problems, because not only is the database itself sensitive data, but so are the queries: if, for instance, the research center accesses the same sample many times, it may give information about that specific sample, for instance that it carries a disease the research center is working on. So we want query privacy, and we also want database security, which means that unqueried entries should remain hidden. This is exactly what oblivious transfer provides.

So oblivious transfer is a two-party protocol which divides into two phases: the initialization phase and the transfer phase. At the end of the first phase, the receiver gets an encrypted version of the database, and after a transfer phase it should obtain the message it wants, while the database holder, the sender, should learn no information.

It is interesting to work on because, first of all, it is what we call a complete building block for cryptography, meaning that if we manage to provide efficient oblivious transfer, we also obtain secure multi-party computation for any function. Also, the adaptive setting, which is the setting where queries may depend on previous queries, is useful for sensitive databases like, as I said, DNA databases.

This story started in 1981 with the introduction of the concept by Rabin. Then some work was done on this, for instance the extension by Even, Goldreich, and Lempel, and also the proof that it is a complete building block for cryptography. On adaptive oblivious transfer, many works have been done: it was introduced in 1999, and the assisted decryption paradigm was introduced in 2007; we use it in our work, as I will explain later. In the universally composable setting, some work has been done under q-type assumptions over pairings. Access control was added in 2009; that work also relies on pairing assumptions, and there is another, generic way to obtain it, but it relies on the full power of FHE to work.

So the purpose of this work is to provide the first fully simulatable adaptive oblivious transfer from simple lattice assumptions, namely Short Integer Solution (SIS) and Learning With Errors (LWE), which are two standard and well-known lattice assumptions. SIS is, given a linear system, to find a small non-zero integer solution to it. LWE states that a certain noisy distribution is indistinguishable from the uniform one.

As I said, our scheme is provably secure in the full-simulation setting. So what is it? It is a variant of the universal composability model. Basically, the difference with indistinguishability-based security models is that, instead of just comparing the views of the adversary, we compare the real execution with an ideal world where the protocol is replaced by an ideal functionality. And what we want is that the view of the environment remains the same in both settings, which means that the adversary cannot do more than what the functionality allows it to.
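To make the two assumptions concrete before going into the construction, here is a toy Python sketch of a SIS and an LWE instance. It is a minimal illustration under our own naming, with parameters far too small for any security:

```python
import numpy as np

# Toy parameters: illustrative only, far too small for real security.
n, m, q = 8, 32, 97            # secret dimension, number of samples, modulus
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))    # public uniformly random matrix

# SIS: given A, find a short nonzero integer x with A x = 0 (mod q).
# Finding such an x is the hard problem; checking a candidate is easy:
def is_sis_solution(x, bound):
    return x.any() and not ((A @ x) % q).any() and np.abs(x).max() <= bound

# LWE: (A, A^T s + e mod q) should be indistinguishable from (A, uniform).
s = rng.integers(0, q, size=n)         # secret vector
e = rng.integers(-2, 3, size=m)        # small noise vector
b = (A.T @ s + e) % q                  # an LWE sample
u = rng.integers(0, q, size=m)         # a truly uniform sample
# The LWE assumption says no efficient algorithm can tell b from u.
```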
So now, let us go to the construction. As I said, we use the assisted decryption technique, where, basically, the receiver gets the sender to obliviously decrypt a message. In our setting, it works as follows: the receiver samples a mask, a one-time pad, and re-randomizes one of the ciphertexts so that it now contains an encryption of the one-time-padded message. Then, once the sender decrypts it, it obtains only the one-time-padded version, so it cannot guess which message it is. The receiver can then remove the mask and obtain the message.

On top of this, we also need zero-knowledge proofs to make it secure. First, to prevent the sender from being asked to decrypt an arbitrary ciphertext, the receiver has to prove to the sender that its query is indeed a re-randomization of a correct message of the database; this is done by a proof of knowledge of a signature on one of the messages. Second, the sender has to prove to the receiver that it indeed performed a correct decryption. To do this, we need a public-key encryption scheme which is compatible with the zero-knowledge proofs, and for that we use the primal Regev encryption scheme, as described here. There is nothing new here, except that we use a variant of it just to make our proofs a little bit simpler. We then use a Stern-like proof for this, but I will come back to it later on.

But this technique may have a problem: if we just use this Regev encryption, then when proving that something has been correctly encrypted, the only thing we can prove is that the encryption noise is bounded, which leads to the following attack scenario. Imagine that we have a database composed of two messages, M0 and M1, and that the sender is malicious: it encrypts the first message with a small noise and the second message with a noise close to the bound. Then, when the receiver sends the re-randomization I described on the previous slide, the query leaks information on the norm of the noise: if this norm is quite small, one can guess that it is M0 which has been re-randomized, and if it is quite big, it means that it is M1 which has been queried. So just doing it like this is not enough to obtain our security properties. To overcome this problem, we use what is called noise smudging: basically, we add a noise which is super-polynomially bigger than the bound on the LWE noise distribution. This leads to some issues that I will come back to, but it is a solution that works.
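To see why bounded noise leaks and how smudging hides it, here is a toy Python sketch. It uses a hypothetical symmetric one-bit Regev-style encryption with illustrative parameters, not the primal scheme of the paper, and the real protocol of course proves all of this in zero knowledge instead of letting anyone peek at the noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 16, 2**20                      # toy parameters, illustrative only
s = rng.integers(0, q, size=n)        # sender's secret decryption key

def enc(bit, e):
    """Hypothetical one-bit Regev-style toy encryption with noise e."""
    a = rng.integers(0, q, size=n)
    return a, (a @ s + e + bit * (q // 2)) % q

def rerandomize(ct, mask_bit, smudge_mag):
    """Receiver's query: add a one-time-pad bit plus smudging noise."""
    a, b = ct
    e_sm = int(rng.integers(-smudge_mag, smudge_mag + 1))
    return a, (b + e_sm + mask_bit * (q // 2)) % q

def noise_seen_by_sender(ct):
    """Noise magnitude the sender observes when decrypting the query."""
    a, b = ct
    r = (b - a @ s) % q               # = noise + bit * (q // 2)  (mod q)
    d = r % (q // 2)
    return int(min(d, q // 2 - d))

# A malicious sender encrypts M0 with tiny noise and M1 with noise
# close to the allowed bound (say 2**10):
ct0 = enc(0, e=2)
ct1 = enc(1, e=2**10 - 1)

# Without smudging, the observed noise betrays which entry was queried:
print(noise_seen_by_sender(rerandomize(ct0, 1, smudge_mag=1)))   # ~2
print(noise_seen_by_sender(rerandomize(ct1, 1, smudge_mag=1)))   # ~1023

# With smudging noise far above the bound, the two look alike:
print(noise_seen_by_sender(rerandomize(ct0, 1, smudge_mag=2**16)))
print(noise_seen_by_sender(rerandomize(ct1, 1, smudge_mag=2**16)))
```

Note that the smudging noise must still stay well below q/4 so that decryption remains correct, which is why the modulus ends up super-polynomial.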
So now, on access control. Basically, on top of what we had previously, we attach an access control policy to each ciphertext, and the receiver has to prove that it possesses a valid certificate to access a given message. Many works have been done on this before. Some works only handle conjunctions; disjunctions can then be obtained by duplicating entries, plus a zero-knowledge proof that it is indeed the same message that underlies the different access policies. But this is not very efficient in some cases, for instance for threshold policies, because there we have many disjunctions; and threshold policies may prove useful, for instance, for biometric access control or things like that. There are also solutions which handle expressive policies, namely NC1, but they use a fully secure attribute-based encryption scheme to do this, which is also kind of costly to obtain nowadays. Some other works handle hidden policies, but use a restricted version of CNF, conjunctive normal form, formulas.

In our work, we use branching programs to handle access policies. A branching program is a succession of levels, with two different permutations between consecutive levels, and a function that maps each level to a bit of the input. So, for instance, here this word is accepted, because it starts at state zero in the first level and ends at state zero in the last level; but this word is not, because it ends at state three in the last level. Why do we use this structure? Because, thanks to Barrington's theorem, polynomial-length branching programs have the same expressiveness as NC1.

So now, what we want to do is to prove that we possess a certificate for some hidden branching program, and to do this we use a Stern-like argument of knowledge. It works out fine because, when we look at the representation of a branching program, everything except the function that maps input bits to levels is between zero and four. So what we have to do is just take the binary representation, and we get a short vector of which we can prove knowledge. On top of that, we have to prove that we evaluated the program step by step, and there are basically two ways to do it. Either we prove, at each step, the knowledge of the state reached at that step, in a linear manner; or we reuse a result on lattice-based zero-knowledge Merkle-tree accumulators, which, instead of doing this linear search, does a binary search, dropping the communication complexity from O(kappa) to O(log kappa). This is done using a Merkle tree: to prove the knowledge of a given element, we just prove the knowledge of the path in the tree which leads to this element.
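Here is a minimal Python sketch of how a width-5 branching program is evaluated in the clear. The shape (levels, a pair of permutations per level, an input-bit selector for each level) follows the description above, but the concrete program is a made-up example, and the scheme of course proves this evaluation in zero knowledge rather than running it openly:

```python
# Toy evaluation of a width-5 branching program (Barrington-style).
# Each level is (var, pi0, pi1): 'var' selects an input bit, and pi0 /
# pi1 are permutations of {0,...,4} applied when that bit is 0 / 1.

def eval_bp(levels, x, start_state=0, accept_state=0):
    state = start_state
    for var, pi0, pi1 in levels:
        perm = pi1 if x[var] == 1 else pi0
        state = perm[state]                 # follow the permutation
    return state == accept_state

# A tiny two-level program over a 2-bit input (hypothetical example):
ident = [0, 1, 2, 3, 4]
shift = [1, 2, 3, 4, 0]
levels = [(0, ident, shift),    # level 1 reads x[0]
          (1, shift, ident)]    # level 2 reads x[1]

print(eval_bp(levels, [0, 1]))  # True : the word ends back at state 0
print(eval_bp(levels, [1, 1]))  # False: the word ends at state 1 != 0
```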
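And here is a toy sketch of the Merkle-tree mechanics behind the binary-search optimization: committing to N values and opening any one of them with a log(N)-size path. SHA-256 stands in for readability; the actual construction uses a lattice-based hash and a Stern-like zero-knowledge proof of the path, neither of which is shown here:

```python
import hashlib

def H(*parts):
    """Stand-in hash; the real accumulator uses a lattice-based hash."""
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):                # assumes len(leaves) is a power of 2
    layers = [[H(leaf) for leaf in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers                      # layers[-1][0] is the root

def open_path(layers, idx):
    """Log-size opening: one sibling per layer, from leaf to root."""
    path = []
    for layer in layers[:-1]:
        path.append((idx & 1, layer[idx ^ 1]))   # (right child?, sibling)
        idx >>= 1
    return path

def verify_path(root, leaf, path):
    node = H(leaf)
    for is_right, sibling in path:
        node = H(sibling, node) if is_right else H(node, sibling)
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]      # eight toy values
layers = build_tree(leaves)
root = layers[-1][0]
print(verify_path(root, leaves[5], open_path(layers, 5)))   # True
```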
The thing is that, in the lattice setting, as I said, we use Stern-like arguments of knowledge, because there is no equivalent of Groth-Sahai proofs: there is no non-interactive and expressive proof system that comes close, because lattices have less structure than pairing groups. So what is the state of the art nowadays? You have basically two main families of proof systems. On one hand, the Lyubashevsky-style proof systems, which are Schnorr-like proofs over rings; they are quite efficient and take advantage of the structure the ring setting offers, but they are not very expressive. On the other hand, we have the Stern-like proofs, which work for any SIS or LWE statement over standard lattices; they are combinatorial, so they are heavy, but we can prove many things with them.

So how do they work? Initially, Stern's protocol was a zero-knowledge proof for the syndrome decoding problem, which is basically an analogue of the inhomogeneous short integer solution problem: instead of proving the knowledge of a short x, we prove the knowledge of an x whose Hamming weight is fixed, and everything is in the binary setting. Work by Kawachi et al. then allowed to go from modulo 2 to modulo q, and other works by Ling et al. allowed to extend this to standard SIS and LWE statements. Over the last couple of years, many works have been done to improve the versatility of Stern-like zero-knowledge arguments and to use them in different protocols.

One of these works leads to a signature with efficient protocols, which is basically a signature scheme with two companion protocols: first, a two-party protocol which allows obtaining, at the end of the protocol, a signature on a committed value; and second, a zero-knowledge proof that allows proving possession of a signature. The security of this scheme amounts to the unforgeability of the signature, the security of both protocols, and a condition which comes from the zero-knowledge proofs. It is interesting for many privacy-preserving protocols because, for instance, it enables e-cash or anonymous credentials. As I said before, we use the construction which was presented at ASIACRYPT 2016 together with group signatures.

So now, let us go to our construction, which basically uses everything I presented earlier: the assisted decryption technique, our signature with efficient protocols, access control handled by branching programs, and Stern-like zero-knowledge proofs in the standard lattice setting. First, in the initialization phase, we have everything I presented: with the signature scheme with efficient protocols and the Regev encryption scheme, the sender encrypts every message and signs every ciphertext. Then the sender sends everything to the receiver at the end of the initialization phase and proves that everything has been done correctly. Then we enter the transfer phase, where the receiver wants to access the message of index rho_i. To do this, it re-randomizes the corresponding ciphertext, masking the message and adding the smudging noise term, sends everything to the sender, and proves that it indeed knows a signature for one of the ciphertexts, without revealing which one. Then the sender decrypts it and proves that it indeed did a correct decryption, again with a zero-knowledge proof, and the receiver finally unmasks the message. I did not talk about access control here, and I will not detail it, but it can be plugged into the scheme using the techniques I presented before. Another improvement is that we can gain a little on the communication cost: using the Fiat-Shamir transform, we can use non-interactive zero-knowledge proofs instead of interactive ones, which wins on the communication cost and on the number of rounds.

So, in the end, what we get is the first adaptive oblivious transfer with access control which handles expressive policies and relies on SIS and LWE, though with a super-polynomial modulus: we would like to avoid the smudging technique in order to work with standard LWE, with a polynomial modulus. The security proofs have been done in the full-simulation model. So, what are some open questions? As I said, avoiding the smudging to work with standard LWE; we can still improve efficiency; and another interesting question is whether we can handle arbitrary circuit policies in this setting. So thank you for your attention, and if you have any questions, I would be glad to answer.
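Putting the transfer phase together, here is a self-contained toy run of one transfer in Python. The encryption is the same hypothetical Regev-style toy as above, the database holds single bits, and every zero-knowledge proof is replaced by a comment; this is a sketch of the message flow under those assumptions, not the actual scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 16, 2**20                     # toy parameters, illustrative only
s = rng.integers(0, q, size=n)       # sender's secret decryption key

def enc(bit):
    """Hypothetical one-bit Regev-style toy encryption (symmetric)."""
    a = rng.integers(0, q, size=n)
    return a, (a @ s + rng.integers(-2, 3) + bit * (q // 2)) % q

# Initialization: the sender encrypts the database once (in the real
# protocol it also signs every ciphertext and proves well-formedness).
db_bits = [1, 0, 1, 1]
db_ct = [enc(b) for b in db_bits]

# Transfer: the receiver wants index rho without revealing it.
rho = 2
mask = int(rng.integers(0, 2))       # one-time-pad bit
a, b = db_ct[rho]
smudge = int(rng.integers(-2**16, 2**16 + 1))    # smudging noise
query = (a, (b + smudge + mask * (q // 2)) % q)
# ... plus a ZK proof: "this re-randomizes a signed database ciphertext
#     (and my credential satisfies its access policy)".

# The sender obliviously decrypts the masked query.
a, b = query
r = (b - a @ s) % q
masked_bit = int(q // 4 < r < 3 * q // 4)        # round to 0 or q/2
# ... plus a ZK proof that this decryption is correct.

# The receiver removes its mask and recovers the database entry.
print(masked_bit ^ mask == db_bits[rho])         # True
```

The point is that the sender only ever sees the masked bit, while the receiver only learns the entry it asked for.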
Questions and comments, anyone? We definitely have time for that. Maybe I will ask one, a basic question first: do you need to encode the whole database first? Yes. Actually, it is done only once, in the initial processing. There is this initialization phase, which has a linear cost in the size of the database, because, in the end, what we want is a transfer phase which is quite efficient. So it is kind of a trade-off: we pay in the first phase, which then allows efficient transfers.

OK, so maybe one more general question: how do you see how to generalize the whole design? I noticed you also use a standard signature with efficient protocols, and, I mean, the general question is, how compatible are those things? Can I plug in another, for example, signature with efficient protocols and have it take care of the access control part? In the end, I think yes, but the main thing is, I mean, we still have to be careful about one thing: everything plugs together, and plugs together well, because the zero-knowledge proofs are compatible with everything. So, basically, if you want to change something, you also have to ensure that the zero-knowledge layer stays compatible with it; except for that, yes. Thank you.

So, further questions from anyone? If not, then let us thank the speaker again, and this concludes today's session.