Alright everyone, welcome to the session on functional and homomorphic cryptography, and let's start right in. Our first talk is on multi-key homomorphic authenticators. It's by Dario Fiore, Aikaterini Mitrokotsa, Luca Nizzardo and Elena Pagnin, from IMDEA and Chalmers, and Luca will give the talk.

Okay, thank you for the introduction. As Sara said, this is joint work with Dario Fiore from IMDEA and Aikaterini Mitrokotsa and Elena Pagnin from Chalmers University. First of all, let me describe the problem of homomorphic authentication. We have a user, Alice, who has some messages, and then we have the cloud. Alice wants to outsource her messages to the cloud, and later on there is another party, Bob, who wants to compute a function over the messages that Alice has stored in the cloud. The cloud answers back with the result of the computation, and Bob wants a way to be sure that the result was computed correctly. The solution is: we give Alice a secret key with which she can authenticate her messages, so that the messages and the authenticators are stored in the cloud, and then we provide the cloud with a "magic machine" that uses an evaluation key to produce an authenticator for the output of the function, starting from the authenticators provided by Alice. This authenticator for the output can then be attached to the output itself, and Bob, who holds a verification key, can verify the computation and either accept or reject. As you can see, there are two kinds of homomorphic authenticators: those with public verification, in which case we talk about homomorphic signatures, introduced and formalized by Boneh and Freeman in 2011; and those with secret verification, in which case we speak of homomorphic MACs, introduced by Gennaro and Wichs at Asiacrypt 2013.
From now on I may mix up the terms authenticator and signature, but since the only difference between signatures and MACs is whether verification is private or public, that's not a big deal. If we want to be a bit more formal, we can think of a homomorphic authenticator as a tuple of polynomial-time algorithms. The first one is key generation, which takes a security parameter and outputs a triple of keys: a secret key, an evaluation key and a verification key. Then we have an authentication algorithm, which takes the secret key, a message identifier i and a message, and outputs an authenticator for the message. We have an evaluation procedure, the magic machine we saw before in the picture, which takes the evaluation key, the description of a function and a bunch of authenticators, and outputs an authenticator for the output of the function over the previously authenticated data. And in the end we have a verification procedure which takes a verification key, the function, a message and a signature that is supposed to authenticate the message as the output of the function, and either accepts or rejects. So which properties do we want from such a cryptographic tool? First, we want correctness: if we feed the evaluation procedure with authenticators that have been computed honestly, then the output of the evaluation procedure has to pass verification. Moreover, we want security: intuitively, without the secret key an adversary cannot produce a valid authenticator for a false result of the function. And last but not least, we want succinctness: the size of an authenticator has to be much smaller than (say, logarithmic in) the size of the dataset. So what's the problem? This is a cryptographic tool which is widely used and studied, and there are many constructions, but we have a problem.
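To make the syntax just described concrete, here is a minimal Python sketch of the four-algorithm interface. All names and types are illustrative stand-ins of mine, not from the paper; the bodies are left unimplemented since they are scheme-specific.

```python
from typing import Callable, List, Tuple

# Illustrative type aliases for the talk's objects.
Key = bytes
Authenticator = bytes
Message = int

def key_gen(security_parameter: int) -> Tuple[Key, Key, Key]:
    """Output a key triple: (secret key, evaluation key, verification key)."""
    raise NotImplementedError  # scheme-specific

def auth(sk: Key, identifier: int, message: Message) -> Authenticator:
    """Authenticate `message` under message identifier i, using the secret key."""
    raise NotImplementedError

def evaluate(ek: Key, f: Callable, auths: List[Authenticator]) -> Authenticator:
    """The 'magic machine': derive an authenticator for the output of f
    over previously authenticated data."""
    raise NotImplementedError

def verify(vk: Key, f: Callable, message: Message, sig: Authenticator) -> bool:
    """Accept or reject `message` as the output of f over authenticated data."""
    raise NotImplementedError
```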
The problem is that, so far, the evaluation procedure deals only with authenticators that have been produced with a single key. The question we ask ourselves, and start answering in this paper, is: how can we authenticate a computation which takes inputs from multiple users? Let me describe the situation pictorially, since I think it's easier to understand. We have a bunch of clients, each with his own messages, and then we have the cloud. All the clients want to store their data in the cloud, and then, as before, there is Bob, who wants to compute a function whose inputs come from the different users. Again the cloud has to provide the output of the computation, and Bob wants to be sure that the output is correct. The first solution we can think of is to give the same secret key to all the users, so that they can all authenticate their messages with that key and store all the authenticators in the cloud. Then the magic machine performs the evaluation procedure, takes the signatures of messages 1, 2 and 3 from the first, second and third user respectively, outputs an authenticator for the result and sends it to Bob, and Bob, with the verification key, either accepts or rejects. So here nothing changes with respect to the single-user framework, which is the problem. The problems are mainly two. First, from Bob's point of view the three users look like one and the same user; he cannot distinguish them, which is both a practical and a philosophical concern. Moreover, if even one of the users is corrupted, then the whole system is compromised. So what is the solution we came up with?
The intuition is that we give each user his own secret key, so that each of them can authenticate his messages with his own color. When Bob wants to compute the function, the cloud can give him the result of the computation through a special procedure which now takes another key, one that carries the three colors, one per user, and can combine authenticators from different users in order to get an authenticator for a computation over inputs from different users. Bob, with a multi-color verification key, can then either accept or reject the result. What happens here if someone is corrupted? Assume the last client is corrupted. We can still compute f over the inputs provided by the non-corrupted users, the whole process runs again, and Bob, with the same verification key, either accepts or rejects the result. Let me summarize our contribution, which is basically an outline of the talk. First, we provide the first suitable definition of multi-key homomorphic authenticators. Second, we provide the first construction of multi-key homomorphic signatures, which are publicly verifiable; these two points will be the core of the talk. Finally, we provide the first construction of a multi-key homomorphic message authentication code, which I will not have time to cover in this talk. First of all, let's analyze the primitive. We want to deal with computations that take data from different users, and how do we intuitively handle this? If we consider the circuit describing the function we want to compute, we partition the inputs, assigning them to the different clients. How do we do that? We take each input wire of the circuit and we attach a special label which has two sub-labels.
The first sub-label identifies the client who provides the input, and the second identifies which message is going to be fed into the circuit. I want to stress this point: each label now identifies both an identity, that is a user, and a message. So how can we describe the primitive more formally? We have a setup phase that, on input a security parameter, outputs some public parameters. We have a key generation algorithm which takes the public parameters and outputs a key triple: a secret key, an evaluation key and a verification key. You can notice that the keys are white here and not colored like before. That's because the key generation procedure, and this is one of the key points of our construction, can be run independently by the users: each user can generate his own keys and use them to sign and to evaluate. Then we have an authentication procedure which takes a secret key, a label and a message and outputs an authenticator; an evaluation procedure that now takes the description of a function, a bunch of authenticators and an evaluation key which depends on the users involved in the computation of f, and outputs an authenticator certifying that the output of the function is correct; and a verification procedure which takes the function, a bunch of labels along with the verification keys of the different users involved in the computation, a message and a signature, and accepts or rejects the message after checking the validity of the signature. So again, which properties do we want to achieve? First, we want authentication and evaluation correctness: similarly to what I told you before, any output of the authentication algorithm, and any output of the evaluation algorithm over honestly computed authenticators, has to pass verification. And then we have succinctness again.
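The two-part labels just described can be sketched in a couple of lines of Python. The class and field names here are hypothetical, purely to illustrate how a labeled computation identifies both users and messages:

```python
from dataclasses import dataclass

# Each input wire of the circuit carries a label with the two
# sub-labels described above (names are illustrative).
@dataclass(frozen=True)
class Label:
    client_id: str   # which user contributes this input
    input_tag: int   # which of that user's messages feeds the wire

# A computation over inputs from two users is then described by the
# function together with its labeled input wires (example values):
labeled_inputs = [Label("alice", 1), Label("alice", 2), Label("bob", 1)]
identities = {lab.client_id for lab in labeled_inputs}
assert identities == {"alice", "bob"}  # labels expose the identities involved
```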
But here, we define succinctness as the requirement that the size of the authenticator be logarithmic in the size of the dataset and linear in the number of identities involved in the computation. It could sound like a weird requirement, but if we consider computations involving a few users, each with many inputs, this succinctness definition is pretty good. Why am I stressing succinctness so much? Because without the succinctness requirement, building this kind of homomorphic authenticator becomes trivial. How? Imagine the cloud holding a bunch of messages and authenticators from different users, and imagine Bob wanting to compute, as always, a function over messages from different users. The cloud can simply collect the messages, with their respective authenticators, that Bob wants to use for the computation, and send everything to Bob. Bob can separately check, using the different verification keys, whether each message and signature passes the verification procedure, and then compute the function on his own. What is left is modeling security. To model security, we now assume the cloud is malicious. The framework is the same as before: the cloud has access to the magic machine which can evaluate signatures from different clients, and it can make queries to the different users, obtaining authenticators on messages of its choice. It can of course repeat this multiple times, so the cloud ends up with a bunch of messages and a bunch of authenticators; it can then choose a function and some messages, and claim that it computed the function over previously authenticated inputs.
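The trivial, non-succinct solution just described can be sketched in a few lines. This is a deliberately naive construction for illustration only; the HMAC-based authenticator and all names are my own stand-ins, not the paper's scheme. The cloud simply forwards every input with its tag, and Bob verifies each one and recomputes f himself:

```python
import hmac, hashlib

def tag(sk: bytes, label: int, message: int) -> bytes:
    """Plain MAC over (label, message); no homomorphism at all."""
    data = label.to_bytes(8, "big") + message.to_bytes(8, "big")
    return hmac.new(sk, data, hashlib.sha256).digest()

def trivial_eval(f, inputs):
    """'Evaluation' just forwards every (label, message, tag) triple."""
    return list(inputs)

def trivial_verify(sk: bytes, f, claimed_output: int, sigma) -> bool:
    """Bob re-checks each tag, then recomputes f on his own."""
    for label, message, t in sigma:
        if not hmac.compare_digest(t, tag(sk, label, message)):
            return False
    return claimed_output == f([m for _, m, _ in sigma])

sk = b"k" * 32
inputs = [(i, m, tag(sk, i, m)) for i, m in enumerate([3, 1, 4])]
f = sum
sigma = trivial_eval(f, inputs)
assert trivial_verify(sk, f, 8, sigma)      # correct result accepted
assert not trivial_verify(sk, f, 9, sigma)  # wrong result rejected
# |sigma| grows linearly with the dataset: this is why succinctness matters.
```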
Intuitively, what we want is that if the malicious cloud cheats, that is, it provides a false result of the function over previously authenticated data, then the signature it provides to Bob is not going to pass verification. Now, what happens if one of the users is corrupted? We model this eventuality by giving the cloud the secret key of the corrupted user, and what we want is that if the cloud chooses the function, and chooses messages which belong to non-corrupted users (that is really important), then it cannot cheat, where cheating means getting a false result authenticated. Now we can ask ourselves: why do we not allow a corrupted user to contribute inputs to the computation? Why don't we consider it a forgery when the cloud computes false results over data that comes from corrupted users? Well, because if the choice of the function is up to the cloud, the cloud is then able to make Bob accept wrong results. We can see this with a simple example. Take the simplest function we can imagine, a single gate, and imagine that one of its two inputs is provided by the green user, who is corrupted. Then the cloud is able to authenticate whatever result of the function it wants. So let's move on to the first construction of multi-key homomorphic signatures that we propose; just to ease the presentation, I will consider the simple case of two users. In our scheme the public parameters are: a message space X which is Boolean; a matrix space Z_q^(n x m); another matrix space U, consisting of square matrices over Z with infinity norm at most beta; a dataset of size T; and two identities, since we are speaking of a toy example.
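The security experiment just described can be rendered schematically as follows. This is my own loose interpretation of the game, not the paper's formal definition; all names are illustrative. The adversary (the cloud) asks authentication queries, may corrupt users, and finally outputs a claimed computation; inputs labeled by corrupted users are excluded from counting as forgeries, for the reason given above:

```python
# Schematic security experiment (illustrative, not the formal definition).
queried = {}          # label -> message the honest users authenticated
corrupted = set()     # identities whose secret key leaked to the cloud

def is_forgery(f, labels, claimed_output, verifies: bool) -> bool:
    """The adversary wins if a *wrong* result verifies, where every input
    is labeled by a non-corrupted user and was actually authenticated."""
    if not verifies:
        return False
    if any(ident in corrupted for ident, _ in labels):
        return False  # corrupted users' inputs are excluded, as argued above
    honest_inputs = [queried[l] for l in labels]
    return claimed_output != f(honest_inputs)

# Example: "green" authenticated 1 under tag 1, "purple" 0 under tag 2.
queried = {("green", 1): 1, ("purple", 2): 0}
labels = [("green", 1), ("purple", 2)]
assert is_forgery(lambda xs: xs[0] + xs[1], labels, 5, verifies=True)
assert not is_forgery(lambda xs: xs[0] + xs[1], labels, 1, verifies=True)
```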
Key generation consists in generating a lattice trapdoor, which will be the secret key; an evaluation key, which is a matrix A taken at random in Z_q^(n x m); and a verification key, which is the same A as in the evaluation key, along with T matrices V_1, ..., V_T, the intuitive idea being that we have one V_tau for each element of the dataset. As for authentication, we take a message x and a label l, which again is made of two sub-labels, and we output a user-related matrix U_id in the space U satisfying the equation A_id * U_id + x*G = V_tau, where G is a fixed public matrix with coefficients zero and one. So where do we use the trapdoor? We use the trapdoor to invert V_tau - x*G and find a U with small coefficients such that the equation is satisfied. What about evaluation? Here we have to distinguish two cases. In the first case the two signatures in input are from the same user, and this is pretty simple: we apply the evaluation procedure from the 2015 paper by Gorbunov, Vaikuntanathan and Wichs (GVW15). For verification, since the two labels share the same identity, the green one, we check that A_greenID times the evaluated green authenticator, plus x*G, equals the output of an analogous evaluation procedure run over V_tau1 and V_tau2, which I am not going to detail. If instead the two signatures in input are from different users, we first have to expand the signatures, inserting a zero as a fake component for the identity we don't have: in the case of the green identity we put a purple zero as the second component, while in the case of the purple identity we add a green zero as the first component. Then we reason component-wise, applying the GVW15 evaluation procedure component by component.
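As a sanity check on the same-user case, here is a toy computation. This is emphatically not the real scheme: the dimensions are tiny, G is a random stand-in for the gadget matrix, and there is no trapdoor sampling (we simply choose U and define V so that the relation holds). What it does show is that the verification relation A*U + x*G = V combines additively, which is the linear core of the GVW15-style evaluation; the real scheme must additionally keep the coefficients of U small.

```python
import random

n, m, q = 2, 3, 97
random.seed(0)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[(a + b) % q for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(x, M):
    return [[(x * v) % q for v in row] for row in M]

A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
G = [[random.randrange(q) for _ in range(m)] for _ in range(n)]  # stand-in gadget

def fresh_sig(x):
    U = [[random.randrange(-1, 2) for _ in range(m)] for _ in range(m)]  # small norm
    V = matadd(matmul(A, U), scale(x, G))   # define V so that A*U + x*G = V
    return U, V

x1, x2 = 1, 1
U1, V1 = fresh_sig(x1)
U2, V2 = fresh_sig(x2)

# Addition gate: U* = U1 + U2 verifies against V* = V1 + V2 and x* = x1 + x2.
U_star, V_star = matadd(U1, U2), matadd(V1, V2)
assert matadd(matmul(A, U_star), scale(x1 + x2, G)) == V_star
```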
As for verification in this case, since we have two signatures which do not share the identity, we have two labels that do not share the identity as well, and so what we have to do is compute A_greenID times the green signature component, plus A_purpleID times the purple signature component, plus x*G, and check that this equals the evaluation procedure run over the V_tau_i. Regarding security, what we prove is that if the short integer solution (SIS) problem is hard, then our multi-key homomorphic signature scheme is weakly adaptively secure. I am not going to recall the short integer solution problem, since we have seen it many times in the past days. From this basic result of weak adaptive security we have some extensions. From SIS we go directly to weak adaptive security for a single dataset; in the random oracle model we get a variant with unbounded tag space; using a standard signature scheme, from the weakly adaptive, single-dataset setting we can go to the multiple-dataset framework; and using the homomorphic trapdoor functions proposed, again, in GVW15, we go from the weakly adaptive, multiple-dataset setting to adaptive security. So let me recap our contribution: we proposed the first suitable definition of multi-key homomorphic authenticators, and then the first constructions of a multi-key homomorphic signature and MAC. Let me also mention some recent related work; there are two papers. The first is a multi-key homomorphic signature scheme by Lai et al., which can be found on ePrint. The strong points of this paper are that they propose a stronger security notion and they go beyond the linear dependence on the number of identities, though under very strong assumptions. The other paper related to ours is by Derler and Slamanig, also on ePrint, entitled "Key-Homomorphic Signatures and Applications", which deals with homomorphism over the key space. Finally, we have some open problems that we want to underline.
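The cross-user verification equation can be checked numerically in the same toy style. Again this is a sketch under my own simplifying assumptions: random stand-in matrices, no trapdoors, hypothetical names. Each user's signature, once expanded with a zero block for the missing identity, contributes one per-identity product to the sum:

```python
import random

n, m, q = 2, 3, 97
random.seed(1)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(*Ms):
    # Entry-wise sum of any number of equal-shaped matrices, mod q.
    return [[sum(vals) % q for vals in zip(*rows)] for rows in zip(*Ms)]

def scale(x, M):
    return [[(x * v) % q for v in row] for row in M]

def rand(rows, cols, lo=0, hi=None):
    hi = q if hi is None else hi
    return [[random.randrange(lo, hi) for _ in range(cols)] for _ in range(rows)]

G = rand(n, m)                                  # stand-in gadget matrix
A = {"green": rand(n, m), "purple": rand(n, m)} # one matrix per identity

def fresh_sig(identity, x):
    U = rand(m, m, -1, 2)                       # small-norm component
    V = matadd(matmul(A[identity], U), scale(x, G))  # A_id*U + x*G = V
    return U, V

xg, xp = 1, 0
Ug, Vg = fresh_sig("green", xg)
Up, Vp = fresh_sig("purple", xp)

# Expanded signatures are (Ug, 0) and (0, Up); their component-wise sum is
# (Ug, Up), and the combined output verifies against V* = Vg + Vp, x* = xg + xp:
lhs = matadd(matmul(A["green"], Ug), matmul(A["purple"], Up), scale(xg + xp, G))
assert lhs == matadd(Vg, Vp)
```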
The first problem is to go beyond the linear dependence on the number of identities under weaker assumptions, and the second is to consider users colluding with the malicious cloud, not for general functions, since we saw that this cannot be achieved, but for particular classes of functions. That's the talk. Thank you for your attention.

Okay, we've got some time for questions. I'll start with a question of my own: could you maybe give a practical example? Yeah, okay. Let's assume that the users are sensors that have to send some data to a remote cloud, and the cloud has to compute some function over the data, and assume that at some point one of the sensors is compromised, or gets broken, or whatever. Then we do not have to reset the whole system: we just replace the sensor, give the new sensor a new triple of keys, and the system keeps working. Can you make it fully homomorphic? Do you have some kind of bootstrapping procedure for this? Yeah, our construction works in the leveled setting, as with leveled fully homomorphic encryption. We did not think about what you are mentioning, but we consider this a first step: it's the first proposal of a definition and the first proposal of a construction, and the hope is that in the future there will be many more constructions dealing with many more models than ours does. Any other questions? Can you pull up the algorithms? Yeah. So if I'm, for example, always collecting data from the same set of sensors, is there no way to combine the keys, like the evaluation keys, into a single one? Yeah, that's one of the open questions.
It's going beyond the linear dependence on the number of keys, because so far what we have is that if you group the sensors by district, for example, and you consider a district as a single sensor which sends multiple messages, then the succinctness definition works, because you have multiple messages authenticated with the same key. But how to combine different keys in order to reduce the number of keys is not really clear at the moment; I think it's going to be a direction for future work. All right, well, unless there are other questions, let's thank Luca again.