[Session chair] Hello, can you hear me? Testing, 1, 2, 3, 4. All right. Thank you all for coming to this session. It is my great pleasure and honor to introduce our invited speaker, Dr. Hoeteck Wee. Dr. Wee received his degree from MIT in 2002 and his PhD from U.C. Berkeley in 2007, and he is a researcher at ENS Paris. He has done a great deal of excellent work in cryptography. Please join me in welcoming him.

[Speaker] Thank you for the kind introduction. I'm very happy to be here; I first met some of you in Singapore at previous conferences, and it's wonderful to see cryptography in Asia growing. So I'll be talking about advances in functional encryption. Let me start right away with the talk. We all know that we now live in this era of big data, where we collect a lot of sensitive data about individuals: financial data, medical data, customer data, employee data. And because there's so much of this data, it is also often stored in the cloud, outside of our control. This raises serious concerns about privacy. And for privacy, we actually already have a solution from cryptography, namely the notion of encryption.
So what traditional encryption allows you to do is essentially to encrypt the data, which is very much like putting a lock on the data, so that anyone who doesn't have the key will not be able to learn anything about the data. Now the problem with this is that if you put this encrypted data in the cloud and you withhold the key from the cloud, then you lose utility: the cloud will no longer be able to compute on this data. So the question is, can we somehow resolve this tension between utility and privacy in some meaningful way? In particular, we want to find a way to encrypt data while at the same time being able to restrict access to the data and restrict the kind of computation external parties can do on the data. Right, let me proceed by example. The first half of the talk will mostly be about restricting access, and then the last maybe one third of the talk will be about restricting computation. So let me start with an example. This is one of my favourite examples; many of you have probably heard it. Imagine you want to date in this era of the internet and big data. The way online dating works is that you have a user who is interested in meeting someone who matches their dating preference. So they will sign on to some dating website, and when they do, they will also create a profile. In this dating profile, they will put all sorts of sensitive information about themselves: their pictures, their hobbies, their age, etc. So there's certainly an issue of privacy here. You really don't want just anybody to have access to all of this very personal information. Ideally, you want to be able to publish your profile on the dating website while at the same time limiting access to this profile. What does it mean to limit access to the profile? You really want only the people who satisfy your dating preference to be able to read your profile.
So for instance, a canonical example maybe is that you only want people who are tall, dark and handsome to be able to see your profile. Right, and well, amongst friends here, we know that a far better alternative is to look for people who have a PhD in computer science. So let's dispense with this tall, dark and handsome, and for the rest of the talk, let's focus on this example of having a PhD in computer science. This brings us to the notion of attribute-based encryption, which was put forth in the work of Sahai and Waters, and which basically says that when I encrypt a piece of data, say a message M, I want to be able to specify a policy on this message, say "PhD in computer science". The idea being that only people with a PhD in computer science will be able to read this message M, and everyone else should not be able to read it. So now you have these users in the system, and they are going to have some qualifications, or attributes. For instance, you have individuals with a PhD in computer science, some with maybe not a PhD but just a master's, and then there are people with PhDs in other fields. The way the dating website works, to continue with the previous example, is that when users register on the website, they get a key from the dating website. If you look at the picture, you see that the keys look a bit different: the keys are specially customized to correspond to the attributes that they have. So you have a key for someone with a PhD in computer science, a key for a master's in computer science, and so on and so forth. And the basic correctness requirement is that if somebody satisfies the policy that's on the message, then they should be able to decrypt the message, find out what the message M is, and in this case find true love on the internet. And otherwise, anyone else who doesn't satisfy this access policy should learn nothing about the message with the key that they have.
So if they put their key together with the ciphertext, they should learn nothing. Moreover, we actually want a stronger notion of security, where we want to protect against collusion. We don't want a bunch of, I guess, unemployed people sitting together after drinks on a Friday night, looking at their keys together, and suddenly, by mixing and matching the keys in some way, being able to decrypt profiles that they would otherwise individually not be able to decrypt. So this is a very useful and strong security requirement, and this is partly what makes attribute-based encryption so difficult to build. All right, so let's try to build such an attribute-based encryption scheme that supports this policy. Let's do a small warm-up. Here's what the scheme is going to look like. The way I describe it, it is actually going to be a symmetric-key encryption scheme (it's more of a warm-up), but you can turn it into a public-key encryption scheme. So let me tell you what the keys look like. In the system, we're going to create all these random strings: a random string for computer science, a random string for PhD, a random string for master's, and a random string for biology. And when a user signs onto the website, they're going to get the strings corresponding to all the attributes that they have. So the one with the PhD in computer science will get the random string for PhD and the random string for computer science, and so on and so forth. So how does encryption work? If you want to say that only people with a PhD in computer science will be able to decrypt your message, then you take your message, you one-time pad it with the random string for computer science, and then one-time pad it with the random string for PhD. So again, this is actually a symmetric encryption scheme. Now let's check that this scheme satisfies correctness.
Indeed, someone with a PhD in computer science can easily undo the one-time pads and recover the message M. And each of the two other individuals in the system is going to be missing one of these random strings, so basically security follows from the security of the one-time pads. Now let's see what happens when there's a collusion. Suppose these two people come together. What happens? Well, it's not hard to see that suddenly they will be able to decrypt the message, even though each of them individually cannot decrypt the message. In particular, the reason for this is a class of attacks that we call mix-and-match attacks, which is exactly what makes collusion so hard to deal with. What this mix-and-match attack does is the following: it turns out you can take two keys that are not authorized to decrypt the message, mix them in some way, and combine them in a different way to create some other matching key that can actually decrypt the message. In this case, the attacker is essentially taking the part of the key corresponding to computer science from the one with the master's, and the part of the key corresponding to PhD from the other, combining these two together, mixing and matching them, and creating a key for a different set of attributes that actually qualifies to decrypt the message. Right, so this means that this scheme is insecure against collusions. In general, any scheme that's susceptible to a mix-and-match attack is going to be insecure against collusions. All right, so let me tell you how we solved this problem, how we defeat these mix-and-match attacks to construct the first attribute-based encryption scheme for a very large class of functions. In fact, we can get attribute-based encryption schemes that support all efficiently computable policies, namely policies that are computable by arbitrary circuits. This is joint work with Sergey Gorbunov and Vinod Vaikuntanathan, and the key idea in this work is very simple.
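Before moving to the fix, the warm-up scheme and its mix-and-match break can be sketched in a few lines of Python. This is purely illustrative: the attribute names, message, and AND-only policy encoding are placeholders, and the scheme shown is exactly the insecure one just described.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

MSG_LEN = 16

# One random string ("one-time pad") per attribute.
pads = {attr: secrets.token_bytes(MSG_LEN)
        for attr in ["CS", "PhD", "Masters", "Biology"]}

def keygen(attrs):
    # A user's key is simply the pads for the attributes they hold.
    return {a: pads[a] for a in attrs}

def encrypt(msg: bytes, policy):
    # AND policy "CS and PhD": pad the message with both strings.
    ct = msg
    for attr in policy:
        ct = xor(ct, pads[attr])
    return ct

def decrypt(ct: bytes, key, policy):
    for attr in policy:
        ct = xor(ct, key[attr])
    return ct

msg = b"meet me at eight"
policy = ["CS", "PhD"]
ct = encrypt(msg, policy)

# Correctness: a CS PhD decrypts.
alice = keygen(["CS", "PhD"])
assert decrypt(ct, alice, policy) == msg

# Mix-and-match collusion: a CS master's holder and a biology PhD holder
# pool the CS pad from one key and the PhD pad from the other.
bob   = keygen(["CS", "Masters"])
carol = keygen(["PhD", "Biology"])
forged = {"CS": bob["CS"], "PhD": carol["PhD"]}
assert decrypt(ct, forged, policy) == msg   # the collusion succeeds
```

The forged key decrypts even though neither colluder is individually authorized, which is precisely the mix-and-match attack.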
Instead of working with strings, we want to work with functions. The main distinction between functions and strings is that, in some sense, a string is one-use: if you have a string, you can either give it out or withhold it, so you can only use it once, whereas a function is many-use. Think of a function as essentially a collection of an exponential number of strings: whenever you evaluate the function at a new point, you get a new string. The way to think about this function is as something like AES with a fresh AES key, so when you evaluate it at different points, you get a number of independent strings. So a function is in that sense many-use, unlike a string, which is only one-use. Now let's see how the scheme is going to work. Basically, like I said, we're going to replace strings with functions. So instead of having a string for computer science, a string for PhD, a string for biology, and so on, we're going to have a function corresponding to computer science, a function corresponding to PhD. The way to think about having a function is as having a key for that function. So how is the key for each of these individuals going to work? Whenever you generate a key for an individual, you're going to take these functions and evaluate them at some random point, where the random point is chosen afresh every time you generate a new key. So the first individual on the right will get both of these functions evaluated at the same random point S. The next one will get both of these functions evaluated at some new random point T, and the third one will get them evaluated at a fresh random point U. Right, unfortunately encryption is going to be a little bit more delicate. It won't be exactly like the one-time pad from before, but you just have to trust me that you can design the ciphertext in a special way so that you still have correctness.
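The key-generation side of this "functions instead of strings" idea can be sketched as follows, instantiating each attribute's function with HMAC as a stand-in PRF (the AES analogy from above would work equally well). Encryption is omitted, since as just noted it needs a more delicate design.

```python
import hmac
import hashlib
import secrets

def F(fn_key: bytes, point: bytes) -> bytes:
    # A PRF: evaluating at a new point yields a fresh pseudorandom string.
    return hmac.new(fn_key, point, hashlib.sha256).digest()

# One *function* (a PRF key) per attribute, instead of one fixed string.
fns = {attr: secrets.token_bytes(32) for attr in ["CS", "PhD"]}

def keygen(attrs):
    s = secrets.token_bytes(16)       # fresh random point for this user's key
    return {a: F(fns[a], s) for a in attrs}

alice = keygen(["CS", "PhD"])   # both components tied to Alice's point s
bob   = keygen(["CS", "PhD"])   # both components tied to Bob's point t

# Mixing Alice's CS component with Bob's PhD component yields evaluations
# at two *different* points; the scheme is designed so that such
# mismatched evaluations are useless for decryption.
assert alice["CS"] != bob["CS"]
assert alice["PhD"] != bob["PhD"]
```

Components inside one key share a point; components pooled across keys do not, which is what defeats mixing and matching.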
The reason why correctness is a bit harder here is that we want correctness to hold as long as the individual gets the functions evaluated at the same point. So if they're both evaluated at the same point S, you can decrypt; if they're both evaluated at the same point S prime, you can also decrypt. And when you're doing your encryption, you don't know what this point S or S prime is going to be, so you have to work a lot harder to get correctness that works for all S and S prime. But trust me, this can be done. All right, so it turns out this construction, as you can see, is in some sense no longer susceptible to the mix-and-match attack that we saw before. In particular, if you try to do what we did earlier, namely take the first half of the key from the second individual and the second half of the key from the third individual, you will see that you actually have functions evaluated at two different points. And we will design the scheme in such a way that we can show that if you have the functions evaluated at different, independent points, then they're going to be in some sense useless. So this sort of shows that the mix-and-match attack doesn't work anymore. But mix-and-match attacks are only one class of collusion attacks; there could potentially be other attacks. So what we formally show is a proof of security showing that this scheme is in fact secure against collusions. Now, the nice thing about the scheme is that the way I described it, I described it for a single AND gate. What's very nice is that the scheme actually composes well. Think back to your Complexity 101 class: you know that you can take any circuit and represent it as a bunch of gates. So what will happen is that if you want to generalize this scheme to an arbitrary circuit, you will basically look at the circuit represented as a bunch of gates.
And for each of the wires in the circuit, you're going to pick a new random function, and then we will construct these functions in a way that they glue well together, so that you can go from one gate to the other and connect everything together. Let me say that prior to this work, we only knew how to construct attribute-based encryption schemes where the policy comes from the class of NC1 circuits. Think of these as very simple functions: you can think of them either as Boolean formulas or as small-depth circuits, circuits of logarithmic depth. Right, and the security of our scheme relies on the learning with errors assumption, one of these lattice assumptions, which basically says that solving a random linear system with noise is hard. Okay, I want to give you a sense of why working with lattices is so beneficial, why lattices are so powerful. Prior to this work, we only knew how to achieve NC1 circuits, and most of those constructions were based on bilinear maps. I want to give you a sense of what's so different about lattices, what sort of structure they have that bilinear maps don't seem to have. To give you a sense of why lattices are so useful, let me tell you what this function is going to be. The function is going to be specified by a matrix A. Think of this as a wide and short matrix. The input of the function is going to be a vector. Think of all of these as matrices and vectors with entries over Z_q for some modulus q. So the output of the function A on input p is a vector u such that A times u equals p. It turns out we actually want this function to be hard to compute, otherwise the adversary can forge keys. And in general, given A and p, finding u such that A times u equals p is going to be easy via Gaussian elimination.
So the way we make it hard is to require that you need to find u with small entries. In fact, the function is going to be a randomized function: it's going to output a random u such that A times u equals p. So this is the function that we're going to work with. This function has already been considered before in the literature. The main twist that we made in this work is to look at this function and realize that you can stretch it: instead of the input being a single vector p, you can think of the input as being a matrix. So you stretch this vector p to form a matrix P, and the output of the function is now, instead of a single vector, going to be a matrix U. What you want is a matrix U, again with small entries, such that A times U equals P. So what did we gain by stretching this vector into a matrix? What we gained is that A and P now have the same dimensions, and they're essentially the same kind of object: they're matrices of the same dimensions over the same underlying modulus, right? So now you can actually treat this matrix P as a new matrix that describes a new function. So instead of calling this matrix P, I'm going to call it A2, and now I can consider a new function A2 that takes as input A3 and outputs some U prime such that A2 times U prime equals A3. Now A2 and A3 again have the same dimensions, so you can go on and on and on, and this fact that you can move a matrix on the right to a matrix on the left is essentially what allows us to glue all these functions together. In particular, this is a structure that we don't have in bilinear groups.
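Here is a toy numpy illustration of the composition property just described. One caveat: the real scheme samples a random small-entry preimage of a *given* target using lattice trapdoors; to keep this sketch self-contained, we instead pick the small matrices first and define the targets from them, which exhibits the same algebra.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 3329, 4          # toy modulus and dimension (far too small for security)

A1 = rng.integers(0, q, size=(n, n))

# A preimage with small entries: U1 in {-1, 0, 1}^{n x n}.
U1 = rng.integers(-1, 2, size=(n, n))
A2 = (A1 @ U1) % q      # treat the "output target" A2 as a brand-new function

U2 = rng.integers(-1, 2, size=(n, n))
A3 = (A2 @ U2) % q      # and chain again: A2 maps to A3 via small U2

# Gluing: the product U1 @ U2 is itself a small-entry preimage of A3
# under the *first* matrix A1, so evaluations compose across gates.
assert np.array_equal((A1 @ (U1 @ U2)) % q, A3)
assert np.abs(U1 @ U2).max() <= n   # entries stay small (bounded by n)
```

The point is that the object on the right of one equation (A2) reappears on the left of the next, and the small preimages multiply into another small preimage, which is exactly the "moving a matrix from the right to the left" structure that bilinear groups lack.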
Roughly speaking, what happens in bilinear groups is that once you do a single computation, you end up in a different group, and you have to use your pairing; and once you use the pairing, you won't be able to continue the computation, right? Okay, so this gives you a sense of what our basic attribute-based encryption scheme looks like. Now I want to go back to the earlier example that I gave and revisit it. We said that using attribute-based encryption we can protect the privacy of our profile, but the problem is that the way attribute-based encryption schemes work, they actually don't provide any privacy for the policy. In particular, your dating preference is always going to be public, and there are many settings where your dating preference could by itself already reveal a lot of sensitive and personal information about yourself: the fact that you may be gay, or that you like kosher food or celebrate Rosh Hashanah, which could say something about your background. So ideally we really want a system that can provide privacy for your dating preference as well. In particular, we want a stronger notion, and this is going to be an object that we call predicate encryption, which guarantees that a collection of users who are not able to access your profile will also not be able to learn anything about your dating preference, other than, of course, the fact that they don't satisfy your dating preference. So this is the stronger notion of security that we want to aim for, and in work again with the same co-authors, we show that you can realize this stronger security notion, namely predicate encryption, again for the class of all circuits, and again from the same assumptions as before, so from standard assumptions about the hardness of problems in lattices. I want to give you a sense of how this construction goes. So we have a profile, and we have some access policy C, and our goal is to hide this policy C.
The first idea is, well, note that we want to hide this policy C, but at the same time we need to know some information about the policy C, because we need to be able to check whether we satisfy the policy. So we still need to be able to compute on this policy. So if we want to hide something while being able to compute on it, what can we do? Well, the first natural idea is to use a fully homomorphic encryption scheme. In fact, this idea already came up in a couple of prior works on related topics. So what we do is encrypt the policy, the circuit C that we want to hide, with, let's say, a symmetric-key FHE scheme. So there's some key K that we encrypt under, and this is what you do during encryption; and in addition, you will in some way have to provide the key K, otherwise the other party will learn nothing whatsoever about the circuit C. So what did we gain from doing this layer of FHE encryption? What we gain is that, while we have this key K and the FHE ciphertext, the FHE ciphertext no longer has to be protected, because it's protected by the security of the FHE scheme. So instead of having to protect an arbitrary circuit C, we only need to protect the key K. So did we really gain anything? We still have to protect something. It turns out we do, because when we want to protect a circuit C, we have to be ready to protect any arbitrary computation corresponding to the circuit C; whereas when it comes to protecting the key K, it turns out we only use the key K for a very specific computation, namely FHE decryption.
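To see why FHE decryption is such a simple computation, here is a toy Regev-style symmetric encryption of a single bit. This is an assumption-laden sketch: in lattice-based FHE schemes of the kind the construction has in mind, bit decryption has essentially this shape, a single inner product with the secret key followed by rounding, and the parameters below are toy-sized, chosen only so that correctness holds.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n = 2**12, 32

s = rng.integers(0, q, size=n)          # secret key

def enc(bit):
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-4, 5))        # small noise
    b = (a @ s + e + bit * (q // 2)) % q
    return a, b

def dec(ct, s):
    a, b = ct
    # Decryption is one inner product with the key, then rounding:
    # v is close to 0 for bit 0 and close to q/2 for bit 1.
    v = (b - a @ s) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

assert dec(enc(0), s) == 0
assert dec(enc(1), s) == 1
```

Protecting this one fixed inner-product-and-round computation is far easier than protecting an arbitrary circuit, which is exactly the gain from the FHE layer.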
So now, instead of having to protect an arbitrary computation, we only need to protect a very specific computation. And it turns out that in existing FHE schemes, the computation that's done with the secret key K is an extremely simple function: it essentially corresponds to just computing an inner product. Constructing a scheme that protects this very simple function turns out to be much easier, and this basically solves our problem of trying to protect the circuit C. All right, so that's probably one of the more technical constructions in this talk. So far I've told you how to build expressive attribute-based encryption schemes and predicate encryption schemes from lattices, where by expressive I mean we can support very complex policies; in particular, we can support policies corresponding to all circuits. In the next part of the talk, I want to also give you a sense of some of the constructions that are based on bilinear maps. The schemes that we get are not going to be as expressive as the ones from lattices that I alluded to earlier: these schemes will not be able to support the class of all circuits; rather, they can only support smaller classes of circuits. But what's nice about these schemes is that they're going to be more efficient, and they're actually going to satisfy stronger notions of security while being reasonably efficient. In particular, from bilinear maps we have constructions of efficient attribute-based encryption for these shallower circuits under standard assumptions, like DDH on both sides of the pairing group, and the schemes satisfy very nice properties: they are fairly efficient, and they satisfy a very strong notion of adaptive security. What adaptive security means is that the adversary is allowed to pick the collusion and the policy arbitrarily, in an adaptive fashion.
However, there is a price that we pay for this efficiency and strong adaptive security, namely that these schemes often come with extremely complex security proofs. The reason the proofs have to be so complex is that attribute-based encryption schemes are very complex objects, and they provide very strong security guarantees. In particular, they are public-key primitives; they provide a strong notion of many-time security, where many-time corresponds to the fact that the adversary can get many secret keys; and, in this particular case, like I said, they satisfy a strong notion of adaptive security. So for the next couple of minutes, I want to tell you about a series of results that shows how to get these very powerful and very efficient attribute-based encryption schemes without paying the price of these complex security proofs. In particular, what this series of works shows is that instead of worrying about this extremely complex attribute-based encryption scheme, we can start by focusing on a very simple object. Think of this as an object that is in fact information-theoretic. It's basically going to be very much like an attribute-based encryption scheme, except all of the difficulties are taken out. So instead of being a public-key primitive, this is going to be a private-key primitive, so the adversary doesn't get to see a public key. Instead of requiring many-time security, this primitive only needs to satisfy one-time security. And instead of requiring adaptive security, this primitive only needs to satisfy non-adaptive security. So what this series of works shows is that there is a compiler that goes from this much simpler primitive, which is much easier to construct, and produces a full-fledged attribute-based encryption scheme.
This comes at a cost: it requires some computational assumption. Namely, it only works with bilinear groups, but under standard assumptions in bilinear groups. So in particular, the outcome of this line of works is that you get simpler proofs, you get unified proofs of security, and you get improvements over prior works in terms of concrete efficiency. Now, I'm not going to have the time to really describe these results in their entirety. Still, I want to give you a sense of how this compiler works, and I'm going to focus on property one: I want to give you a compiler that goes from a private-key primitive to a public-key primitive. Right. So I'm going to be a bit vague about what this private-key primitive is going to be; you can think of it as a MAC, or a symmetric-key, that is, private-key, encryption scheme. What the compiler does is start with a scheme that's a private-key scheme and produce a public-key scheme, and security of the public-key scheme will essentially come from security of the private-key scheme. So we start with a private-key scheme, and think of this private-key scheme as generating some private key; think of the private key as just a collection of scalars in Z_q. So what would the new scheme look like? I'm going to tell you what the public key is going to look like, and give you a sense of how you can relate the security of this public-key scheme to the security of the private-key scheme. In the new scheme, I'm going to tell you how to sample the public key, and the sampling algorithm works as follows: instead of sampling a bunch of scalars, I'm going to sample a bunch of vectors. The dimension of the vectors roughly depends on the assumption we're going to work with, but the length of these vectors is essentially going to be a constant if you're working with DDH. So what is the public key going to look like?
In the public key, the public key will contain a matrix A. Think of the matrix A as specifying your underlying assumption: this A maybe specifies the DDH assumption, and in general it specifies whatever subgroup-style assumption you want to work with. So the public key is going to contain A together with A times w_i for each of these vectors w_i. And in fact, the new scheme is going to work under the DDH assumption, so all of these values that you publish will in fact be in the exponent of some cyclic group. Now, the first observation I want to make is that this vector w_i actually still has entropy given the public key. That's because A is compressing: the vector A times w_i is shorter than the vector w_i. So you can imagine that w_i still has entropy, and this entropy is what we are going to use in our security reduction. So formally, what is this entropy? If you pick a random vector c, with high probability this random vector c is not going to lie in the row span of A, and it's easy to see that this means that, given A times w_i, the value c times w_i is uniformly random. This will be your hidden entropy. In your security reduction, you're going to rely on the security of the underlying private-key primitive, so you're going to need to create a key for the underlying private-key primitive, and the key for this underlying primitive is going to be this hidden entropy. So you're going to create a private-key scheme where the secret-key scalar is going to be the vector c times the vector w_i. More generally, the way I described this compiler works well for certain applications; think of it as a warm-up, for when you don't need the full-fledged power of bilinear maps. But for bilinear map schemes, the compiler is going to be more complicated, roughly speaking because you need to reveal something about this w_i on both sides of the pairing group.
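Before moving to the bilinear case, the hidden-entropy observation in the vector setting can be illustrated with a small numpy sketch, working over Z_q directly rather than in the exponent. The DDH-style shape (A a 1-by-2 matrix) and the tiny modulus are illustrative choices: two different vectors w are consistent with the same published A times w, yet a vector c outside the row span of A separates them.

```python
import numpy as np

q = 101                     # toy prime modulus
rng = np.random.default_rng(2)

A = rng.integers(1, q, size=(1, 2))      # DDH-style: 1 x 2, compressing
w = rng.integers(0, q, size=2)
pk = (A @ w) % q                          # what the public key reveals about w

# Any w' = w + t * z with z in the kernel of A yields the same public key:
z = np.array([A[0, 1], (-A[0, 0]) % q])   # A @ z = 0 (mod q)
w2 = (w + 3 * z) % q
assert np.array_equal((A @ w2) % q, pk)   # w and w2 look identical publicly

# But a vector c outside the row span of A separates them:
c = rng.integers(0, q, size=2)
while (c @ z) % q == 0:                   # keep c out of the row span of A
    c = rng.integers(0, q, size=2)
assert (c @ w) % q != (c @ w2) % q        # c . w retains hidden entropy
```

So given only A times w, the scalar c times w is still undetermined, and that is the entropy handed to the private-key primitive in the reduction.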
So in this case, instead of picking a vector, we're going to pick a matrix, all right? A square matrix, say. And then in the public key, we're going to publish A and A times W_i. In addition, we're going to pick some other matrix B, which corresponds to the subgroup assumption on the other side of the pairing; we will publish this matrix B, and in addition publish W_i times B. As before, we'll be able to show that W_i still has entropy given the public key. Now, in this setting the proof is going to be a bit more delicate, because the function is no longer compressing, but basically a variant of the proof from before works. The key observation here is that if you pick a random vector c and a random vector d, then with high probability c is going to lie outside the row span of A, and d is going to lie outside the column span of B. And you can then show that, given A times W_i and W_i times B, the value c times W_i times d is going to be uniformly random. This will be the hidden entropy that you use to create the key for the original private-key scheme in your security reduction. So I have now outlined this compiler. Roughly speaking, this is the compiler that was used to simplify the construction of attribute-based encryption schemes. And this compiler, and the more general concept of compiling a private-key primitive into a public-key primitive, turned out to have a bunch of other applications. In a series of works, we show that using this compiler technique, you can turn a pseudorandom function into an identity-based encryption scheme, provided the pseudorandom function has a nice algebraic structure. In particular, if we start with, say, a tightly secure PRF, we end up with a tightly secure encryption scheme, and that's how we constructed the first tightly secure IBE under standard assumptions. All right, and you will see another of these results later at this conference, on Wednesday.
Also using the techniques that were developed in this series of works, we showed that you can solve a problem that's not related to attribute-based encryption or identity-based encryption: earlier this year, we were able to construct the first CCA-secure encryption scheme with a tight security reduction that doesn't rely on pairings. Right, so this compiler can also be applied to other primitives. For instance, you can use it to get non-interactive zero-knowledge protocols: you can compile a designated-verifier non-interactive zero-knowledge protocol, which is a private-key object, into a publicly verifiable non-interactive zero-knowledge protocol. We also showed that using this paradigm, you can get simpler and more efficient constructions of structure-preserving signatures. Here, you can basically start by just constructing a structure-preserving MAC, and our compiler will turn this MAC into a signature scheme. All right, so that's roughly most of the technical part of the talk; so far I've given you a review of the state of the art for attribute-based encryption schemes from pairings and from lattices. Now let me talk about the bigger problem of functional encryption. In functional encryption, you have a piece of data D. Again, as before, you have keys corresponding to functions: for a function f1, you have a key for f1; for a function f2, you have a key for f2. And the basic correctness requirement is that if you have a key for f1 and you decrypt an encryption of D, you're going to compute f1 of D; with a key for f2, you compute f2 of D, and so on and so forth. Similarly to before, we have a strong collusion requirement, namely that if you have the keys for f2 and f3, you should learn nothing more than the union of what each of the individual keys gives you: you should learn nothing more than f2 of D and f3 of D.
So the big holy grail in the study of functional encryption is whether you can construct functional encryption schemes for all functions, where all functions means all circuits, the way we now have attribute-based encryption schemes for all circuits and predicate encryption schemes for all circuits. So I want to quickly review the state of the art. Now, surprisingly, we have an affirmative answer to this question if we are willing to use multilinear maps or obfuscation. But unfortunately, the status of multilinear maps and obfuscation is a little bit unclear. We don't really have very strong candidates. So we really want to focus on constructions that are based on standard assumptions, of which we have a better understanding. And for results of this kind, much less is known. For instance, we do have functional encryption under standard assumptions, but only for extremely simple functions, essentially functions that are related to computing inner products. And here we have constructions based on both bilinear groups and lattices. We can also relax the security guarantee to bounded collusion. What this means is that instead of the adversary being allowed to get an arbitrary number of secret keys, we consider an a priori bound on the number of secret keys that the adversary can get. Then we can construct a scheme that is secure as long as the adversary doesn't get more keys than this bound. The catch here is that the complexity of the scheme, the size of the public key, the size of the ciphertext, and so on and so forth, are going to grow with the size of this bound. And in this bounded-collusion setting, we can indeed get functional encryption for all functions, essentially under very standard assumptions: public-key encryption, plus slightly more structured public-key-encryption-type assumptions. So basically, we can get this from hardness of factoring, DDH, LWE, and so on and so forth.
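Inner-product functional encryption, mentioned as one of the simple functionalities achievable under standard assumptions, can be sketched concretely. The following is a toy rendition loosely in the style of the known DDH-based inner-product schemes; the group is tiny (an order-1019 subgroup of Z_2039^*) and all parameters are illustrative, so this is insecure and only meant to show the mechanics: a key for y recovers exactly the inner product <x, y> and nothing else.

```python
import random

# Toy parameters (real schemes use ~2048-bit groups).
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # safe prime: Z_p^* contains an order-q subgroup
g = 4                    # generator of that order-q subgroup

n = 3                                         # vector length
s = [random.randrange(q) for _ in range(n)]   # master secret key
mpk = [pow(g, si, p) for si in s]             # h_i = g^{s_i}

def encrypt(x):
    # ct0 = g^r, ct_i = h_i^r * g^{x_i}
    r = random.randrange(1, q)
    ct0 = pow(g, r, p)
    cts = [pow(h, r, p) * pow(g, xi, p) % p for h, xi in zip(mpk, x)]
    return ct0, cts

def keygen(y):
    # The key for y is simply the inner product <s, y> mod q.
    return sum(si * yi for si, yi in zip(s, y)) % q, y

def decrypt(sk, ct):
    sk_y, y = sk
    ct0, cts = ct
    num = 1
    for c, yi in zip(cts, y):
        num = num * pow(c, yi, p) % p
    val = num * pow(ct0, q - sk_y, p) % p     # equals g^{<x, y>}
    for m in range(q):                        # brute-force small dlog
        if pow(g, m, p) == val:
            return m
    raise ValueError("discrete log not found")

x, y = [2, 3, 5], [1, 4, 2]
ct = encrypt(x)
assert decrypt(keygen(y), ct) == 2*1 + 3*4 + 5*2   # <x, y> = 24
```

The final discrete-log step is why such schemes only recover inner products from a polynomially bounded range.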
Now, if we really focus on the setting of unbounded collusions and try to go beyond very simple functions, then basically the state of the art under standard assumptions is the result that I told you about earlier, which is predicate encryption for circuits. You can think of predicate encryption for circuits the way I described it, as attached to a message, but you can also think of it as a weak kind of functional encryption, a one-sided notion of functional encryption. What this means, roughly speaking, is: think about, for instance, functional encryption where your functions are Boolean, so each only outputs one bit. A predicate encryption scheme for such Boolean functions basically says that if you only get keys for which the function evaluates to false, you learn nothing about the data D; but if you get a key for which the function evaluates to true, we no longer provide any guarantee for the privacy of the data D. So you get this one-sided notion: you only get privacy of the data D when the functions evaluate to false, and no security guarantee whatsoever when a function evaluates to true. So let me now conclude with some open problems. I want to highlight that there are many, many open problems in the study of functional encryption, as well as attribute-based encryption and predicate encryption. I want to focus on two particular problems that came up in the context of this talk. The first, of course, is to get functional encryption from standard assumptions, and here the most promising route is of course lattices, because even for predicate and attribute-based encryption for circuits, we only know how to get constructions from lattices. The main obstacle here is that to construct functional encryption schemes, we need techniques to satisfy a very strong security notion, namely what's called strong attribute hiding, and of this we don't have a very good understanding.
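The one-sided guarantee can be made concrete with a tiny, hypothetical example: when every queried Boolean function evaluates to false on both candidate inputs, the functional views of the two inputs coincide, so the keys reveal nothing about which value was encrypted. The predicates below are made up for illustration.

```python
def view(queries, D):
    """The functional information the key holders learn about D."""
    return [f(D) for f in queries]

D0, D1 = 10, 20                     # two candidate data values
queries = [lambda D: D > 100,       # illustrative Boolean predicates,
           lambda D: D % 7 == 5]    # both False on D0 and on D1

# Admissibility condition of the one-sided notion: every queried
# function must evaluate to False on both challenge inputs.
assert all(not f(D0) and not f(D1) for f in queries)

# Then the two views are identical, so the keys leak nothing about
# which of D0, D1 was encrypted. A single predicate evaluating to
# True would break this, and the notion promises nothing in that case.
assert view(queries, D0) == view(queries, D1)
```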
So, in the language of predicate encryption, where I talked about encrypting a message under a policy, you need to provide a very strong privacy guarantee for this policy. And in this case we still don't have a very good understanding of strong attribute hiding. There are basically only a very small number of papers that achieve full strong attribute hiding, and most of these papers are written by a very small group of people. So we know techniques for strong attribute hiding in the pairing-based setting, and we basically don't have any techniques for strong attribute hiding in the lattice setting. So even getting strong attribute hiding under lattice-type assumptions for extremely simple functions, slightly more than inner products, is already a big obstacle. Right, so this is sort of the big open problem out there. At the other extreme, if we think about constructions based on bilinear groups, one of my favorite questions is whether you can get IBE schemes that are extremely compact. Right now, basically, the state of the art for fully secure IBE schemes under standard assumptions says that you need four elements in the ciphertext and four elements in the secret key. The question is whether we can do better. And like I said earlier, with these compiler results going from private-key to public-key primitives, any advance on this problem is likely to have very strong applications. It would give you shorter signatures, and also applications to non-interactive zero-knowledge and possibly structure-preserving signatures. It's worth noting here that the best state-of-the-art pairing-based signatures, that is, signatures from bilinear groups, that I know of under standard assumptions are actually based on the ones that you get from IBE schemes. So there are many other open problems, and I'm happy to tell you more about them offline.
I also want to take this opportunity to share some of my thoughts on the study of functional encryption and attribute-based encryption. These are more personal and maybe more controversial, and I'm very happy to talk more about them. The first is that I want to encourage everyone working on this topic to always think very critically about the theoretical and practical context of your work. Case study: adaptive security. As I said earlier, we have very efficient adaptively secure schemes from bilinear maps, and adaptive security is extremely well motivated in practice: it captures a class of very realistic attacks. In the real world, you really expect that your adversary can make adaptive choices, rather than being forced into non-adaptive choices. So it's well motivated in practice, and in fact it was partly the search for theoretical techniques to handle adaptive security that motivated the work on dual system encryption, which is the very powerful proof methodology that underlies these compilers from private-key primitives to public-key primitives. So in some sense, the study of adaptive security, which is motivated by practice, has had very rich implications for theory. Taking this forward, what should we do next? Let's say we go to a practitioner, and the practitioner asks us to recommend a scheme that they can use, and naturally we want to recommend a scheme that is adaptively secure. The question then is: should we really be recommending the state-of-the-art adaptively secure schemes? Which, like I said, are fairly efficient, but efficient in a somewhat theoretical sense. In particular, it turns out that if you are willing to settle for non-adaptive security, you can usually get slightly better efficiency. The difference is constant factors, but these constant factors do matter a lot in practice.
So what this roughly means is that you're going to go to a practitioner and ask: do you want to pay a factor of 2 for adaptive security that is proven under some set of assumptions? And as it turns out, most of these non-adaptively secure schemes are actually not so bad. They are more efficient, and they are typically also already adaptively secure in the generic group model. So the practitioner will come back to you and say: I really want to use the more efficient scheme, I do care about this factor of 2, and is there really anything wrong with using a scheme that's only proven secure in the generic group model? Do you expect there to be any attacks? And the truth is, an attack is probably quite unlikely. In fact, as Nadia pointed out yesterday, if we really want to attack a scheme, there are far better ways to compromise it than to look for adaptive attacks that are not captured by the generic group model. So this means that we need to be a bit careful when we motivate our research on adaptive security, to make sure that we draw the line somewhere between the practical motivation and the theoretical motivation. It's important, when we evaluate research on adaptive security, that we really look for solutions that also bring along new theoretical techniques, because there's a limit to how relevant they are in practice. All right. The other thing I want to say is: stay away from obfuscation. By this I mean the following. Right now, the literature on attribute-based encryption, predicate encryption, and functional encryption has become extremely rich. We have very powerful proof techniques, and a price we have to pay for powerful proof techniques is that the proofs have become very complex.
In spite of this complexity, I encourage everyone working on this topic to try to make your proofs as simple as possible, to constantly revise your proofs to simplify them, and to try to write them well. Even though the community may incentivize the use of obfuscation, no pun intended, we should still stay away from it. Right. Finally, I hope we bring this message to all of our co-authors and the students we work with. Right, the trouble of working with students... not the trouble of working with students, but one thing you have to deal with when you work with students: when you tell them to rewrite a proof over and over again, they're going to tell you, I don't really want to do this; they'll point at all these other badly written papers out there that still get accepted, and ask, why do I want to go through the trouble of rewriting my paper over and over again? At this point you tell them what Michelle Obama said, which is that, you know, when they go low, we go high. Right. So these are the concluding thoughts that I have on research. And I want to really conclude with a series of acknowledgments. The work that I've presented today is mostly joint with my co-authors, and I've been very, very lucky to work with some very talented individuals who keep me, you know, going high. I want to take this time to acknowledge some of my co-authors who have shared with me not just technical knowledge but also the kind of passion that they have for research. Jason was one of my first PhD students, not officially; I co-advised him unofficially. And he showed me that sleeping is only necessary as a step towards doing more research: you only sleep so that you can do more research. So he was going to go to sleep, and all he wanted was to be ready to do more research afterwards. In fact, there is an email sent just before midnight, and at 4 am he said: well, I have a new idea, so I guess it works. I've also been working with ICA. ICA took
this a step further. Not only is sleep merely a means towards research; ICA showed that there is something even beyond that. We were working on a paper for the UOQ deadline, and a couple of weeks before the deadline he told me: I'm expecting a new baby, so at some point, about one week before the deadline, expect me to disappear and you'll have to finish this paper. And the paper, about a week before the deadline, was actually already in pretty good shape. I kept sending him emails; he kept replying; so I was a bit worried, since I wasn't sure when the baby was going to arrive. And at some point, three days before the deadline, he wrote an email saying: the baby is here. So I was like, okay, great, the paper is in very good shape; I just have to make a couple more tweaks and then we're ready to go. Only to discover that something like 12 hours later he said: I'm going over the paper now. So now we have people picking research over sleep, and people picking research over babies. The one to really steal the show is my long-term collaborator Abhidhok Baikun Thanh Thanh. So earlier this year we were all at PKC in Taiwan, and while he was there, he went to a hotpot place for the first time. Hotpot, if you don't know, you can get it in Vietnam, by the way; it's also known as shabu-shabu in the Japanese context. For those who know what fondue is, think of it as fondue for Asians, with meat. So he went to this hotpot place, and at 11:30 I got an email from him saying: am I supposed to cook everything here? By the way, of course you're supposed to cook everything. And then a couple of hours later I got an email from him saying he wants to continue with the research, and we need to finish this research really soon, because, you know, he's been eating raw meat and he's worried that he's going to die from food poisoning. So once again, let me thank all my co-authors, and also my colleagues, my family, and my friends, and of course thank all of you for sitting through this talk. Time for questions and
comments. Do you have any questions or comments? Thank you very much. Thank you.