Hello, and welcome to my talk at PKC 2021. I'll be talking about our paper on the security of the Diffie-Hellman Oblivious PRF when it is implemented with the multiplicative blinding method. This is joint work with Hugo Krawczyk and Jiayu Xu, and I'm Stanisław Jarecki.

What is an Oblivious PRF? The classic definition is that it is a secure two-party computation of a PRF. The two parties are a client and a server: the client has an argument x, the server has a key k, the client computes PRF_k(x), and the server gets nothing — in particular, she does not learn which argument the PRF was computed on.

This has found multiple applications; it's a beautiful crypto gadget. Ford and Kaliski, Boyen, and ourselves used it for something called password hardening. An OPRF basically maps low-entropy secrets into high-entropy ones: even if x is a low-entropy password, PRF_k(x) is still a pseudorandom value. In particular, in OPAQUE (Eurocrypt 2018) we constructed a strong asymmetric PAKE using this building block, because password hardening allows a password to be translated into a pseudorandom key, and that key can be used as a secret key for encryption or signatures, in a modular way, within any authenticated key exchange. Another example is the set intersection protocol of Hazay and Lindell, and the many variants that followed: two parties can compute a set intersection using an OPRF. The server sends the PRF values of her own set elements, the client obtains the PRF values of his elements through the OPRF, and by comparing these values he can infer which elements are in the intersection; by PRF security, this reveals no information about points outside the intersection. And there are many other applications, basically because a PRF is a fundamental tool in crypto.
It can serve as blind encryption, blind decryption, a blind MAC. And because, for better or worse, we use public-key cryptography to implement it, it's not far from these particular constructions to public-key applications.

So how do we implement it? Using classic groups, here is a simple implementation. If H is a hash onto a group, then F_k(x) = H'(x, H(x)^k) is a PRF — we call it the Hashed Diffie-Hellman PRF. You can compute it obliviously in the following way: the client takes his argument x, hashes it into the group, and raises it to a random blinding exponent. The result a = H(x)^r is a random group element, so it information-theoretically hides x. The server just raises a to her key k, the client "unblinds" — removes the blinding exponent — and gets back H(x)^k, which he hashes. It's a very inexpensive protocol. In earlier work with Kiayias and Krawczyk, we showed that this implements a very strong notion: a universally composable OPRF. In particular, this UC OPRF has the property that every different key input by the server creates another independent instance of a random function. If there were no outer hash here, that wouldn't be the case, because the functions for keys k and 2k, for example, would be obviously correlated: the values of the second are the squares of the values of the first. Likewise, entering x into the outer hash disambiguates things, so that every hash query corresponds to a unique argument-key pair.

And the question is: can this protocol be even faster, again for standard groups? It can, via a very old trick that can be traced back to Chaum's blind RSA scheme. Take the exponential blinding and replace it with multiplicative blinding: the client multiplies H(x) by a random group element created as g^r. The server treats the result the same way — she raises it to the k.
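The exponentially blinded evaluation just described can be sketched in a few lines of Python. This is a minimal illustration over a toy Schnorr group with made-up parameters (p = 2039 = 2·1019 + 1 — not secure sizes), not the authors' implementation; `hash_to_group` and `finalize` are hypothetical helper names, and a real deployment would use an elliptic-curve group with a proper hash-to-curve map.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime. Illustrative sizes only.
p, q = 2039, 1019

def hash_to_group(x: bytes) -> int:
    """Hash x into the order-q subgroup of squares mod p (toy version)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)          # a nonzero square has order dividing q

def finalize(x: bytes, u: int) -> bytes:
    """Outer hash H'(x, H(x)^k); hashing x in binds output to argument."""
    return hashlib.sha256(x + u.to_bytes(2, "big")).digest()

k = secrets.randbelow(q - 1) + 1     # server's PRF key
x = b"correct horse"                 # client's input

# Client blinds: a = H(x)^r for random r (a variable-base exponentiation).
r = secrets.randbelow(q - 1) + 1
a = pow(hash_to_group(x), r, p)
# Server: b = a^k.
b = pow(a, k, p)
# Client unblinds with r^{-1} mod q (exponents live mod the group order).
u = pow(b, pow(r, -1, q), p)
assert u == pow(hash_to_group(x), k, p)   # client recovered H(x)^k
oprf_output = finalize(x, u)              # F_k(x) = H'(x, H(x)^k)
```

Note that both the blinding and the unblinding are variable-base exponentiations, which is exactly the cost the multiplicative variant tries to avoid.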
And the client can unblind again, but he needs the value z = g^k to do it: he divides the server's response by z^r. So it works, and the server can cache this "public key" value z.

What did we gain? This protocol is computationally less expensive, because we replaced what were variable-base exponentiations — both the blinding and the unblinding are variable-base in the exponential scheme — with fixed-base ones. In many applications the server's public key, quote-unquote, is also a fixed element: think of the set intersection application, or of authenticating to a server using OPAQUE, where you repeatedly authenticate to the same server, so you can just as well cache that public key. It's quite a significant speedup.

So, is it just as secure as the previous protocol? It looks like an innocuous change, but here's a subtle vulnerability: the server doesn't have to create her response in the intended way. A convenient way to think about her response is to fix the key k as the discrete logarithm between g and z. Once that is fixed, then without loss of generality the server's response is the correct exponentiation of a, but possibly with a multiplicative factor delta — and this multiplicative delta shift comes out in the client's output. One way to look at it: what the client computed is not the intended PRF but a modified one, the one whose key is the pair (k, delta). Again, what's the difference? This is still a PRF, so if you believe the client computed some PRF, you should be happy. Here's why not. The original PRF, under the definition I was talking about, is very strong in the sense that for every two keys the functions are independent, in the random oracle model under the Gap DH assumption on the group. These new delta-shifted functions are not: the server can program collisions between them, creating what we call correlated outputs.
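The multiplicative variant, and the delta shift a deviating server can introduce, can be sketched the same way. Again this is a toy-group illustration with hypothetical names, and delta = 7 is an arbitrary example shift, not a value from the paper.

```python
import hashlib
import secrets

p, q, g = 2039, 1019, 4      # toy group; g = 4 generates the squares mod p

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

k = secrets.randbelow(q - 1) + 1
z = pow(g, k, p)             # server's cacheable "public key" z = g^k
x = b"correct horse"

# Honest run: client multiplies H(x) by g^r instead of exponentiating it.
r = secrets.randbelow(q - 1) + 1
a = hash_to_group(x) * pow(g, r, p) % p        # client -> server
b = pow(a, k, p)                               # server -> client
v = b * pow(pow(z, r, p), -1, p) % p           # client divides by z^r
assert v == pow(hash_to_group(x), k, p)        # v = H(x)^k, as intended

# Malicious run: server answers b' = a^k * delta for some delta != 1.
delta = 7
b_bad = pow(a, k, p) * delta % p
v_bad = b_bad * pow(pow(z, r, p), -1, p) % p
# The shift survives unblinding: the client ends up with H(x)^k * delta.
assert v_bad == v * delta % p
```

Here both g^r and z^r are fixed-base exponentiations once z is cached, which is the source of the speedup discussed above.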
For any key pair (k, delta), the server can create a new key such that on a chosen argument x* the two functions collide: for any (k, delta), any k*, and any x*, it's easy to compute the delta* that makes the collision equation hold — you just divide one side by the other. So programming a collision is very easy. What consequences does this have? Two different functions will collide on the programmed argument x*, and, as we show in the paper, on all other arguments they remain independent. It's not immediately clear how exactly to model what this means — that's partly what we do here — but essentially this is the situation.

Okay, how bad are these programmed correlations? Here's an application where they actually enable an attack. On Monday, Charlie takes his password and uses the OPRF service to harden it — to translate it into a high-entropy value, the PRF output on the password — and he uses that high-entropy value to encrypt authenticated data. Why would he do this? Because if I see these ciphertexts, I cannot decrypt them: offline dictionary attacks are not possible, since the keys are high-entropy values, and the only way to test password guesses is to attack the same server online. That is an online attack, not an offline one, and it can be throttled. But suppose the server is malicious. The next day, when Charlie comes along, the server can switch to the random function with the programmed collision. What does Charlie do with the recovered value? He tries to decrypt his data, and this succeeds essentially only if the two function values are the same. If they are not the same, the client will surely complain, or retry the protocol; if they are the same, he will not. So the server learns whether the client is happy with the result or unhappy.
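The collision-programming step can be made concrete: given the shifted function determined by (k, delta), and any fresh key k* and target argument x*, the server solves delta* = H(x*)^k · delta / H(x*)^{k*}, so that the inner group values — and hence the hashed outputs — agree at x*. A sketch over the same kind of toy group (parameters and names illustrative):

```python
import hashlib
import secrets

p, q = 2039, 1019            # toy group parameters, illustration only

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

def F(key: int, delta: int, x: bytes) -> bytes:
    """Delta-shifted PRF: H'(x, H(x)^key * delta)."""
    u = pow(hash_to_group(x), key, p) * delta % p
    return hashlib.sha256(x + u.to_bytes(2, "big")).digest()

k = secrets.randbelow(q - 1) + 1       # honest function's key
delta = 5                              # arbitrary existing shift
k_star = secrets.randbelow(q - 1) + 1  # server's fresh key
x_star = b"password guess"             # argument where collision is forced

# delta* = H(x*)^k * delta / H(x*)^{k*}  (division is inversion mod p)
hx = hash_to_group(x_star)
delta_star = pow(hx, k, p) * delta * pow(pow(hx, k_star, p), -1, p) % p

# The two shifted functions now agree on x*; per the paper, on all other
# arguments they still behave like independent random functions.
assert F(k_star, delta_star, x_star) == F(k, delta, x_star)
```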
And whether the result is correct is correlated with Charlie's input being equal to the single programmed point at which these two functions have the same output. So this way the server learns something about Charlie's password. If you use the exponential blinding method — currently abstracted as the strong UC OPRF — this is not the case: there, for every choice the server makes, the functions are independent random functions, so the server can either keep the same key, in which case the client is happy, or switch, in which case the client is for sure unhappy, because the functions collide on no argument, except with the negligible collision probability of random functions. So there the server learns nothing new — but it did learn something new in the multiplicative-blinding OPRF. These are not equivalent notions, and not equivalent constructions.

How can we remove the effect of the delta shift? There are several ways. One: the server attaches a proof that her response is correct — a proof of discrete-logarithm equality. It's not an expensive proof, just one or two exponentiations per party. But the whole point was to reduce computation, so this rather defeats the purpose — unless you want a verifiable OPRF anyway, in which case use this version, because it's cheaper. Two: the client could cache z. Then he can recognize that a malicious server switched the z value. In the application I was just talking about, the client stores, together with the authenticated encryption, the z that was originally used; later he either notices that z was switched, or he doesn't even bother taking z from the server and uses the cached value. That eliminates the attack for that application.
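The discrete-log-equality proof mentioned as the first fix is typically instantiated Chaum-Pedersen style: the server proves log_g(z) = log_a(b) without revealing k, made non-interactive via Fiat-Shamir. A hedged sketch over a toy group (my own parameter choices and helper names, costing roughly the "one or two exponentiations per party" mentioned above):

```python
import hashlib
import secrets

p, q, g = 2039, 1019, 4      # toy group; illustration only

def challenge(*vals: int) -> int:
    """Fiat-Shamir challenge derived from the proof transcript."""
    m = b"".join(v.to_bytes(2, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

k = secrets.randbelow(q - 1) + 1
z = pow(g, k, p)                             # public key z = g^k
a = pow(g, secrets.randbelow(q - 1) + 1, p)  # stands in for a blinded H(x)
b = pow(a, k, p)                             # server's OPRF response

# Prover (server): commit, derive challenge, respond.
t = secrets.randbelow(q)
A1, A2 = pow(g, t, p), pow(a, t, p)
c = challenge(g, z, a, b, A1, A2)
s = (t + c * k) % q

# Verifier (client): both checks pass iff log_g(z) = log_a(b) = k.
assert pow(g, s, p) == A1 * pow(z, c, p) % p
assert pow(a, s, p) == A2 * pow(b, c, p) % p
```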
But more generally, you really need some authentication of z: either you have secure storage, or you authenticate the value coming at you — and in the context of password authentication, these are not good options. So here's a third fix: add z into the hash. The client gets z anyway — he needs it to compute the unblinding — so why not? This seems like a no-cost change, and it again fixes the correlation issue.

Here is why this is not actually a universal fix. Let's compare how multiplicatively blinded and exponentially blinded evaluation work under the two regimes: the intended function defined without z in the hash, and with z in the hash. Why would we ever want both protocols — isn't the multiplicative one cheaper, as we just argued? There is one place where it's not: it has slightly larger bandwidth. And security-wise: the exponentially blinded protocol is secure whether the function is defined one way or the other; hashing z in fixes the multiplicative one; and without it, the multiplicative one has the vulnerability we just discussed. So isn't hashing z in always preferable? Not quite, because if you insist on z in the hash, then you basically cannot run the low-bandwidth protocol version: the client will need z, so he either has to store it or it has to be sent. The version without z in the hash allows a minimum-bandwidth implementation, and for IoT devices in particular this seems like an attractive option.

So in this paper we ask: is it possible to standardize the function without z in the hash? Is there a class of applications where, with the function standardized this way, you can switch between the exponentially blinded implementation — if you want minimal bandwidth but are okay with more computation — and the multiplicatively blinded one, if you want minimal computation, and either one works? The answer is yes, there is such a class, with a disclaimer: as you've seen, for some applications this will create a vulnerability. We model this as a relaxation of the strong UC OPRF model that is realized by the exponentially blinded evaluation method.
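The effect of the "hash z in" fix can be checked concretely: with z covered by the outer hash, the delta-shift trick can still make the inner group values agree on a chosen x*, but the outputs no longer collide, because the two functions hash in different z values. A toy-group sketch (fixed keys 123 and 456 chosen only to make the demo deterministic):

```python
import hashlib

p, q, g = 2039, 1019, 4      # toy group; illustration only

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

def F(key: int, delta: int, x: bytes) -> bytes:
    """Shifted PRF with z in the outer hash: H'(x, z, H(x)^key * delta)."""
    z = pow(g, key, p)
    u = pow(hash_to_group(x), key, p) * delta % p
    data = x + z.to_bytes(2, "big") + u.to_bytes(2, "big")
    return hashlib.sha256(data).digest()

k, delta, k_star = 123, 1, 456    # fixed keys for a deterministic demo
x_star = b"password guess"

# The same delta* that forces the inner group values to agree on x* ...
hx = hash_to_group(x_star)
delta_star = pow(hx, k, p) * delta * pow(pow(hx, k_star, p), -1, p) % p
inner = pow(hx, k, p) * delta % p
assert pow(hx, k_star, p) * delta_star % p == inner   # inner values collide

# ... but the outputs differ, since z = g^123 != g^456 enters the hash.
assert F(k_star, delta_star, x_star) != F(k, delta, x_star)
```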
We call this model correlated UC OPRF. The nature of the model is that a server party can create keys that are correlated, meaning the outputs collide, but only on one argument x* per pair of functions; on all other arguments, the two functions act like independent random functions. And we need a slightly stronger assumption — a variant of Gap One-More DH — to argue that the multiplicatively blinded evaluation method implements this functionality.

So what is the correlated OPRF functionality, and what kinds of applications are okay with it? First, let's quickly recall the strong OPRF model, to see exactly what the relaxation is. In the strong OPRF model, realized by the exponentially blinded Hashed DH, every time somebody wants to evaluate the OPRF, the server makes a choice. In the real world, the server chooses her key; in the abstract world, this is equivalent to just pointing to some random function. The server can keep using the same random function, or she can switch — but switching refreshes the system to a completely independent random function.

Now, what is the new model, which the multiplicatively blinded method realizes? Take a single application, password authentication. On one slide I'll both tell you what the model is and argue what kinds of applications the model is good for. Think of password authentication where the client's input is a password, and it remains the same throughout, for the purpose of this explanation. The initial OPRF computation — think of it as the initialization of a password authentication scheme — produces a value v0, the output of the PRF, and because this is password authentication, v0 will be used to authenticate to the same server who is running this OPRF.
When the next evaluation comes, the server could use the same function that was used in initialization. But let's see what happens if the server switches and creates a new function. Because this is the correlated OPRF model, when she switches, the server can additionally specify a list including one argument at which the new function will correlate with an old one. In the abstract world of the UC functionality, this means the new function is a random function except that it is programmed to collide with the old one on this input x1. At each client evaluation, the server, to maximize her chances, will use the following feature of the functionality: she will create a new function and provide a list of arguments on which this new function is correlated with previous ones — and without loss of generality it can be correlated with all previous ones, but on only one point each. That is what the multiplicative DH method gives; but here it doesn't really matter how the new functions are correlated with each other — what matters is how each is correlated with the base function used in initialization. For each new function, the server puts in one value: the model restricts her so that she can program a collision with a given function on only a single argument. So these new functions will be correlated, on the chosen arguments, with the base function; they can also be correlated with other functions, but that is immaterial in this application. Now, the values vi that the client retrieves are used to authenticate to the same server. What does the server learn? As in the example earlier, the server learns whether vi is the same as v0: since v0 was used to set up the authentication method, if vi equals v0 authentication succeeds, and if it's different,
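The behavior of the ideal functionality described here can be mimicked with a toy simulation — lazily sampled random functions, where a new function may be programmed to collide with an older one on at most one argument. This is my own loose illustration of the model's behavior, not the formal UC functionality, and all names (`CorrelatedOPRF`, `new_function`, `eval`) are made up.

```python
import secrets

class CorrelatedOPRF:
    """Toy model: each function id names a lazily-sampled random function;
    a function may be programmed to collide with one base function on
    exactly one argument."""

    def __init__(self):
        self.tables = {}      # fid -> {argument: output}
        self.programmed = {}  # fid -> (base_fid, x_star), one point only

    def new_function(self, base_fid=None, x_star=None):
        fid = len(self.tables)
        self.tables[fid] = {}
        if base_fid is not None:
            self.programmed[fid] = (base_fid, x_star)
        return fid

    def eval(self, fid, x):
        prog = self.programmed.get(fid)
        if prog and prog[1] == x:          # the single programmed collision
            return self.eval(prog[0], x)
        table = self.tables[fid]
        if x not in table:                 # fresh random output otherwise
            table[x] = secrets.token_bytes(16)
        return table[x]

f = CorrelatedOPRF()
base = f.new_function()                    # function used at initialization
v0 = f.eval(base, "hunter2")               # client's stored value v0
# Server switches to a new function, programming its guess as the collision.
evil = f.new_function(base_fid=base, x_star="hunter2")
assert f.eval(evil, "hunter2") == v0       # collides only on the guess
assert f.eval(evil, "letmein") != f.eval(base, "letmein")
```

The two asserts capture the one-test-per-session property: the server learns whether the client's password equals the single programmed point, and nothing else.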
then it will fail in the usual way, and the server will learn that. Basically, the correlations create exactly the avenue we showed at the very beginning: the server learns whether the argument the client uses is equal to the unique value on which the collision with the underlying base function was programmed. So the server gets a single online test of the argument with each such execution. And that is what we show via this correlated UC OPRF model: otherwise the functions are random, and while the server can create an online password-test avenue, it is only one per session.

There is a very natural class of applications for which this creates no new attack avenue: password authentication protocols where, in every online interaction, the server could already run the base protocol on a guessed password — if the guess is correct, authentication succeeds; if not, it fails. Such a server, by the very definition of the functionality, already has one online password-test avenue per interaction. And this is essentially why the security proof with the correlated UC OPRF functionality goes through, in particular for OPAQUE.

To conclude: we show that the multiplicatively blinded Hashed DH realizes a relaxation of the UC OPRF model which we call correlated UC OPRF. We show that the nature of correlated OPRF is that it creates an online attack avenue, but only one test point per instance. There is a natural class of protocols for which this makes no difference, and in particular we show that it makes no difference for OPAQUE. And the multiplicatively blinded method reduces the client's cost to, basically, two fixed-base exponentiations, so for these applications there is a cheaper secure option.
There could be other applications, of course, but the warning is that you need to verify, per application, whether this correlated OPRF suffices for your security or not. Thank you very much.