This is joint work with Martin Albrecht, Amit Deo and Nigel Smart. Firstly, our protocols are the first to be constructed from lattice-based hardness assumptions, specifically the ring learning with errors and short integer solution problems, and security for our construction holds in the quantum random-oracle model. A verifiable oblivious pseudorandom function, or VOPRF, is a two-party protocol between a client holding some input x and a server holding a key k, at the end of which the client learns the output of a pseudorandom function evaluated on its input and the server's key. The security properties of the protocol state that the server should learn nothing about the client's input x and that the client should learn nothing about the server's key k. In order to realise verifiability, the server must also prove that the output was evaluated using its key k. So how do these things work generically? In order to get started, the server first sends, in an offline phase, a commitment c to its key k, which the client then registers. In the online phase, when the client has the commitment c which the server has sent and its own input x which it wants to query, it first encodes its input x to some VOPRF-friendly input distribution. This gives this big X, and the client then runs its generic blinding algorithm to receive y.
What this does is hide x. Using this y, the client sends a message to the server. The server evaluates whatever it needs to in order to evaluate the VOPRF on this blinded input y, and produces this output z. In order to realise verifiability, it also produces this proof pi, which essentially shows that z and the commitment c share a common key k, which is the server's input. When the client receives this message back from the server, it first verifies the proof, then unblinds z by running the generic unblinding algorithm, and simply outputs whatever that algorithm outputs. So, generically speaking, there are two guarantees that we require the VOPRF to satisfy. Firstly, the unblind operation on this z should give an output which is equal to the PRF evaluation on the server's key k and the encoded input X which the client provides. And then we also require a security guarantee, which is that for any malicious client or server, there is a polynomial-time simulator that can simulate the real world to these adversaries given only access to an ideal VOPRF functionality. So what applications do we actually have for VOPRFs in the real world? The Internet Engineering Task Force is currently in the process of standardising a number of different functionalities in the internet setting that use these protocols as building blocks. The first of these applications is the Privacy Pass protocol, which is a private authorisation protocol in which the client receives authorisation from the server for some event. In order to do so, it first sends some blinded token to the server, and when the server wants to authorise the client, it signs that token and returns it to the client. The client then outputs an unblinded version of this token; this exchange is actually a VOPRF exchange.
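To make the generic flow concrete, here is a minimal round-trip sketch of blind, evaluate and unblind. It uses exponent blinding in a tiny multiplicative group purely for illustration; all parameters are our own toy choices and are completely insecure, and this is not the lattice-based construction of this talk.

```python
# Toy end-to-end (V)OPRF round trip illustrating the generic flow above.
# Exponent blinding in Z_p^* with tiny, insecure parameters.

p = 101                      # small prime modulus (illustrative only)
k = 7                        # server's PRF key

def blind(X, r):
    # Client hides its encoded input X by raising it to a random exponent r.
    return pow(X, r, p)

def evaluate(Y, key):
    # Server evaluates the PRF on the blinded input: z = Y^k mod p.
    return pow(Y, key, p)

def unblind(Z, r):
    # Client strips the blinding: Z^(r^-1 mod p-1) = X^k mod p.
    r_inv = pow(r, -1, p - 1)
    return pow(Z, r_inv, p)

X = 5                        # encoded client input (a group element)
r = 3                        # blinding exponent, must be invertible mod p-1
Y = blind(X, r)              # client -> server
Z = evaluate(Y, k)           # server -> client
out = unblind(Z, r)          # client's final PRF output
assert out == pow(X, k, p)   # matches direct evaluation F(k, X) = X^k
```

The verifiability part (the proof pi tying z to the commitment c) is omitted here; the sketch only shows why unblinding recovers the PRF value.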
So the client and the server compute this VOPRF protocol and the client outputs the output of the VOPRF. Then, in the future, when the client wants to be authorised, it sends this unblinded token, and the server, because it has never witnessed this unblinded token, cannot link it back to a previous authorisation event, but it can still verify that the client is authorised. The Privacy Pass protocol is used widely across the internet by many different companies and organisations. Secondly, another application which is also undergoing standardisation is the OPAQUE password-authenticated key exchange protocol, which essentially combines an oblivious pseudorandom function protocol with an authenticated key exchange to build a secure aPAKE. These PAKEs are seeing widespread adoption in the internet setting for allowing mutual client-server authentication using usernames and passwords that are secure against standard pre-computation attacks. Before we get into our construction, I'd just like to highlight exactly how we build these things in the classical setting, because that's how they are used currently. Both Privacy Pass and OPAQUE use the verifiable oblivious pseudorandom function of Jarecki, Kiayias and Krawczyk. The first phase of this VOPRF sees the server commit to its key K by raising some common group element, agreed by the client and server, to its key K and sending this value C to the client. Once the client has the server's commitment C and its own input X, it first hashes its X to the group using a deterministic random mapping.
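The issuance/redemption pattern just described can be sketched as follows. This is our own illustrative sketch, not the Privacy Pass specification: HMAC-SHA256 stands in for the PRF value F(k, token), which in the real protocol the client obtains obliviously via the VOPRF exchange, and all names here are hypothetical.

```python
# Sketch of Privacy Pass style issuance and redemption, assuming the PRF
# value was already obtained obliviously via the VOPRF exchange.
import hmac, hashlib, os

server_key = b"server-oprf-key"    # hypothetical server key k
seen_tokens = set()                # double-spend ledger

def prf(key, token):
    # Stand-in PRF for illustration only (real scheme: a VOPRF output).
    return hmac.new(key, token, hashlib.sha256).digest()

# Issuance: the client ends up holding (token, F(k, token)); the server
# never sees the unblinded token at this point.
token = os.urandom(16)
tag = prf(server_key, token)       # what the VOPRF exchange yields

# Redemption: the server checks the tag and rejects double spending, but
# cannot link (token, tag) back to any particular issuance event.
def redeem(token, tag):
    if token in seen_tokens:
        return False               # already spent
    if not hmac.compare_digest(prf(server_key, token), tag):
        return False               # tag does not verify
    seen_tokens.add(token)
    return True

assert redeem(token, tag) is True  # first redemption succeeds
assert redeem(token, tag) is False # double spend is rejected
```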
This mapping outputs a group element X, which the client then multiplies with a blinding factor; this blinding factor is constructed using the original generator G and a randomly sampled R, to create Y. It then sends Y to the server, as in the generic case, and the server, in order to compute the PRF, simply raises this Y to the power of its key K, and then proves in zero knowledge that Z, the output, shares the same discrete log as the original commitment from the offline phase; the proof for doing this is in the Schnorr setting. Once the server has constructed those two elements, it sends them back to the client, and the client can verify the proof and then unblind the output by simply multiplying Z by the commitment C that the server sent, raised to the power of the negation of the R that it sampled in the previous phase. This output, as shown in the argument there, gives a correct pseudorandom function output, where the pseudorandom function is simply X raised to the power of the server's key K. The security of the protocol rests upon the fact that the server's key K is the discrete log of this value the client holds, and therefore the client can't learn what that value is, by the hardness of discrete log or discrete-log-type assumptions. But clearly this protocol is not secure against post-quantum adversaries, given the fact that we use these classical assumptions, and so in this work I'm going to highlight the exact problems in these protocols and how we might then go about solving them in the post-quantum setting.
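The exchange just described can be sketched numerically. This is a toy rendering with illustrative, insecure parameters of our own choosing: the hash-to-group step just exponentiates the generator by a hash, which is only a placeholder for a proper hash-to-group map, and the zero-knowledge (DLEQ) proof is omitted.

```python
# Toy sketch of the multiplicative-blinding VOPRF exchange described above.
import hashlib, random

p = 2**127 - 1               # Mersenne prime modulus (illustrative only)
g = 3                        # fixed base agreed by client and server

k = 123456789                # server's key
C = pow(g, k, p)             # offline commitment C = g^K

# Client side: encode input, then blind with Y = X * g^r.
x = b"client input"
h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1)
X = pow(g, h, p)             # toy stand-in for hashing x to the group
r = random.randrange(1, p - 1)
Y = (X * pow(g, r, p)) % p   # client -> server

# Server side: Z = Y^K (plus the DLEQ proof tying Z to C, omitted here).
Z = pow(Y, k, p)             # server -> client

# Client side: unblind with Z * C^(-r) = X^K * g^(rK) * g^(-rK) = X^K.
out = (Z * pow(C, -r, p)) % p
assert out == pow(X, k, p)   # the PRF output F(K, x) = H(x)^K
```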
So firstly I should just note that no previous construction has been based on lattice-based primitives for realising a secure protocol; however, concurrently, Boneh et al. (Asiacrypt 2020) presented a post-quantum VOPRF using isogenies over supersingular elliptic curves, constructed using hardness problems which are thought to be hard in the post-quantum setting. In this work we're going to be focusing on lattice-based foundations for this protocol. First I'd just like to note the similarities between discrete-log-type assumptions and those in the learning with errors sphere of foundational assumptions, specifically in the ring setting. For the discrete log hardness problem we have this common generator g, and when it is raised to the power k it's hard for the adversary to learn k. Likewise, in the ring LWE setting we have this secret element s sampled from the ring, and we also have an error term which is sampled from a short error distribution in that ring; then, given some public ring value a, computing a·s + e gives a randomly distributed element in that ring, even for an adversary that knows a. Given this similar type of assumption, the starting point for our work was: can we create a natural post-quantum VOPRF analogue using these foundations, and what might that look like? Firstly, I'm going to walk through an abstract attempt to construct a protocol, which has some issues, and then I'm going to highlight how we solve those issues in order to get to our final construction. In the commitment phase we can replace the discrete log commitment which we had in the previous protocol with the natural ring LWE analogue: we have this ring element a, and we construct c as a·k + e, and the client can't learn anything about the server's key k, based on the hardness of ring LWE. I should note here that the different colours in this diagram correspond to the different distributions which are highlighted in the top right. So then, in the client's first
message in the online phase, once it has the server's commitment c and also its own input x, it first has to run this encoding mechanism to change the bit string x into some ring value ax. Assuming for now that this is possible, the client then constructs its message cx to the server in the following way: it adds ax to, essentially, a freshly sampled ring LWE element, and in doing so cx is randomly distributed in the ring and the server can't learn anything about ax, which is essentially the message that the client is trying to blind in this setting. Once the server has cx, it then has to compute dx, and it does so again with the very natural ring LWE replacement: it simply multiplies cx by the key k and adds a newly sampled error term e2, and then proves in zero knowledge that dx and the original commitment c share the same ring LWE secret k, which is the server's key. Finally, ignoring for now the verification steps, which the client must always perform, the client verifies the proof and then computes the output of the pseudorandom function in the following way: it outputs this yx, which is equal to the rounded ring element that results from rounding dx minus c·s. We can think of this in the same way as the unblinding step in the classical construction, and correctness therefore holds when k and s are sampled as short elements in the ring, just like the error terms e1 and e2. What you should note here is that this p that's highlighted in the rounding step is chosen sufficiently small, relative to the original ring LWE modulus q, such that this rounding occurs correctly with high probability, and I'll talk through how we mitigate those low-probability events in the next few slides. So the exact problems with this construction are the following. Firstly, in the commitment phase, how does the client actually verify that the server's commitment is constructed in a proper way? In the way that this protocol describes it, the problem is
that the server's commitment c is actually just a random ring element, and the server could send some maliciously constructed ring element at this point, for example a trapdoor ring element, or some ring element that's not actually sampled randomly, and the client wouldn't easily be able to tell. Secondly, in the online phase, I left the description of the encode mechanism generic, but we are now going to have to define how we do this in such a way that the client's input is actually sampled as a random ring element, derived from this x in a deterministic manner. Another problem is that, again, the server can't verify the client's message in this phase, so the client could also submit a maliciously constructed ring element that has trapdoors involved, or is not actually randomly distributed, which may allow the client to learn aspects of the server's key, so we have to prevent that. Note as well that, in the server's response, the key k is effectively encoded into the error term, and this may allow the client to learn things about the server's key by analysing the errors that are involved here, because this is no longer a Gaussian-distributed or typically distributed error distribution; we have to protect against that too. Finally, I left the zero-knowledge proof as a generic description, but we're going to have to instantiate the required proof system in a way that preserves post-quantum security as well. With these problems in mind, I'll now talk through exactly how we finish up our VOPRF construction. Firstly, I've had to expand some of the distributions and some of the zero-knowledge proofs that we use, because we're going to need much more functionality here in order to realise a secure version of this VOPRF protocol. What I should note, though, is that the eventual messages are going to stay very similar; it's purely how we sample things that is going to change.
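Before moving to the fixes, the abstract attempt from the previous slides can be checked numerically. The sketch below is a toy of our own, with tiny, insecure parameters (n = 16, q = 2^20, ternary elements standing in for proper short Gaussian samples); it only verifies that the unblinding noise e1·k + e2 − e·s stays far below the rounding boundaries, and illustrates why coefficients near a boundary are the dangerous case.

```python
# Toy check of the abstract ring-LWE VOPRF attempt, in Z_q[x]/(x^n + 1).
import random

n, q, p = 16, 2**20, 16          # toy ring dimension and moduli (insecure)
rng = random.Random(0)

def ring_mul(f, g):
    # Negacyclic convolution: multiplication mod x^n + 1, coefficients mod q.
    out = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                out[i + j] += f[i] * g[j]
            else:
                out[i + j - n] -= f[i] * g[j]
    return [c % q for c in out]

def ring_add(f, g):
    return [(a + b) % q for a, b in zip(f, g)]

uniform = lambda: [rng.randrange(q) for _ in range(n)]
short   = lambda: [rng.choice([-1, 0, 1]) for _ in range(n)]  # toy short sampler

a, ax = uniform(), uniform()     # public ring element and encoded input ax
k, e = short(), short()          # server key and commitment error
s, e1, e2 = short(), short(), short()

c  = ring_add(ring_mul(a, k), e)                 # commitment  c  = a*k + e
cx = ring_add(ax, ring_add(ring_mul(a, s), e1))  # client msg  cx = ax + a*s + e1
dx = ring_add(ring_mul(cx, k), e2)               # server msg  dx = cx*k + e2

# Unblinding: dx - c*s = ax*k + (e1*k + e2 - e*s); the bracketed noise must
# be small for rounding to recover round(p/q * ax*k) coefficient-wise.
centre = lambda v: ((v % q) + q // 2) % q - q // 2
cs, axk = ring_mul(c, s), ring_mul(ax, k)
noise = [centre(dx[i] - cs[i] - axk[i]) for i in range(n)]
assert max(abs(v) for v in noise) <= 3 * n       # |e1*k| + |e2| + |e*s| <= 2n + 1

# The rounding step, and why near-boundary coefficients are dangerous:
rnd = lambda v: ((v * p + q // 2) // q) % p      # round(p*v/q) mod p
t = 3 * n
assert rnd(q // (2 * p) - 1) != rnd(q // (2 * p) - 1 + t)  # near a boundary: flips
assert rnd(q // 4) == rnd(q // 4 + t)                      # far from boundaries: stable
```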
What you'll also notice is that in the commitment phase we no longer agree on an a based on what the server sends to the client; the client and server must agree on the ring element a a priori, before the entire protocol. Moreover, we've had to introduce some new error distributions: specifically, we're going to have this error distribution in green and this error distribution in violet. The error distribution in violet is going to be the same as the distribution that we sample keys from, and this is a Gaussian distribution with standard deviation parameter sigma. Then we have this green error distribution, which has a standard deviation parameter of sigma prime, where sigma prime is chosen large enough that any sample from the green error distribution drowns out samples from the violet error distribution and the key distribution. Finally, the encode function which I mentioned before is going to be instantiated using the Banerjee-Peikert ring-LWE-based pseudorandom function from Crypto 2014, and I'll note here that the output of that pseudorandom function is going to be truncated, in the sense that it's going to be a single ring element. Secondly, the zero-knowledge proofs: there are now going to be three different zero-knowledge proofs; one for the server's setup message in the commitment phase, one for the client's message, and another for the server's response message. Essentially, we need to use these zero-knowledge proofs to ensure that the client and server messages throughout the protocol are well constructed, and so that, in the final security proof, the simulator that we use can extract the secret inputs that both malicious clients and malicious servers use. What I should note, though, is that these zero-knowledge proofs can be instantiated using newly discovered methods by Yang et al. and Beullens, from Crypto 2019 and Eurocrypt 2020 respectively, and that Beullens' methods
in particular are compatible with the quantum random-oracle model, which enables us to situate our security proof in the quantum random-oracle model as well. I'm not going to talk about exactly how these zero-knowledge proofs are constructed, as their constructions are very heavily involved, but I would encourage you to see the paper, specifically Section 4, if you'd like to learn more details. So firstly, going back to the original construction: in the offline phase, the server now constructs its message in exactly the same way, but also appends a zero-knowledge proof pi_0, which attests to the construction of the message that the server gives, and which the client can then verify. In the online phase, again the client constructs its message in the same way, except that now the client samples errors from the violet error distribution, which is the smaller error distribution, and also sends this pi_1, which attests to the construction of the message, and the server can then verify it. Finally, the server again sends the same message, and this pi_2 proves in zero knowledge that the server has used the same key k in this message and in the original commitment c. Correctness holds due to exactly the same argument, because the actual format of the messages has not changed beyond using these different error distributions, and I'm going to talk now about why using these different error distributions is fine, even in the correctness argument. Recall that the client's protocol output is this rounded ring element yx, which is equal to ax multiplied by k, plus an error term, all rounded with respect to this modulus p, which is chosen much smaller than the ring LWE modulus q. The veracity of our correctness argument is actually based on the computational hardness of the one-dimensional short integer solution (1D-SIS) problem. How this works, broadly, is that we can show quite easily that, due to sampling all of these error terms and keys and
secrets from short distributions, we can bound the size of this error term, with high probability, within these boundaries of minus t and t, and then note that for correctness not to hold we'd need a coefficient of ax·k to be within minus t and t of a rounding boundary; if ax·k were close to a rounding boundary, then adding on this error term could push it over a rounding edge and would lead to a correctness error. In order to use the hardness of 1D-SIS in our correctness argument, we essentially show that if any coefficient of ax·k were in this set, that is, within minus t to t of a rounding boundary, then an adversary could solve the 1D-SIS problem. Secondly, once correctness is argued, we then need to focus on malicious security: we need to show that there is a simulator that can interact with the ideal VOPRF functionality and simulate the real protocol to any malicious server or client. Essentially, how our security proof works is that the client or the server, acting maliciously, sends its message, and providing that the zero-knowledge proof that the adversary constructs does verify, the simulator can extract the secret inputs from those zero-knowledge proofs using the knowledge extraction property of those proof objects. Once it has extracted those inputs, it can simply forward them on to the ideal functionality, learn an output from that functionality, and then it just has to create the remaining messages. Firstly, the messages that the simulator creates can be made indistinguishable based on the hardness of the ring LWE problem, in the sense that the messages in the protocol should all be randomly distributed ring elements. Secondly, in order to finish off this security proof, we need to prove that the correct outputs are learned, both in the client and in the server case, and again we lean on the hardness of the 1D-SIS problem. Note that if any zero-knowledge proof fails at any time, then
the simulator simply aborts, and that would be exactly the same as in the real protocol. So this simulator, for both malicious clients and servers, simulates the real world exactly, based on the computational hardness of these problems. Finally, I should note that the malicious client proof actually only holds in the average case, in the sense that it only holds when the server samples its key from a ring-LWE-hard distribution. With our construction now given, I'd like to talk through the efficiency of our protocol and some of the parameter settings we have to make, in order to judge how close we are to being able to realise a post-quantum VOPRF for some of the applications that I highlighted previously. Firstly, drawing attention to some of the exact parameter settings that we make for our lattice-based foundations, the things I'd like to highlight are the super-polynomial q and sigma prime that we have to use in order to realise our construction. This is quite a significant barrier to realising an efficient construction, and we require it in order to meet two guarantees. The first is that the pseudorandom function underlying our protocol, the Banerjee-Peikert pseudorandom function, requires a super-polynomially sized modulus q in order to achieve security for the ring LWE problem; that's one constraint. The second is that, in order to realise the noise-drowning approach by which the server hides its secret key k in its response to the client, we must also use a very wide error distribution, which again forces our eventual modulus q to be super-polynomial in size. In terms of concrete sizes based on those asymptotic settings, if we want to realise 128 bits of security then, roughly speaking, we'll need a q that's 256 bits long, and we'll need quite a large dimension parameter n in order to realise that, and that essentially results in ring elements that are 0.5 megabytes in size. Given that
the client and server both exchange a ring element, this will lead to at least one megabyte of communication; however, that's ignoring the zero-knowledge proofs. Instantiating the zero-knowledge proofs using, for example, the proof system of Yang et al., we would require essentially 2^40 bits of communication per repetition, and while with our parameter sizes we can reduce the number of repetitions quite substantially, to around two, in order to achieve a secure protocol, 2^40 bits of communication is going to be on the order of hundreds of gigabytes, and so our protocol is very clearly not going to be anywhere near efficient enough to run for the applications that I highlighted. Note, however, that in a practical setting you would probably try to instantiate these proofs using SNARKs or STARKs, although we haven't done the research into exactly how that would work; for our parameter settings, we highlight more of these details in Section 5.3. So clearly our protocol is not going to be practical enough, and one of the major reasons for that is the zero-knowledge proofs, so in the paper we highlight some possible ways of optimising our protocol in order to reduce or remove some of the zero-knowledge proofs that we use. Firstly, we should note that in the commitment phase it may be possible to use a trapdoor ring element a instead of a zero-knowledge proof, because essentially we use this commitment in order to extract the server's input key k, and using a trapdoor ring element would allow that.
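As a sanity check on the sizes quoted above, here is a back-of-the-envelope calculation. The ring dimension n = 2^14 is our own illustrative guess for a value consistent with the talk's 0.5 megabyte figure, not the paper's exact parameter (see Section 5.3 for those).

```python
# Rough size of one ring element: n coefficients of ~log2(q) bits each.
n = 2**14            # assumed ring dimension (illustrative guess)
log2_q = 256         # ~256-bit modulus q, targeting 128-bit security

elem_bytes = n * log2_q // 8
assert elem_bytes == 2**19          # 524,288 bytes = 0.5 MiB per ring element

# One ring element in each direction, before any zero-knowledge proofs:
assert 2 * elem_bytes == 2**20      # ~1 MiB of communication per query

# A single Yang et al. proof repetition at ~2^40 bits dwarfs this:
proof_bytes = 2**40 // 8
assert proof_bytes // 2**30 == 128  # ~128 GiB per repetition
```

With around two repetitions, the proof communication alone lands in the hundreds-of-gigabytes range mentioned above.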
Secondly, we may be able to completely remove the server's zero-knowledge proof pi_2 in its response message to the client, using a cut-and-choose-based approach where the client would send multiple queries, the server would evaluate the same VOPRF on each of those queries, and the client would then choose a subset of the responses and check that they are well formed. Obviously this comes at the expense of sending larger client queries, which may or may not work depending on the application. Finally, while we only presented a protocol for a single query, we should note that a server can actually generate a single zero-knowledge proof instance for multiple client queries, so the client could send N VOPRF queries and the server could send N responses, with only a single zero-knowledge proof attesting to the fact that they all contain an evaluation using the same secret key k as is committed to in the commitment phase. With those optimisations in mind, we can then compare our protocol with previous designs. Firstly, the Jarecki et al. construction in the classical setting is very small, in the sense that the concrete communication cost is around 128 bytes, with the caveat that it's only secure in the random-oracle model. Moving on to the post-quantum constructions: the one of Boneh et al. from Asiacrypt 2020 has a concrete cost of around two megabytes, which is obviously much bigger than the classical setting but is still possibly within the realms of practicality, and this construction is post-quantum secure based on the hardness of problems in the isogeny-based setting.
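One of the parameter-driving costs mentioned earlier is the noise-drowning requirement, which the future-work discussion below proposes removing. A small numerical experiment, with purely illustrative parameters of our own, shows why the drowning width sigma prime must dwarf the width of the distributions being hidden: the statistical distance between a discrete Gaussian and a slightly shifted copy is tiny only when the Gaussian is very wide.

```python
# Statistical distance between a discrete Gaussian and a shifted copy,
# illustrating the noise-drowning requirement (toy parameters only).
import math

def stat_dist(sigma, shift, bound=10000):
    # Total variation distance between D_sigma and D_sigma + shift,
    # computed over the truncated support [-bound, bound].
    xs = range(-bound, bound + 1)
    w = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    z = sum(w)
    prob = {x: wi / z for x, wi in zip(xs, w)}
    return 0.5 * sum(abs(prob[x] - prob.get(x - shift, 0.0)) for x in xs)

# A wide Gaussian (sigma' = 1000) drowns out a shift of size 1;
# a narrow one (sigma = 2) leaves the shift clearly visible.
assert stat_dist(sigma=1000, shift=1) < 0.01
assert stat_dist(sigma=2, shift=1) > 0.1
```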
Then there is our construction in the ring LWE setting, where our concrete costs are around one megabyte: one ring element from the client to the server, and then one ring element back. But we should highlight that all of these costs come with the caveat that we must include zero-knowledge proofs to ensure that both the client and the server are acting honestly, and these zero-knowledge proofs incur much, much bigger communication costs. With those comparisons in mind, I'd just like to highlight some of the conclusions of the work and then some of the open problems which we think require solving. We can now build post-quantum VOPRFs assuming the hardness of well-known lattice-based problems, which complements the results of Boneh et al.; however, all current post-quantum proposals, including ours, suffer from very expensive costs, due to the zero-knowledge proofs that are required and also the large parameter settings which are required to ensure security. In terms of future work, we think it would be really valuable to realise more efficient post-quantum VOPRFs, in order to potentially realise some of the applications highlighted in the internet setting. The first thing that we think would be really valuable would be to reduce or remove all of the zero-knowledge proofs while still ensuring verifiability. In order to reduce some of the parameter settings, removing the noise-drowning approach, which, as I highlighted, the server uses to protect its key in its response to the client, and also using a potentially more efficient pseudorandom function alternative to the Banerjee-Peikert construction from 2014, may allow us to use a polynomial-sized modulus q, which would significantly reduce the size of the ring elements that the client and server send between each other.
One thing I haven't highlighted is the potential for using generic methods, such as garbled-circuit-based approaches or other secure computation mechanisms, for constructing VOPRFs between two parties. It would be valuable to do a detailed comparison between the custom approaches, based around isogenies, ring LWE or other lattice-based foundations, and these generic methods, to see which is going to be the most efficient way of constructing these protocols with post-quantum security in the long term. To finish, I'd like to thank you for listening to our talk. Please do get in touch with any of the authors if you have any questions, and I'd like to thank the organisers of PKC for allowing us to speak. Thank you.