Good morning, everyone. It's a real pleasure to have you all in your living rooms — as for myself, I am also in my living room; as you can see, there are couches behind me. So it's a real pleasure today to announce our first guest, Professor Yehuda Lindell. Yehuda Lindell is CEO of Unbound Tech and also a professor at Bar-Ilan University in Israel. Yehuda received his PhD from the Weizmann Institute of Science in 2002 and spent two years at the IBM T.J. Watson Research Lab as a postdoctoral fellow in the cryptography research group. Yehuda has carried out extensive research in cryptography and has published over 100 conference and journal publications, as well as one of the leading undergraduate textbooks on cryptography. Just a side note: it was my textbook on cryptography, and I spent a lot of time reading it. Yehuda has presented at numerous international conferences, workshops, and university seminars, and has also served on program committees for top international conferences in cryptography. In addition to Yehuda's notable academic work, he has significant industry experience in the design and deployment of cryptography in a wide variety of scenarios. It is a real pleasure to have you, Yehuda. Please proceed to your talk and have fun. Thank you.

Thank you very much, and I want to thank the organizers for having me. It's a great opportunity, and it's really great to be able to get this done in these times. So I'm sharing my screen and I hope you can all see it. The title of my talk is "The Path to Software-Defined Cryptography via Multi-Party Computation" — or really, how we can protect cryptographic keys in software in a way that actually provides strong guarantees. So just as some context for the talk: cryptography is a central tool of computer security. There's a lot more to security than cryptography, but it's something that is very central and crucial to it. And all of cryptography, as we know, rests on secrets.
If you want to do key exchange or encryption or digital signing or authentication, you need to have some secret, and that secret is what differentiates the attacker from the legitimate users. So once such a secret is stolen, all security is lost. It's really a binary situation: if your key is well protected and you're encrypting well, then you're secure, and if the attacker steals the key, then there's absolutely zero protection. And not only is it zero protection, it also becomes the easiest attack possible, because once the attacker has the key, they can typically attack via the allowed API, and that goes undetected. This is different from other types of attacks, where the attacker has to behave in an anomalous way and hopefully we can catch them; when the cryptography is broken, it often becomes just the easiest vector of attack. And that means that we need to protect the keys, and we need to protect them in a strong way so that they don't get stolen. The legacy solution was simply hardware. So you have HSMs — hardware security modules — you have smart cards, you have one-time password tokens, and these are supposed to be very secure boxes that prevent the key from being stolen from them. Actually, the vast majority of effort in these goes into physical security, which is somewhat strange today, because very often HSMs sit in very secure data centers. But in any case, these are supposed to prevent the key from being extracted by having a limited API that will only do cryptographic operations and nothing more. And even leaving aside the fact that the security of these actually is not perfect — if you just want to see one example, look at the talk at Black Hat USA last year by Ledger about a complete break of an HSM; it was also presented at the Real World Crypto conference in January this year — even leaving that aside, having these physical anchors when everything is virtual brings many difficulties.
The entire computing world is software — virtualization, containers, cloud — and that's because there are no more problems of procurement, you can remotely manage everything, there's a cloud economy, and you can have a single interface and environment whether you're in your data center or in a cloud, and so on and so forth. And therefore, these hardware anchors actually are a huge pain when it comes to providing a strong cryptographic infrastructure for an organization. And the question that we're asking is: what if we could move to software? Now, obviously, you could move to software, right? I mean, instead of having an HSM, just take that same type of code and put it into a virtual machine and you're done. But of course, what we're really asking is: what if we could move to software in a way that would still provide a very high level of security — a comparable level of security, if not even better? That would remove one of the big obstacles to deploying a strong and usable cryptographic infrastructure in organizations. And the approach that I'm going to present in this talk is that of using secure multi-party computation, or MPC. This is a well-researched area of cryptography; the research began actually in the late 1980s, and there have been literally thousands of research papers about it. It spent a long time being in the realm of pure theory. I would argue that for the first two decades of its life, it was pure theory. I started researching it in 1998, at the beginning of my PhD, and I fully admit to being a pure theoretician and enjoying it very much — how to define security, how to prove security, what assumptions are needed to get security. As time went on, and particularly in the last 10 to 15 years, it's actually become a very applied area of research.
A lot of MPC research today is not only published in the cryptography conferences; it's also in the security conferences, and in things like ACM CCS, which is one of the best academic security conferences, you'll see a lot of MPC research. And the idea of MPC is that you can compute on private data without revealing anything — something which sounds like a bit of a paradox, a bit of an oxymoron. How can you compute on something without actually seeing what you're holding? So I actually want to start with a toy example and then go into more detail later on; how this connects to the notion of protecting keys is something that we'll get to after we understand what the notion is itself. The toy example that I'm going to give is this: we have three cryptographers, and they want to compute their average salary without revealing it to anybody else. If you can actually see the salaries here, they are so low that they're just embarrassed to tell each other — and they're cryptographers. So anyway, they're paranoid and don't want to tell anyone their salary. And maybe they want to do this because they want to know: are they earning below average or above average? Of course, if they're earning below average, they'll be outraged; if they're earning above average, they'll think, yes, that's because I'm a better cryptographer. But in any case, say we have Alice, Bob and Eve — how can they work out their average? The first step in this toy example is that Alice is going to choose a very large random number, let's say 652,195. The exact domain from which you choose this actually makes a difference, but I don't want to go into the details; let's just assume it's a very large random number, much larger than their potential salaries. And then Alice would just add that random number to her salary and send it to Bob. So Bob gets this number, 772,195.
And what I want to note before we continue is that from this number, Bob actually cannot know anything about Alice's salary — or essentially almost nothing about Alice's salary — because the large random number drowns out the salary. Whether Alice's salary is 80, 100 or 140 thousand, if you're choosing a random number from a large enough domain, this will actually look exactly the same. So Bob gets this number but doesn't know anything about Alice's salary. And what Bob does is take the number, add his own salary to it, and send it to Eve. So Eve now gets 877,195, which is just the number that Bob got plus his own salary of 105,000. Once again, Eve looks at this number and says, well, it tells me absolutely nothing, because Alice's original random number again drowns out the actual salaries in there. And Eve does the same thing. She now adds her salary of 65,000 to that number — Eve is the unfortunate cryptographer with the lowest salary — and she sends that over to Alice. So Alice gets the number 942,195, which is essentially the random number that she started with plus the sum of all three salaries. So she can just subtract that random number and divide by three, and she gets the final result of 96,666. And what I want to argue is that nobody learned anything. We already talked about why Bob and Eve didn't learn anything: all they saw was a random number; essentially, the salaries were drowned out by that original random number. Alice, on the other hand, gets back something that actually contains the sum of the salaries, but nothing beyond that. She can't determine anything beyond the sum of Bob and Eve's salaries, because she just gets it in one number. And we know that the average and the sum are essentially the same thing — it's just multiplying or dividing by three. Therefore, this reveals nothing more than the average, and no individual has learned anything but the average from this procedure.
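The masked round-trip just described can be sketched in a few lines. This is a minimal single-process simulation, not a networked protocol; the salary figures are inferred from the numbers in the talk (652,195 + Alice's salary = 772,195 implies Alice earns 120,000), and the function name is mine.

```python
import secrets

# Salaries inferred from the talk's running totals (in whole currency units).
SALARIES = {"alice": 120_000, "bob": 105_000, "eve": 65_000}

def secure_average(salaries):
    # Alice picks a large random mask that drowns out any realistic salary.
    mask = secrets.randbelow(10**12)
    running = mask + salaries["alice"]   # Alice -> Bob: mask + her salary
    running += salaries["bob"]           # Bob -> Eve: adds his salary
    running += salaries["eve"]           # Eve -> Alice: adds her salary
    total = running - mask               # Alice removes her mask
    return total / len(salaries)

print(secure_average(SALARIES))          # about 96,666.67, as in the talk
```

Each intermediate value Bob and Eve see is statistically close to uniform over a huge range, which is exactly why it tells them nothing.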
And this is a protocol that's actually secure up to one party being corrupted. If two of them want to collude against the third — well, in that case the output itself already reveals the third party's salary, so you shouldn't be computing the function anyway. But this just gives you an idea that it's actually possible to compute on data without revealing anything whatsoever but the result. And that's a very powerful paradigm. And I want to stress that in the late 80s, when this was initially researched, and going forward, it was actually shown that you can compute any function that you want. Now, I'm not saying you can compute it efficiently enough that it could actually be run in practice, but you can compute any function that you would want in a secure way. In more detail, the setting of secure computation is one where you have parties with private inputs, and they want to compute a joint function of their inputs, while ensuring that nothing but the output is learned — I'm calling that property privacy — and also ensuring that the output is correctly computed, which is correctness. And you could think about many applications for this: comparing DNA, comparing databases, carrying out SQL queries on databases that are held by different parties. You can think about set intersection: I want to compare what contacts are in my address book versus somebody else's, without revealing anything beyond the contacts in common — or even without revealing anything about how many contacts we have in common. And there are many, many applications for secure computation. One thing which is important to understand is that these properties should be guaranteed even if some of the parties are adversarial — so they're cheating. And there are two main classes of adversaries. One is a semi-honest adversary: such an adversary actually runs the correct software, but tries to learn more than they're allowed to by looking at the transcript.
That's a relatively benign adversary, in the sense that they're not actively trying to cheat, and it can model things like inadvertent leakage. But in general, we much prefer to talk about malicious adversaries, where the adversary can run any software that they want and should still not be able to learn anything. Even if they know all the protocols being used, and the design, and so on and so forth, we can guarantee that they're unable to learn anything, no matter what attack they run. And furthermore, given that this notion rests on a strong theoretical foundation, the security is mathematically proven. So we actually have a mathematical proof of security that nothing can be learned. Of course, you have to be sure that the proof doesn't have any bugs, and you have to be sure that the implementation actually follows the spec, but those are really orthogonal problems that exist everywhere. And now what I want to argue is that we can use MPC in order to protect keys in software, by moving away from this sort of centralized trust model where a key sits somewhere. If the key is sitting in secure hardware, and that truly is secure, then okay — but it has all of the functional and operational and other problems that come with it. And if it's sitting in software, on any single machine, then it becomes a single point of failure: if you steal that key, you can lose everything. Basically, what we want to argue is that we can use MPC to build a distributed trust model where keys are shared amongst multiple machines, using a type of sharing such that no subset of the shares reveals anything whatsoever. A subset of the shares just looks completely random; you'd actually have to have them all. And further, the shares are never united, even in use. So it's not like, okay, we bring the shares together for one tiny fraction of a second for the operation.
We don't want to do that, because if that happens, then at that point the key is actually vulnerable to being stolen, and then you again have the single point of failure. But using MPC, the key is never brought together, and even if a subset of the machines are corrupted and controlled by an adversary who can run malicious software and so on, they still cannot learn anything. And the protection level you get is based on strong separation — I'll give you an example of that later on. Another advantage is mitigation against insider threats. If you split a key into even just two pieces and you have different administrators, then no longer does any single administrator have access to the key. And that's a big advantage. So to sum up how we can get this: you can think about splitting a key into two pieces, where each share is just complete random garbage without the other one — we'll see later on in the talk how this can actually work — and you store these shares in separate locations. And then you do something called refresh, which means that you change the sharing very frequently. So the key itself stays the same, because changing keys very often is extremely painful and not practical, but the sharing itself changes. What that means is that the attacker would actually have to breach both machines at the same time, before a refresh happens, in order to learn anything. Again, I'll go into more detail later on when we see the actual example of how this can work. The key shares are never combined, and in fact the key can even be generated in a distributed way, so at no point in its entire lifecycle, from generation through use, was the key ever in any single place at any single time. And again, these are mathematically proven security properties. So we're not just waving our hands and hoping for the best; we're actually resting on strong theoretical foundations from decades of research.
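The split-and-refresh idea can be illustrated with the simplest possible sharing, XOR of byte strings (the talk later uses additive sharing of an RSA exponent; XOR is the same idea for raw key material). This is an illustrative single-process sketch — in a real system the shares live on separate machines and the refresh randomness comes from a joint coin toss.

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(key):
    s1 = secrets.token_bytes(len(key))   # share 1: uniformly random
    s2 = xor(key, s1)                    # share 2: key masked by share 1
    return s1, s2

def refresh(s1, s2):
    r = secrets.token_bytes(len(s1))     # jointly tossed coins, in a real protocol
    return xor(s1, r), xor(s2, r)        # new sharing of the same key

key = secrets.token_bytes(32)
s1, s2 = split(key)
assert xor(s1, s2) == key                # together they reconstruct the key
s1p, s2p = refresh(s1, s2)
assert xor(s1p, s2p) == key              # key unchanged after refresh
# An old share combined with a new share yields garbage (except with
# negligible probability), so a staggered two-site breach learns nothing.
```

Each share on its own is uniformly random, so stealing one share — or one old share and one new share — reveals nothing about the key.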
So I just want to give an example of how you could deploy such a — we'll call it a virtual HSM — how you could deploy such a thing in a way that would actually make sense in a real commercial scenario. Think about an organization that wants to be both on-premise in its data center and also in multiple clouds. Let's assume we're splitting a key into two pieces, so each key is split into two shares. And then we have pairs of machines, where one we call EP and the other we call P. EP we call the entry point, and the other we call P for partner. The entry point is the machine that actually gets the request to do a cryptographic operation. So this would fill the same role as a network HSM — and, you know, you use the standard libraries: PKCS#11, Java crypto, Microsoft CNG, OpenSSL, or KMIP; any of those can be fully supported. The entry point gets the request for a cryptographic operation, runs MPC with its partner, and returns the result, the key never ever being brought together. In this specific configuration, you can see that you have one pair split between the organization's data center and AWS, the second pair split between AWS and Azure, and the third pair split between Azure and the data center. What this gives you is a combination of very high availability and robustness: you would need more than one of the clouds and the data center to go down — you need two out of these three going down — and that's very, very unlikely, especially because you can replicate this, of course, amongst multiple regions and locations; because it's all software, there's no problem just having automatic replication. That's the first thing. Secondly, the attacker has to simultaneously breach two completely different settings, two completely different environments, and that's very, very difficult — even a little bit hard to imagine that it could happen.
And especially, they have to be resident simultaneously on both. Of course, there's no perfect security, and if the adversary can do it, then this theoretically can happen. But it's very, very hard — almost inconceivable — to breach such different environments, such different settings, at the same time in order to gain any key information. Of course, all of these machines are further hardened, and you do all of the standard measures that you would do anyway. But this shows you how such a deployment could work. If you have this separation and you can do this MPC, then you get something very strong, very powerful, with high availability, disaster recovery, replication — and all in software. Let me give you another example of how you could use such a thing, and then we'll get to: okay, but how does it actually work? We are going to get to that, I promise. So let's think about two-factor authentication. We have these mobiles, they're powerful computing devices, and we are using them for two-factor authentication. There's Google Authenticator, there's Microsoft's for Office 365, there's Salesforce, and there are others. But these devices are all extremely vulnerable. The key sits on the device, and if malware is installed on the device — and that's not very difficult to do, because half of the apps in the app store are actually malware, especially in Android, but iOS also has its problems — then the key is vulnerable, and you can steal it and then obviously bypass everything. So how can we build a virtual smart card, or a virtual one-time password token, on the mobile, and not have to have these physical devices, but still get security? The same idea, exactly: you split the key shares, not between two servers, but between a mobile and a server, and you carry out the computation via MPC.
And that way, the key is never present on the mobile at any time to be stolen, because even if there's malware and the attacker can run malicious code, et cetera, they cannot learn anything about the key. Furthermore, the refresh can actually take place at every single operation, or at frequent intervals, and that gives you strong anti-cloning and detection: if somebody could get a clone of the device, then the clone and the real device would go out of sync. If the real device used the key first, then the clone's share would be complete garbage, and if the clone used it first, then the real phone would fail, and we would actually have full detection that an attack took place — and that's something which you typically don't get with hardware solutions. You can also fully audit everything at the server: because you require the server to do the operation, you have full visibility into all operations, and you're auditing in both places, which obviously is very important from a security perspective. And finally, it's easy to use, because it's on a mobile, and that is something that everybody always has; you don't need special hardware, and you don't have to worry about delivering it and procuring it and so on and so forth. And it's also a security advantage, because people will leave their smart cards and one-time password tokens lying around, but nobody will leave their mobile lying around anywhere, and if their mobile is stolen, they will know about it very, very fast. Okay, one final advantage, which I think is really important to talk about because this goes a little beyond what you can do using other technologies, is moving from key theft to key misuse. The standard model for key protection in cryptography is preventing the key from being stolen. And if you have a database of 10 million credit cards and they're encrypted and the attacker can steal the key, obviously they steal all 10 million credit cards, and that's a disaster.
If they can get to the machine that acts as the HSM, then they can ask for decryptions, and maybe they can do 10,000 before they're caught, and that seems, you know, okay — it's not the end of the world. So key misuse is not prevented, but for that application it's also not too bad. But there are many other applications where key misuse is a disaster. For example, code signing: a single malicious signing by an attacker is a complete failure. It can be like the ASUS firmware that was corrupted, and now you can infect many machines. It can be a banking application, where it's not enough to protect from key theft, because I just need one signing operation and I can deploy a malicious banking app that customers will accept as being valid. So using MPC, we can split a key not between two machines but actually between multiple machines, and we can define flexible quorums that say, for example, you need two out of three parties at R&D and one out of two at legal to approve the code signing operation. Once this has gone through, you have everything you need. In fact, you can have arbitrary sizes of sets and numbers of sets, and you can set up really elaborate business processes that are actually cryptographically enforced — and this is what's really important: all of these parties actually take part in the MPC, so it's not that you can bypass them and get to the machine that does the signing. If they don't approve — and they actually hold cryptographic material — then even if you get to the machine that does the signing, you can't do anything, because the key is not even there. And you can set the quorum sizes to be smaller or larger depending on need, and get very, very powerful key misuse protection.
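The quorum policy from the example — two of three at R&D and one of two at legal — can be written down as a simple predicate. To be clear, this sketch captures only the policy logic; the talk's point is that in the MPC deployment this check is cryptographically enforced by the approvers' key-share holdings, not by an if-statement on some server. All names here are made up for illustration.

```python
# Hypothetical approver identities for the "2-of-3 R&D, 1-of-2 legal" quorum.
RND = {"rnd-alice", "rnd-bob", "rnd-carol"}
LEGAL = {"legal-dan", "legal-erin"}

def quorum_ok(approvers):
    """Return True iff the set of approvers satisfies the signing policy."""
    approvers = set(approvers)
    return len(approvers & RND) >= 2 and len(approvers & LEGAL) >= 1

assert quorum_ok({"rnd-alice", "rnd-carol", "legal-erin"})
assert not quorum_ok({"rnd-alice", "legal-dan"})   # only one R&D approver
```

In the real system, each approver holds a share of the signing key and participates in the MPC, so an unauthorized subset simply cannot produce a signature — there is no policy-checking machine to bypass.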
All right. So that's really all the background on why we want to do such a thing and what value we can get, and I want to stress that these are not theoretical: these things actually exist, are running in production, and are being used around the world already today. So this is not just something that can be done in theory; it is actually possible to do this in production and get very good performance. What I want to show you now is how this can actually work, and I'll start with a friendly, fun problem and then get more into some math. I'm going to talk about the dating problem — something that the introverts amongst us may recall from our days in high school. Let's say that you want to ask somebody out: a guy and a girl want to check if they're both interested in going out, for example. If they both are, then the output should be yes, but if at least one of them is not, then the output should be no. So if I was interested and someone else wasn't, then the result would be no. And why does that help? Because when I ran this with Alice, Alice said no and I said yes, and the result came out no — but Alice actually doesn't know that I said yes; as far as Alice is concerned, it may be that I said no as well. So you could think of running this between all pairs of students in the class or something, and then nobody loses face; we're sort of protected, and we don't mind telling the truth about whether we would be interested or not. Technically, you can see that what they're actually doing is computing the AND gate. And we can actually do this with cards: Alice and Bob each get two cards. By the way, the reason why this is interesting, beyond what we showed before about the salaries, is that the salaries are numbers — you add them, they're in a nice algebraic structure — but an AND gate is something that seems more difficult. So let's see how it works.
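As a preview, the five-card trick about to be described can be simulated in a few lines: each party lays two cards by their answer, an ace goes in the middle, the row is randomly rotated, and the result is "yes" exactly when three aces sit in a row (viewed cyclically). This is my own sketch of that protocol, with cards as the strings "A" and "K".

```python
import random

def date_protocol(alice_yes, bob_yes):
    # Alice: king-then-ace for yes, ace-then-king for no; Bob is the reverse.
    alice = ["K", "A"] if alice_yes else ["A", "K"]
    bob = ["A", "K"] if bob_yes else ["K", "A"]
    cards = alice + ["A"] + bob          # face-up ace placed in the middle
    k = random.randrange(5)              # random rotation hides who said no
    cards = cards[k:] + cards[:k]
    # Output yes iff there are three aces in a row, cyclically.
    doubled = cards + cards
    return any(doubled[i:i + 3] == ["A"] * 3 for i in range(5))

for a in (True, False):
    for b in (True, False):
        assert date_protocol(a, b) == (a and b)
```

The three "no" arrangements are all rotations of one another, which is why the random rotation reveals nothing about who (or how many) said no.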
Alice and Bob each get two cards. If Alice likes Bob, she puts the king first and then the ace, and if not, she puts the ace first and then the king. Bob does the same thing but in reverse: if he likes Alice, he puts the ace and then the king, and if not, the king and then the ace. And I know that anybody looking at this is going to say, I don't know what you're talking about — but bear with me, you will in a moment. Then each turns their cards face down, and they put an ace in the middle. So Alice puts her cards down, Bob puts his cards down, and we have an ace in the middle. And we note now that if you were to turn them all face up: if Alice and Bob like each other, you get a king and an ace (that's Alice's order), the ace in the middle, and then Bob's cards, which are an ace and then the king — so you have three aces in a row. But if they don't both like each other, then it's going to be one of three configurations, and what you can see is that all three of these configurations are exactly the same up to a rotation: there's always one ace, a king, two aces, and a king. Even in the third configuration it's also two aces in a row, because the first and last cards are actually next to each other when you think about rotation. So what that means is that the parties will turn over the middle card and then randomly rotate the cards — Alice will pick them up and rotate, Bob will rotate — and then they'll open them up and look: are there three aces in a row up to rotation, or only two aces in a row up to rotation? And they'll know whether they're going out on a date or not. And the point is that because the three configurations which relate to either Alice saying no, or Bob saying no, or both saying no are all identical up to rotation, they actually have no idea which one is the correct one. All they know is that at least one of them said no, but not whether it's one or two. And therefore even introverts can solve the dating problem without losing face, and hopefully we can
help introverts out. But let's come to something which is more serious, which is: how can we compute the RSA function? I have all the modular notation in here, but really you can just ignore it; it's not important for understanding what's happening here. In RSA, the private key is a modulus n and an exponent d, and the public key is the modulus n and an exponent e. The private operation, either for signing or decrypting, is just to take some value y and raise it to the secret value d. Whether it's decryption or signing changes what y is, but the secret operation is raising to the power of d, and that d is the private key that has to be kept secret. So the first thing we want to do is share this value d between two servers, S1 and S2. S1 can just choose a random value d1, and server S2 will get d2, which is d minus d1 — the mod here is φ(n) and not n, for technical reasons that I don't want to go into here. And what you can note is that d1 plus d2 equals d. So they hold two random values that sum to the real secret key. The security to note at this stage is that d1 actually reveals nothing about d, because it's just a random value, completely independent of d — so obviously it can't reveal anything. But d2 also reveals nothing about d, because d1 completely hides it; in fact, you can look at d2 as a one-time pad encryption of d. And so each server, S1 or S2, having d1 or d2, actually knows nothing whatsoever about the key d. And now we have this situation where they hold additive shares of d, and you can compute the private operation y to the power of d by S2 computing just y to the power of d2, S1 computing y to the power of d1, and then multiplying them together — so just multiply these two public values, z1 and z2, together — and verify that it's correct. And what you can note is that because you're multiplying two values that have the same base
and different exponents, you're just adding in the exponent, and when you add in the exponent, you get that y to the d1 times y to the d2 equals y to the d1 plus d2, which equals y to the power of d. And so they've actually computed the correct operation, but neither of them saw the full key d — each one just did a local operation, we combined the results together, and nothing was revealed. In an actual deployment-type scenario, you can think of two servers holding d1 and d2. Some client wants to do a decryption; it sends a ciphertext to the first server — which, for example, we call the entry point in that cloud configuration — and that server sends the ciphertext to the second server. The second server sends back what we call a partial decryption, which is z2, that is, y to the d2. The first server then computes y to the d1, multiplies them together, and sends back the result. One thing that I want to stress is that we actually have formal definitions of what security means here — that nothing more is revealed about the key than the result — and we can formally prove this protocol secure according to that definition. I'm obviously not going to kill you with those details here, but you can actually write a formal definition and a formal proof that this protocol reveals nothing more than you would get from just seeing the pair of ciphertext and plaintext. So if RSA is secure, then this protocol is also secure, and this holds even if one of the servers is corrupted and you have an attacker running malicious code — they cannot learn anything at all. Well, what about the refresh? As I mentioned, we want to change the sharing of the secret frequently, and we'll see in a moment why that's actually very helpful. We have two servers holding d1 and d2, and they can do coin tossing — another MPC protocol — to get a random value r that neither can influence. Then the first server can compute d1 prime, which is d1 plus r, and the second can compute d2 prime, which is d2 minus r,
and then note that d1 prime plus d2 prime also equals d, because you're just adding r and subtracting r. So you actually haven't changed anything: they now hold a different sharing of exactly the same secret, and they can still continue working. But if an attacker stole d1, and then a month later managed to breach the other site — while no longer being in the first site — and stole d2 prime, all they would get is a pair of values that sum to complete garbage, because d1 plus d2 prime has this additional r value in there, and nothing is revealed. So that's why we do these refreshes at frequent intervals. This is something called proactive security in the academic literature, and we call it refresh here. Now, this is pretty easy — you might be looking at this and saying, okay, that doesn't look so difficult. And you're right: RSA is very easy, because it has a really nice algebraic structure. Elliptic curve Diffie-Hellman and ECDSA are harder, but they're still not too bad. The real question is, how could you do something like symmetric encryption or HMAC? AES and these functions are just complete chaos — permutations, table lookups — they have no algebraic structure at all. How could you compute them? And the answer is that we can use something called a garbled circuit. We take those functions and we represent them as a Boolean circuit — going back to the beginning of undergrad computer science, in discrete math courses or in computability courses, we can represent any function as a Boolean circuit — and then we construct an encrypted version of the circuit, and we can evaluate that encrypted version without revealing anything but the result. And I'll show you how that works now. So this is a simple AND gate: input wires u and v, output wire w. Here's the truth table of that gate: one AND one goes to one, everything else goes to zero. And the first step is to
actually replace all of the zero and one values by symmetric keys. So ku0 and ku1 represent zero and one on the input wire u, and likewise kv0 and kv1 on wire v, and kw0 and kw1 on the output wire w. But I can't give you this table, because in the third column you can see that three values are the same and only one is different, so you would automatically know which is the zero value and which is the one value, and that holds even if you randomly permute the order of the rows. What I can give you is four ciphertexts in random order, where each row's output key is encrypted under that row's two input keys. If I give you these four ciphertexts, and you have one key for the u wire and one key for the v wire, you will be able to decrypt exactly one of the ciphertexts and obtain the output. For example, if you had ku0 and kv1, you could decrypt the second ciphertext; the table will be in random order, so you wouldn't know it was the second one. Nothing else would decrypt correctly, and you would have no idea what you had actually computed, because everything just looks like random garbage: ku0 and kv1 look like random garbage, and kw0 is random garbage. So you've computed something completely obliviously, without knowing what you're doing. How can we now use that idea to garble an entire circuit? We take some circuit representation of a function and do the same thing: we write random key values on each wire and build three of these tables, one for each gate, again each in random order (here they're shown unpermuted so you can follow along). We also provide an output translation table, which lets you map the keys on the output wires at the end back into the actual output bits. This will become clearer once I go through how the computation works. Now let's say the input is 0101, and I want to compute the circuit over 0101. What I can do is give you the key for the zero value on the first wire, the one on the second, the zero on
the third, and the one on the fourth. I give you these four values, which just look like random garbage, and I say: now compute the circuit. You don't know what you're computing, but you can do the following. You have kb1 and kc0; you don't know whether they represent one or zero, but you have these two values, so you go to the first gate's table and try to decrypt one row at a time. You end up decrypting the third row, and only that one, and you get ke0 as the output. You don't know what it represents, but you have the value. Now you hold ka0 and ke0, so you do the same thing: you try decrypting each row, succeed only on the first, and get that output value. Likewise, you now have ke0 and kd1, and you manage to get the second value here; then you get kg1 at the top. Finally, you go to the output translation tables and you get zero and one. What I want to stress about what's happened here: you now know that the output is 01, but you actually don't know what the input was. It could indeed have been 0101, but it could also have been 0001 or 0110; all of these lead to the same output, and you have no idea which input it was. This might seem incredibly expensive, and indeed the AES circuit has about 20,000 gates, but with many optimizations you only need to send about 5,000 of them, and we can do this entire thing in under a millisecond: we can construct a garbled circuit, and we can evaluate one, in under a millisecond. That's how you can compute even symmetric functions like AES, and even AES-GCM and HMAC, in MPC, even though they have no algebraic structure whatsoever. It's interesting to note that this is a protocol dating back to Yao in the mid-1980s. It was a purely theoretical protocol at the time, but with many years of optimizations and algorithmic and
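The single-gate garbling described above can be sketched in a few lines of Python. This is a minimal toy, not an optimized construction (real systems use tricks such as point-and-permute and half-gates), and it uses SHA-256 with a zero-padding check as the "encryption" whose failure is detectable, a common textbook simplification:

```python
import os
import random
import hashlib

KEYLEN = 16  # length of each wire key, in bytes

def H(k1, k2):
    # Derive a 32-byte pad from a pair of wire keys
    return hashlib.sha256(k1 + k2).digest()

def encrypt(k1, k2, plaintext):
    # Pad with zero bytes so decryption under wrong keys is detectable,
    # then XOR with the derived pad
    msg = plaintext + bytes(KEYLEN)
    return bytes(a ^ b for a, b in zip(H(k1, k2), msg))

def decrypt(k1, k2, ct):
    msg = bytes(a ^ b for a, b in zip(H(k1, k2), ct))
    if msg[KEYLEN:] != bytes(KEYLEN):
        return None  # wrong keys: the result is random garbage
    return msg[:KEYLEN]

# Two random keys per wire, standing in for the bits 0 and 1
ku = [os.urandom(KEYLEN) for _ in range(2)]   # input wire u
kv = [os.urandom(KEYLEN) for _ in range(2)]   # input wire v
kw = [os.urandom(KEYLEN) for _ in range(2)]   # output wire w

# Garble the AND gate: encrypt each truth-table row's output key
# under that row's two input keys, then shuffle the rows
table = [encrypt(ku[a], kv[b], kw[a & b]) for a in (0, 1) for b in (0, 1)]
random.shuffle(table)

def evaluate(key_u, key_v):
    # With one key per input wire, exactly one row decrypts correctly
    for ct in table:
        out = decrypt(key_u, key_v, ct)
        if out is not None:
            return out

# Evaluate on u=1, v=1: the output key encoding 1 comes back,
# yet the evaluator never learns which bits any of the keys represent
print(evaluate(ku[1], kv[1]) == kw[1])  # True
```

Chaining gates as in the talk just means reusing an output key (here kw) as an input key of the next gate's table, with a final translation table mapping output-wire keys back to bits.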
cryptographic improvements, it's actually something that's in use and in production today, which in my mind is an interesting win for theory, showing that theoretical research, especially in cryptography but really in many fields, does eventually pay off and become something practical. I'm running out of time, so I'm going to skip a couple of things here. I just want to note that using this, you can really virtualize cryptography: you get something that supports business needs, with improved management and operations, all in software, and without giving up on security, because MPC gives you proven guarantees. You can have it FIPS-certified, and it's transparent and agile because it's software, and that's a lot of what we're looking for in cryptographic solutions. So thank you very much, and I'll open it up to questions.

Professor Lindell, it was a real pleasure to listen to your talk, especially for me, as I have a special inclination toward cryptography. We will start with the questions from the Q&A app; people will still have time to ask questions, and as we said, on the Twitch chat you can upvote questions if you feel a question is relevant or close to a question you wanted to ask. Otherwise, feel free to ask your own. So without further ado, let's start with the most upvoted question: what has been implemented for large-scale data pipelines? We know big data is a thing now, and people have to learn things at large scale across potentially conflicting enterprises. So can you tell us about MPC in large-data scenarios?

That's actually more difficult. In other words, we can use these techniques to compute essentially anything and everything, but on very big data it becomes much more challenging. When you want to compute smaller operations, like cryptographic operations, that's very efficient. We do have efficient protocols for things like set intersection: we can compute the intersection between very large sets. In fact, Google
uses this for validating how much advertisers should pay for advertising, based on how many advertisements convert into actual purchases. You can do quite interesting things with SQL on data that's separated, and there's been quite a bit of work in the academic community around doing learning, so federated learning, or classification, where if you have a model and someone has an example, they can get a result on that example without revealing it. These things actually are very efficient. But you shouldn't think: okay, wonderful, now I'm going to run MPC on the Facebook graph and have a social network that is fully private. That would be much too slow.

Yeah, thank you. I saw that Google released Private Join and Compute for intersecting data sets; I think that's using some of the techniques that you discussed, though maybe not the garbled circuit model.

Right, so they use MPC; they actually use a version of set intersection based on elliptic curve Diffie-Hellman, that's the underlying technology. There are actually much more efficient methods today; they chose that one because it's very simple, which has other advantages. But there are many different techniques, and we can do set intersection very, very fast today.

Perfect, thank you. Next question: in many industries, any use of cryptography will depend on its acceptance by standards. What is the status of standardization for the cloud implementation?

So the point here is that MPC only changes the way you compute the function; we don't change the function at all. We can provide the full standard crypto API that you're used to, it's FIPS 140-2 certified, and it's all NIST-approved algorithms. Therefore there's actually no problem in adoption, because you can look at this and work with it as you would with any other standard cryptographic library. It's just that you're getting
protection in software rather than no protection, or protection in hardware, and that's the big advantage. You know, as cryptographers, the last thing we'll tell anybody to do is use a proprietary cryptographic scheme or anything like that. No, we only want to use the standard schemes, and MPC just enables you to compute them in a way where the key shares are separated; the algorithm is the same standardized, certified algorithm that everybody accepts, and therefore it's not a problem.

Just a side question to this: in your opinion, what is preventing large-scale adoption of such technologies right now?

So I don't think that it's being prevented. There are companies, and I'm at one of them, who are actually doing this. It's new technology, and we need to educate the market, but the market has heard much more about MPC today, and we actually see these things in production quite significantly. It hasn't yet taken over the world; we hope that will happen, but it's not the situation of five years ago anymore. These are actually being used already quite significantly.

Perfect, thank you. So the next question is related: how much work is still research work versus development? It's quite an open question.

Yeah, so I actually think it depends what you're looking at. If you have a very specific problem that you want to solve, there is enough knowledge and MPC expertise out there to do it, but it's still a technology that requires high expertise. Although, to be honest, even implementing any cryptography requires expertise, right? If we asked an undergrad student who hasn't taken a course in cryptography just to encrypt something, the chances of them doing it right are very small; this is very, very difficult and requires really high expertise. But if you have a specific problem, then, depending on the size
of the problem, there are actually quite good solutions. For it to really become something ubiquitous, a lot more research is needed to fill in the stack, starting from programming languages: having languages where you can write code with private and public variables, where anything private is computed inside MPC, and the compiler knows which protocols to use, with guidance on how to program in such a language in a way that's friendly to MPC. There's a lot of research still to be done; it's a very active area, also pushing the protocols faster and faster, which is important because we want to solve more and more problems, and the more efficient it becomes, the more problems we can solve. So there's certainly development, and there's certainly a lot of research to be done. I think there'll be decades of research still to come in MPC, and in fact even more, but it's already at a stage that's certainly mature enough for real use in production.

Perfect. Yeah, let's hope people keep on researching even as it becomes ubiquitous. So the next question is a bit more technical: does amplifying elliptic curves and iterating more times bring stronger quantum resistance, as we do by raising the key sizes in prime-number factorization?

Good question. Even there, you know, there's discussion in the community as to how effective taking very large primes would be against a quantum computer. Currently we don't know; there isn't really enough understanding, because the number of qubits you need in the quantum computer depends on how much error you have, and there are different models. Personally, on the question of whether someone will really build a quantum computer that can scale to break crypto, I'm in the "if" camp, not the "when" camp; I don't think we necessarily know the answer. But if that actually happens, I would
probably be nervous about using something that just raises the size to stay beyond the current number of qubits. But there are other things out there: for asymmetric cryptography we have lattices, and there are other solutions such as code-based crypto. Right now, if people are really worried about quantum, they should just make sure their crypto is agile, so they can switch out algorithms quickly, because it's still years away; I don't think most people need to be really concerned yet about changing algorithms.

Yeah, perfect, thank you. That actually brings us to the next question, which was maybe partially answered: how confident are we that the cryptographic primitives that you described, namely the garbled circuit model, are quantum resistant?

So in the garbled circuit approach, for example, the actual garbled circuits use symmetric encryption only, and if you take AES-256, to the best of our understanding that's quantum resistant. In the protocols themselves there are other elements, but they can all be made quantum resistant; for instance, you can use lattices for oblivious transfer and the other elements you would want in a protocol. So MPC can work also in a post-quantum setting.

Perfect. So it's basically reduced to the Grover search algorithm?

Exactly, yeah. Of course, RSA itself wouldn't be secure anymore, and the MPC protocol for RSA that I presented is as secure as RSA; the fact that RSA would no longer be secure just means that protocol would be meaningless.

Yeah, perfect, thank you. So the next question is about RSA, specifically about the example that you gave, the RSA refresh scenario where we exchange an r: should r be secret from an eavesdropper?

Yes, it should, because if an eavesdropper saw r, they could keep their copy of d1 current: if someone got into the first computer, stole d1, and then just eavesdropped and saw all of the r's, they would actually
be able to keep up to date with the refreshes, whereas the whole point of the refresh is that if you stole d1 or d2, you only have one of them, and if you miss even a single refresh value, just one, then what you stole becomes completely useless. For someone who hasn't stolen one of the d's, the r values alone make no difference. But in general, the best practice is that all of these protocols are run over mutually authenticated TLS, so this isn't an issue.

Perfect, thank you. I think that's all the time we have for questions; I see the cues that we're running out of time. It was a real pleasure to have you, Professor Lindell, and I had a good time listening to your talk.

Thank you very much, and once again, I appreciate the opportunity to speak here. I'm really impressed by the organization and by making this event happen. So thank you.

Well, thank you, thank you very much, and have a good day.