Okay, while we're setting up the next talk, let me just remind you that if you're standing at the back and you would actually like to sit down, there are plenty of seats at the front and there's also seating upstairs, and there are also microphones upstairs, so if you do go upstairs, you're not losing the opportunity to ask questions later. So make yourselves comfortable, settle in for the long haul, and my speaker has just disappeared. Where's he gone? I guess he's getting mic'd up. So we'll just have a one-minute break now. Please enjoy looking at our list of sponsors. You made me nervous there, you disappeared. Yeah, I decided that maybe it's not a good idea to give the talk anymore. I'll introduce you very briefly. Okay, folks, so we're ready now with the next talk in this session. It's a contributed talk on the Bleichenbacher attack. I guess you can't have a session on TLS without talking about the Bleichenbacher attack at some point, and it's one of the more interesting attack papers from the last year, and the talk will be given by Eyal Ronen. Thank you, Eyal. Okay. Thank you for the introduction. I want to talk about the Nine Lives of Bleichenbacher's CAT. This is joint work together with Robert Gillham, Daniel Genkin, Adi Shamir, David Wong, and Yuval Yarom. I don't really feel that TLS needs an introduction for this crowd, so maybe I'll just skip it. So you know there is TLS, you just heard the talk about it, and I want to talk today about a specific class of cipher suites in TLS, which are the RSA key exchange cipher suites. They all use a padding scheme called PKCS #1 v1.5, which we're going to talk about today, and although we didn't see any of those cipher suites in the statistics that were just shown, it was once the most popular TLS key exchange option available; basically 100% of all TLS connections used to use this option. However, it has a very, very, very, very long history of practical implementation attacks.
We don't have room to put all of the citations here, and basically it means that we had multiple rounds of vulnerabilities discovered, patching the code, patching the standards, trying to make everything more secure, and in the end we think that it was supposed to be relatively okay. But moreover, this option doesn't even support forward secrecy. So basically I think we can all agree that today using the RSA key exchange is not such a good idea. However, when we last checked, at the end of 2019, it was still quite widely used: about 6% of internet connections still used RSA key exchange. And this is despite the fact that we have much, much better alternatives that are more secure and probably more efficient. You can ask yourself why that is, and the reason, like in many other cases, is probably backward compatibility. There's always going to be someone still using Windows XP somewhere, and we don't want to lose any client, so we need to support it. And if I'm not mistaken, the TLS 1.2 standard actually mandates that RSA key exchange has to be supported. Okay. So what we did in our work is we went over nine different TLS implementations and tried to look at the security of the RSA key exchange option. And we found that seven out of those nine implementations were vulnerable to a new type of cache-based padding oracle attack that enabled us to recover the plaintext. There were multiple vulnerabilities in different layers of the protocol implementations, and a lot of different oracles that we were able to find, and this is nice and interesting work. But now, let's assume that you're going to believe me, and I'm going to say that I'm able to break 6% of the connections on the internet, and you might ask me: so what? I already mentioned that those connections are probably the ones that come from Windows XP machines, and they have much bigger issues than TLS connections anyway. So why should we care? We are all very security aware.
We update our clients and our servers. We use the latest versions. So we shouldn't be affected by this kind of attack. So what I think is the main finding in the paper is that we show how we can use this RSA key exchange vulnerability to actually mount a man-in-the-middle downgrade attack, which basically means that we can cause even modern, fully up-to-date clients and servers to still use this RSA key exchange option. And to do it in real time, we had to show a new way to parallelize this type of attack. As a result, we claim that we were able to break 100% of all connections to servers that use a vulnerable implementation. And the nice thing is that this also works even if the client doesn't support RSA key exchange at all. So as a client, you have no way to protect yourself as long as the server uses a vulnerable implementation. This is not so good. So to try to understand how this type of attack works, let's go back to the basics. This is RSA encryption; I'm sure that most of you have seen it before. We have an RSA modulus, a public exponent E that is used for encryption, and a private exponent D that is used for decryption. And we can encrypt large numbers with it. But there's a question: how can we use this method to actually encrypt data? There are actually some real-world cryptography problems when we try to use this RSA encryption. As an example, let's assume that we're going to use the public exponent 3, which was widely used in the past, and we want to encrypt the number 1,000. And we're still going to use a 2048-bit RSA key; we want to be secure. The one problem with that is that 1,000 cubed is a relatively small number. It's much smaller than the modulus. And actually, calculating cube roots over the reals is something that's really easy, especially if you have a calculator. So if we want to be safe, we have to make sure that the number m that we try to encrypt is large enough.
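The small-message failure just described can be sketched in a few lines. This is a toy illustration, not a real key: the modulus below is just an arbitrary 2048-bit stand-in, not a product of two primes, because no reduction ever happens and the factorization is irrelevant to the point.

```python
# Sketch: why textbook RSA with e = 3 fails for small messages.
# The modulus is an arbitrary 2048-bit stand-in, not a real RSA modulus.

def icbrt(x: int) -> int:
    """Integer cube root via binary search."""
    lo, hi = 0, 1 << ((x.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = (1 << 2048) - 159          # stand-in 2048-bit modulus
m = 1000
c = pow(m, e, n)               # m**3 is far below n, so no modular reduction happens
recovered = icbrt(c)           # the attacker just takes an integer cube root
```

Since `1000**3` never wraps around the modulus, `recovered` is exactly `m`, with no private key involved.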
Another thing: let's assume that we want to encrypt some very small domain, for example the answer to a yes/no question. If we encrypt the same number, for example 0 or 1, over and over again, it's very easy to detect repetition. This type of encryption is vulnerable to what we call dictionary attacks. So if we want to be secure, we need to make sure that the number we encrypt looks random each time. How do we solve these problems? Here we have the PKCS #1 v1.5 padding scheme to the rescue, which basically tries to solve the problems I mentioned. This is how it looks in TLS 1.2. We start with a two-byte encryption preamble, the bytes 0x00 and 0x02. This makes sure that the number is large enough. Then we have at least eight random non-zero bytes; this makes sure that we encrypt something that looks random each time. We have a zero delimiter that tells us where the plaintext actually starts when we try to decrypt. And for TLS 1.2, we always have 48 bytes of a premaster secret with some specific structure. So this is how RSA encryption looks for TLS. Now, a short while ago, about 22 years, a cryptographer named Bleichenbacher showed a novel adaptive chosen ciphertext attack that basically exploits the fact that when we decrypt a ciphertext, we validate whether the padding of the plaintext is valid. The way this attack looks is relatively simple. Our client encrypts some message and sends the ciphertext to the server. And then we have our malicious attacker. This malicious attacker records the ciphertext. Now what he tries to do is use the server as an oracle, and this oracle is going to answer the following question: for every ciphertext I provide to the oracle, does the decryption of the ciphertext start with the bytes 0x00 0x02, as it's supposed to in the padding scheme, or not?
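The padding layout just described can be sketched as follows. The function name is mine, not from the standard; `k` is the RSA modulus length in bytes (256 for a 2048-bit key).

```python
import os

def pkcs1_v15_pad(premaster: bytes, k: int) -> bytes:
    """Sketch of the TLS 1.2 layout: 0x00 0x02 || non-zero random bytes || 0x00 || PMS."""
    assert len(premaster) == 48          # the TLS premaster secret is always 48 bytes
    pad_len = k - 3 - len(premaster)
    assert pad_len >= 8                  # at least eight random non-zero padding bytes
    padding = bytes(b % 255 + 1 for b in os.urandom(pad_len))  # map 0..255 to 1..255
    return b"\x00\x02" + padding + b"\x00" + premaster
```

The resulting byte string is interpreted as a big integer and encrypted with the raw RSA operation; the leading 0x00 keeps it below the modulus, and the random filler makes repeated encryptions of the same premaster secret look different.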
And all of those different vulnerabilities that I showed you previously: the whole goal is to use different side channels to give us this oracle, to turn a server into this type of oracle that gives us this information. When we have this oracle, what we can do is modify the ciphertext in a way that we're going to talk about later, and we get the answer: does the decryption start with 0x00 0x02 or not? Then we adaptively continue to modify the ciphertext, send more and more ciphertexts, and get more answers, more data. In the end, we can use this information to actually decrypt the ciphertext and recover the plaintext. So this is the Bleichenbacher attack, and now we want to attack TLS. When we try to attack things, first we should start with defining our goals. And our goal when attacking TLS, like probably most goals in life, is to try to get cookies. The cookies in TLS are tokens that are stored in the browser that allow us to access the server without re-entering the password each time we connect. They are stored inside the browser and sent at the beginning of each TLS connection, and that is the way the browser identifies itself to the server. The point is that if we are able to steal someone's cookies, we can just access the server and get all of the information out of the cloud; we don't actually need to break the connections and try to monitor all of the communication. So the attack scenario for RSA key exchange is relatively simple. We're going to sniff the TLS handshake and the first messages that are sent. We're going to use the Bleichenbacher type of attack to decrypt the premaster secret that is sent in the handshake. Then we're going to use it to decrypt the first messages, and we're going to get the cookie.
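The oracle described above can be sketched as a function of the ciphertext. This is a toy: the key is tiny (so the "plaintext" is only three bytes, far too short for real TLS padding), but the prefix check is exactly the question the side channels answer for the attacker.

```python
# Toy Bleichenbacher oracle: does the decryption of c start with 0x00 0x02?
# Tiny illustrative key; a real target would be 2048 bits.

p, q, e = 2003, 2011, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)
k = (n.bit_length() + 7) // 8        # modulus length in bytes

def oracle(c: int) -> bool:
    m = pow(c, d, n).to_bytes(k, "big")
    return m[0] == 0x00 and m[1] == 0x02
```

In the real attack the attacker of course has no `d`; the oracle's answer leaks through cache timing on the server instead.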
To be a little bit more concrete, let's assume that we have some bank with a very secure web server that is hosted at some cloud provider. And we have Mr. Smiley here who wants to connect and check his bank account. He accesses the bank account, the Cookie Monster sniffs the traffic, and then he's going to try to do the Bleichenbacher attack. As we mentioned, we need to measure some side channels, micro-architectural side channels in this case. So we also assume that we have our own malicious code that runs, for example, on another VM, but on the same hardware as the VM that the bank is using. This code is able to measure micro-architectural side channels, and using this information, we can do the Bleichenbacher attack and retrieve the cookies. Okay, so this is relatively simple, but we are very greedy. As we mentioned before, we don't want to just get the 6% of connections; we want to get all of the cookies. To do this, we exploit this vulnerability for a downgrade attack. The nice thing is that this only requires the server to support RSA key exchange. It works even on TLS 1.3 in the latest version, though it does require an active man-in-the-middle attacker. And the question is: can we get the cookies? One caveat is that if we want to do a man-in-the-middle downgrade attack, we need to finish the attack in under 30 seconds, because otherwise the TLS handshake is going to time out and no cookies are going to be sent over it. The problem is that we need a very large number of queries, and we only have time for about 600 queries before everything times out. So we don't get the cookies. As a first iteration, let's look at the Firefox browser. The Firefox browser, which I think has been fixed by now, had a very interesting property: we can prevent it from timing out using something called TLS warning alerts. This is something that has been known for several years.
And this allows us to do this man-in-the-middle downgrade attack. It basically means we can keep the session alive during the time it takes us to do the padding attack. After we finish the padding attack, we're going to finish the TLS handshake with the decrypted premaster secret that we have, and we can probably get the cookie. But there is still one problem: the user might notice that it takes several minutes for the webpage to render and see that something fishy is going on. So we want to be able to do it covertly. Here we're going to use another well-known old technique, a BEAST-like attack. It basically means that we're going to use the fact that we can run JavaScript code inside the user's browser, and this JavaScript code can repeatedly reopen TLS connections to any web server it wants. In those TLS connections, the same cookie is going to be sent to the server. This can be done without the user's knowledge, and the user doesn't notice any kind of delay. And again, the session cookie is sent, and the nice thing is that we need to break just one connection; that's enough, we're going to get the cookies. Okay, so what does this more complex scenario look like? We still have our bank server, and we have Mr. Smiley here; now he uses Firefox. He's going to access his web account, and since he doesn't have any money, in his despair he's going to look for interesting deals on the internet. For example, he goes to winbigprices.com, and as we know, all of these sites are scams with malicious people behind them. In this case, we have the Cookie Monster. The Cookie Monster is going to provide Mr. Smiley with some JavaScript that's going to repeatedly reopen connections to the bank account, and he is going to do a man-in-the-middle attack, using the same side channels that we mentioned before, and in the end he is going to get the cookies.
Okay, so this is very nice, but I personally really like Firefox, it's the browser I'm using myself, so I wanted to try to attack the other browsers also. To do that, we're going to try to parallelize the downgrade attack. The problem, again, is that most browsers will time out after 30 seconds. But we can still use the interesting fact that many companies have multiple servers around the world, and they usually reuse the same certificate across those multiple servers. So they reuse the same keys, the same RSA public and private keys. This is even true for companies that are certificate authorities themselves and could simply generate more keys; it's something that's commonly done. Now we can actually parallelize the attack, because there are multiple servers, and each server is going to be a separate oracle. There's been quite a lot of previous work on how to parallelize this type of padding oracle attack. So, can we get a cookie? The problem here is that no matter how we try to parallelize the attack, this is an adaptive chosen ciphertext attack, and when we look at it, if we want to break a 2048-bit RSA key, we still need at least 2048 sequential adaptive queries, there's no way around it, and we only have time for 600. So we need to find something better to do. To do that, I'm going to give a little bit more background, and we're going to look at maybe a simpler variant of this attack, called the Manger attack. In the Manger attack, we assume that we have the following oracle: if we give it a ciphertext, it's going to decrypt it and then check if the first byte is zero; if so, it will return one, otherwise it will return zero. And now we're going to exploit the fact that RSA is a malleable encryption. What does it mean?
It means that if we take any number S that we choose, raise it to the public exponent E (I can't see the laser pointer, but okay, this is S to the public exponent E) and multiply it by the ciphertext, then when we decrypt it, it's going to get decrypted to the original plaintext M times the number S that we chose. Now we can start the attack with what we call a blinding phase, which means we're going to generate random S values, raise them to the public exponent E, multiply by the ciphertext, and then send them to our oracle. If the oracle returns one, it means that this M times S is a relatively small number: the top eight bits are zero. So if we look at the whole search space of possible values of M times S, we now know that the top values are not possible anymore, and we've reduced our search space. What we do in this type of attack is iteratively continue to reduce the size of the possible interval; in the Manger attack, in the later stages, we usually learn one bit, that is, rule out half of the space, with each query. So what we get is that after a certain number of queries, we know that M times S is inside some relatively small interval. We can write it as: M times S minus the start of the interval is some relatively small number R. So we can continue the attack and keep reducing the search space, and if we're able to continue for the full 2048 queries, we'll have a search space of size one, and we're able to decrypt. If not, for example if we only have 600 queries, we still greatly reduce the search space. We go down from 2048 to about 400 bits, which is much smaller, but still too large to give us any meaningful information. So this is what we can get with our 600 queries. But now what we're going to do is assume that we can run this attack in parallel over multiple servers, so we get multiple such equations.
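The malleability property behind the blinding phase can be checked directly. A toy key with small illustrative primes is used here; the algebra is identical at 2048 bits.

```python
# Sketch of the multiplicative malleability used in the blinding phase.
# Small toy primes, not a real key.

p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

m = 42
c = pow(m, e, n)                    # the victim's ciphertext

s = 7                               # attacker's chosen blinding value
c_blind = (pow(s, e, n) * c) % n    # s^e * c mod n
m_blind = pow(c_blind, d, n)        # the server decrypts this...
assert m_blind == (m * s) % n       # ...and obtains m * s mod n
```

So by choosing S, the attacker controls a multiplicative shift of the unknown plaintext, and each oracle answer about the shifted value carves a piece off the interval containing M.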
And now we can see that this is relatively similar to the well-known hidden number problem, which was mostly used for breaking discrete-log-based cryptography. It basically means that we can reduce this problem of finding the plaintext M to the closest vector problem in a lattice. And here we're talking about a very small lattice; it's very easy to embed and solve using the LLL algorithm. We've shown in simulation that we need about five servers in order to decrypt a ciphertext under a 2048-bit RSA key using the Manger oracle. And then we can get the cookie. Okay, one thing that's important to note about a trick like this is that it is not an actual improvement of the attack; it is a trade-off. The initial blinding phase that I talked about is actually the most expensive part of the attack if we look at bits of information per query, and the parallel attack actually requires more queries in total than the original one. So why do we do it? The reason is that this is a trade-off between the total number of queries and the number of sequential queries that we need, and it allows us to do the attack in under 30 seconds using parallelization. So if you look at it, we have a very similar scenario to what we've seen before, but now we assume we have several servers that we can attack in parallel. We still have Mr. Smiley here. We take all of this information, put it inside the lattice, and in the end we get our cookies. Okay, so to summarize our results: we show new techniques using micro-architectural side channels to create these types of oracles in seven out of the nine implementations that we examined. We provide proofs of concept for both the Manger and Bleichenbacher types of attacks, and we show how we can parallelize the attack using the LLL algorithm. We had a very, very long and not very pleasant disclosure process.
We had to get several very large companies to try to play nicely with each other, and some of them didn't want to play nicely. But in the end, we were able to get all of them to patch their code, and most of them did it quite well. There are quite a lot of stories, but if you're interested, I will be more than happy to talk about them offline. We also provide many different recommendations in our paper on how to mitigate this type of attack, but the bottom line is: we shouldn't use RSA key exchange. It failed us too many times. What we've seen is that it's not impossible to implement this type of cryptography without side channels, but it's very close to impossible. We simply should use better cryptography. But if someone really, really, really, really must use this, the most important suggestion I have is: please separate your certificates. Don't reuse certificates between different servers, and don't even reuse certificates between different versions of the TLS protocol. For example, use different RSA certificates for TLS 1.3 and all of the previous versions. And with that, I will be happy to take any questions. Thank you. Hi. Just for my clarification: the oracle is present because it reports the error more quickly than it progresses with the computation when the padding was right? No, this is a micro-architectural side channel. We have different types of oracles, but for example, the code will access different areas in memory depending on whether the validation is okay or not, or whether the first byte is zero or not. That type of variation in the code. That's very subtle. Thank you. Other questions? So you portrayed this as a man-in-the-middle attack. Does the client need to offer an RSA-based cipher suite in its initial ClientHello in order for the attack to work? No.
As was previously shown, we can use the Bleichenbacher attack not only to decrypt ciphertexts, but also to forge signatures. So if, for example, the same certificate is reused between TLS 1.2 and 1.3, I can use the 1.2 server as an oracle to forge a signature and then use it to attack the TLS 1.3 protocol. Any other questions? Okay. So your attack requires JavaScript in the browser, multiple servers, and an attacker co-located with each of those servers in a parallel virtual machine. Yeah. Did you get any pushback from the vendors you talked to about the realism of this attack scenario? I think that at this point in time, people understand that, as we say, attacks only get better, et cetera, et cetera. And I think they were nicer than I expected. Most of them, I don't have any explanation. I feel that this attack model is really strong, but they were still enthusiastic enough to try to fix everything. Okay. I will buy you a lemonade later and you can tell me more. Okay. So let's thank Eyal again for his talk. And we'll get set up for the third talk now. Okay. I think these guys are good to go. Actually, we're not quite. No, they're not quite good to go. Sorry. Is this plugged in, or are we using this one? Pro tip to anybody thinking of presenting at RWC: never do a double-header presentation. It always leads to a complicated day. Sorry, that sounds like I'm setting them up for failure. Okay. Great. So now we come to the third talk, and it's also about TLS, about a system called DECO, and the talk will be given jointly by Fan Zhang and Ari Juels. Thank you both. Thank you. I'm going to talk today about DECO, a tool for building decentralized oracle systems. My name is Ari Juels. I'm going to present jointly with Fan Zhang here, who led the project and is incidentally on the academic job market, if you're looking to hire strong security people. So as I said, I'm going to talk about DECO.
I'll begin by explaining what an oracle is, and then motivate the need for DECO. Fan will then talk about the construction of the system a bit. One of the key applications of DECO is to smart contracts. You've all heard of smart contracts, of course, and you've probably heard of them in connection with tokens. Smart contracts enable the creation and management of tokens, which you can think of as a kind of application-specific cryptocurrency. They were of course a key ingredient in the token mania of two years ago, when prices were rising to stratospheric heights. Here you see the market cap of one token which I will not name. It seemed that everyone got swept up in the mania, including celebrities: Paris Hilton was promoting a token, the heavyweight boxer Floyd Mayweather promoted a couple of tokens and renamed himself Floyd Crypto Mayweather, and of course a litter of new celebrities was created. In 2018, more money went into token sales than venture capitalists put into early-stage internet startups, just to give you some sense of the scale here. But of course, if you know anything about tokens, you know how all of this ended. It didn't end well. Part of the reason for this, I would contend, is a limitation in smart contracts themselves. The problem with smart contracts, because they run on top of consensus algorithms, is that they lack internet connections. That is to say, they can't get data about the real world directly. This isn't a big problem if all a smart contract is doing is managing tokens; it's essentially just doing some internal bookkeeping. But it means that smart contracts, in cases where they can't access real-world data, really can't do anything interesting. Tokens are, technically speaking, not terribly interesting. Let me give you an example. One popular application of smart contracts that's trotted out again and again is insurance.
In particular, people talk about the fact that flight insurance in principle can be implemented entirely with a smart contract. The idea is very simple. If you want to take out a policy, you communicate with the smart contract: you let it know you want a policy, you send it some money, and you've got your policy. Then, if your flight is canceled or delayed, the smart contract pays out automatically. No headaches with an insurance company dragging its feet, refusing to pay, and so on and so forth. This is all well and good, but to realize this application, the smart contract of course needs to know whether or not your flight was canceled or delayed. How is it going to do this? The solution to this problem of smart contracts not being able to reach out to servers themselves to get data about flight delays and so forth is what's essentially a type of middleware called an oracle. It's software that runs off-chain, and its job is to fetch data from trustworthy servers, data sources, and push it to smart contracts. Often what an oracle will do is respond to a query sent by a smart contract on-chain, and off-chain it will go fetch the data the smart contract is requesting. The concept behind an oracle is very simple, but realizing workable oracles involves solving two major problems. The first is a problem of integrity. I said the job of the oracle is to get data to push to a smart contract. How do we know that the oracle didn't corrupt the data it fetched, or that it didn't just cook up the data on its own? The usual solution to this problem is, of course, decentralization, which is to say that a smart contract will communicate with multiple oracles simultaneously, for instance three oracles, and get the same piece of data from all of these different oracles.
It then looks at the majority result returned by the oracles, and if one oracle happens to be corrupted, the smart contract is still going to receive correct data. The second challenge, though, is the harder one, and this is the one we focus on with DECO. This is the problem of dealing with private data. There are many types of private or personal data that users may want to relay to oracles to be sent to smart contracts. For example, a user may want to show that she's over 18 and therefore able to engage in a legally binding contract, or she may want to show that she has a certain amount of money and is therefore eligible to participate in a token sale, or she may want to show that her flight was delayed, if for instance she's purchased a flight insurance policy. And it's important that the user can show her flight was delayed without her flight information having to sit in the smart contract, visible to the whole world. For all of these different types of data, there are trustworthy servers out there able to furnish them. For example, the fact that Alice is over 18 can be attested to by, say, the Social Security Administration. If Alice communicates with the SSA website, she can find her birth date, and if she's communicating over TLS, it will be securely transmitted to her; she has that guarantee. But how does she demonstrate to an oracle that the birth year she saw was 1985, and therefore that she's over 18? Here we encounter a fundamental problem: TLS doesn't sign data. It uses symmetric-key crypto at the record layer, so there's no way for Alice to prove to the oracle that she saw a particular birth date. She can only prove it to somebody who happens to be shoulder surfing. There are a couple of current approaches to address this problem. The first, a natural one, is to change TLS to sign data. This is the approach proposed in a very nice paper from ETH Zurich on a system called TLS-N.
It works well, but the problem of course is that it requires adoption. Although there are movements afoot to get signing into TLS, it hasn't happened yet. The second possible approach is to use trusted hardware, trusted execution environments like Intel SGX. My group has put forth a platform called Town Crier that does exactly this. This works well, but there are a couple of problems with this approach too. First, there's an extra trust assumption here: you have to trust the trusted hardware, and in the wake of Spectre, Meltdown, and Foreshadow, there are people who don't trust trusted hardware. A second problem is that trusted hardware is not always available, in the sense that Intel historically has not allowed people to load arbitrary code into enclaves, although I understand that's changing. So with this, let me introduce the DECO protocol. DECO facilitates privacy-preserving proofs to oracles about TLS data fetched by users, and Fan will give you a more precise notion of what privacy-preserving means than I've given thus far. So it enables these proofs to oracles, and thereby to smart contracts. It doesn't require trusted hardware, so it doesn't have that limitation that Town Crier does. It requires no server-side modification; you can think of it as being transparent to HTTPS-enabled servers. And it works with modern TLS versions, 1.2 and 1.3. There is some superficial resemblance, for instance, to TLSNotary and other such solutions, but those only work with TLS 1.1. With that, let me hand the podium over to Fan. Thanks. Yeah, in the second part of the talk, I will present how the DECO protocol is constructed. Let's begin with a proper introduction to the players in the game. We have three parties in the DECO protocol: a TLS server running unmodified TLS, a prover, and a verifier. For blockchain applications, the verifier is also called an oracle, but I will use "verifier" throughout the rest of the talk.
As Ari has explained, the primary purpose of DECO is to prove the provenance, or origin, of TLS ciphertext. What does that mean? For example, suppose the server is a bank, and the prover, which is also the TLS client, had the following interaction with the server. Here the blue boxes denote TLS ciphertext. The prover's goal is to convince the verifier that this particular ciphertext is indeed from the bank. Again, this is TLS ciphertext, which is not signed by the server; instead, TLS uses a MAC to protect integrity, but since the prover knows the MAC key, she can forge arbitrary ciphertext. This is the challenge. Once the origin, or provenance, of the ciphertext is established, the prover can either choose to simply decrypt the ciphertext, or prove statements about the plaintext in zero knowledge without revealing the content; for example, to prove that her balance is greater than a threshold without revealing the exact balance. The main idea of DECO is to hide the MAC key from the prover until she commits to the ciphertext that she wishes to prove provenance about. This is achieved in DECO using a protocol we call the three-party handshake. If you wonder how three people would go ahead and shake each other's hands, we do provide a little visual aid. For now I'm assuming CBC-HMAC, but I will talk about another popular cipher suite later. In a nutshell, at the end of a three-party handshake, the client and the server will end up having the same encryption key k_Enc, just as in a usual TLS handshake, but the MAC key will be secret-shared between the prover and the verifier. This is the main idea behind the three-party handshake, and the three-party handshake is the first step of DECO. Here is the overview of the protocol flow of DECO.
In the first phase, the three-party handshake, shared keys are generated. In the second phase, the prover goes ahead and interacts with the server as a regular TLS client, querying the server and receiving a response. Note that at this point the prover still doesn't have the full MAC key to verify the integrity of the response, but that is fine, because the prover will get a chance to do it later in the protocol. Then the protocol proceeds to the third phase, proof generation. The prover sends the response over to the verifier to commit to it and gets back the other piece of the MAC key. Now that the prover has the full MAC key, she can verify the integrity of the response and then choose to either decrypt it or prove statements in zero knowledge, according to the specific requirements of the application. This is the high-level flow of the DECO protocol. In the rest of the talk I will talk about each step briefly. First, let's take a closer look at how a three-party handshake actually works. We already know what to expect in the end, but how does it really work? The three-party handshake is based on the standard two-party TLS handshake, which has two steps: key exchange and key derivation. We'll talk about what they are in the next slide, but the challenge here is that we need to shoehorn in this third party in a way that is completely transparent to the server. In order to do this, we leverage the homomorphic properties of the key exchange for the first step, and we resort to secure two-party computation for the second step. The first step is, as I said, key exchange, where the client and the server establish the session secret using protocols such as Diffie-Hellman. There are other key exchange protocols available, but elliptic-curve Diffie-Hellman is the recommended algorithm to use; therefore it's the focus of the paper.
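The commit-then-reveal idea in the flow above can be sketched in a few lines of Python. This is my own toy illustration, not DECO's code: HMAC-SHA256 stands in for the TLS record MAC, XOR-sharing stands in for the secret sharing produced by the handshake, and all the variable names are mine.

```python
import hashlib
import hmac
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Phase 1 (stand-in): the MAC key ends up XOR-shared between prover and
# verifier, so neither party alone can forge a tag.
share_p = secrets.token_bytes(32)          # prover's share
share_v = secrets.token_bytes(32)          # verifier's share
mac_key = xor(share_p, share_v)            # full key, which the server derives

# Phase 2: the server MACs the response record under the full key.
record = b"HTTP/1.1 200 OK ... balance=4096"
tag = hmac.new(mac_key, record, hashlib.sha256).digest()

# Phase 3: the prover first commits to the response...
commitment = hashlib.sha256(record + tag).digest()
# ...and only then receives the verifier's share, reconstructs the key,
# and verifies the integrity of the response.
reconstructed = xor(share_p, share_v)
assert hmac.compare_digest(
    hmac.new(reconstructed, record, hashlib.sha256).digest(), tag)
```

The point of the ordering is that by the time the prover can check (or forge) MAC tags, she is already bound to one particular ciphertext.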
In a three-party handshake, you can think of the prover and the verifier as being two clients with independent Diffie-Hellman public keys. But in order for this to remain transparent to the server, the prover combines the two public keys into one and uses that to finish the standard Diffie-Hellman key exchange with the server. Then all three parties compute their Diffie-Hellman values just as before, by raising the peer's public key to the power of their private key. But now you can verify that the prover and the verifier end up with a secret sharing of the session secret Z held by the server: here Z_P plus Z_V equals Z, the session secret held by the server. Note that these values are points on the elliptic curve, again because we focus on the recommended elliptic-curve version, ECDH. Well, now that the session secret is derived in the desired form, the second step is to derive a bunch of keys by running the session secret through a PRF. Remember, the server runs standard TLS, so that's what the server will do; we have no control over the server's behavior. Therefore the prover and the verifier need to do the same to get the same set of keys. However, they can't do it directly, because they can't give each other their shares of Z. Naturally, a solution is to use secure two-party computation to compute on their private inputs. It seems like we just need to construct a circuit that takes in two numbers, adds them up on the elliptic curve, and runs the result through a PRF, and we are done. However, this simple approach isn't quite sufficient, because 2PC protocols are usually optimized for either binary or arithmetic circuits, but here we have both types of computation involved: the first step is indeed arithmetic, and the second step is the HMAC-based PRF, which is a binary operation.
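The combined-key trick above can be checked in a few lines. This is a toy sketch of my own on secp256k1 (not DECO's implementation, and the tiny private keys are obviously not secure): the server's secret x_S applied to the combined public key Y_P + Y_V splits into the additive shares x_P·Y_S and x_V·Y_S of Z.

```python
# secp256k1 parameters (a = 0, b = 7) and base point G.
p = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

x_S, x_P, x_V = 0xA11CE, 0xB0B, 0xCAFE   # toy private keys (server/prover/verifier)
Y_S = mul(x_S, G)                        # server's public key
Y_C = add(mul(x_P, G), mul(x_V, G))      # prover combines the two client keys
Z   = mul(x_S, Y_C)                      # session secret as the server computes it
Z_P = mul(x_P, Y_S)                      # prover's share
Z_V = mul(x_V, Y_S)                      # verifier's share
assert add(Z_P, Z_V) == Z                # the shares recombine to Z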
So that means a direct implementation of this using generic 2PC would be suboptimal, because optimized 2PC works better with a single type of circuit. Therefore, we apply several optimizations. First, we essentially move the first step outside of the circuit by using a custom 2PC protocol based on additively homomorphic encryption. This means we can now work with a pure binary circuit. Secondly, we hand-optimized the binary circuit: we started with an efficient SHA-256 circuit from previous work, added functionality to make a PRF out of it, and hand-optimized it to reduce the size. The outcome of all of this is a handshake circuit with an AND complexity of 7070K. Concretely, the runtime of the three-party handshake protocol is about 1.4 seconds in a LAN setting and about 5.7 seconds in a WAN setting. Although this is not blazingly fast, it is sufficient for our purposes, because we envision DECO being used periodically in most of its natural applications. So far we've been talking about CBC-HMAC, but DECO supports GCM as well. The handshake for GCM is essentially the same as what you saw, with a number of differences. One of the important differences is that since a GCM ciphertext by itself is not a commitment, we need to add a key commitment to the handshake process. Also, the GCM key is shorter, and there's only one key for each direction, so the key derivation is slightly different. But overall, these changes should have small to no impact on the performance of the handshake. And since GCM is used in both TLS 1.2 and the latest TLS 1.3, by supporting it DECO works with modern TLS versions. Okay, let's zoom out and take another look at the big picture. We've been talking about the first step: how can we generate shared keys?
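To make the key-derivation step above concrete: the HMAC-based PRF the binary circuit has to evaluate is, in the clear, just the TLS 1.2 PRF of RFC 5246 (P_SHA256). It is only a few lines of Python; the hard part in DECO is evaluating these nested HMACs inside 2PC on a secret-shared input. The dummy input values below are mine.

```python
import hashlib
import hmac

def prf_tls12(secret: bytes, label: bytes, seed: bytes, n: int) -> bytes:
    """TLS 1.2 PRF (RFC 5246, Section 5): P_SHA256(secret, label + seed)."""
    seed = label + seed
    a, out = seed, b""                 # A(0) = seed
    while len(out) < n:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:n]

# TLS 1.2 derives a 48-byte master secret from the key-exchange output,
# then expands it into the record-layer keys (all values here are dummies).
pre_master = bytes(32)
randoms = bytes(64)                    # stands in for client_random || server_random
master = prf_tls12(pre_master, b"master secret", randoms, 48)
key_block = prf_tls12(master, b"key expansion", randoms, 104)
```

In DECO, `pre_master` is exactly the value that exists only as two shares, which is why this whole function becomes a binary 2PC circuit.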
After that, the second step for most applications is just a regular TLS session, so I will skip it and talk about the final phase. The task here is: now that the prover can prove provenance of ciphertext, we can essentially treat the ciphertext coming from the server as a commitment, and the prover has multiple options as to what to do with that commitment. The simplest thing is to just decrypt the whole thing, opening the commitment entirely. Although this completely foregoes privacy in the response, it still proves provenance, so it would still be useful for some applications. Or the prover could, as I said, prove statements about the plaintext in zero knowledge. Of course, a generic zero-knowledge proof over a large ciphertext would be expensive, but there are still interesting operations we can do. For example, in the paper we propose several ways to allow users to decrypt the ciphertext partially: by leveraging the record structure in TLS, we can do record-level and block-level selective opening pretty efficiently. Another example of what provers may do is to combine selective opening with other bells and whistles, such as proving a statement about part of the plaintext, which is usually a lot shorter than the full plaintext and so hopefully gives you more efficient zero-knowledge proofs; for example, that her age is over 18, or that her balance is greater than a threshold. We actually implemented both of these examples in the paper. The cost to generate these proofs is application-specific; it depends entirely on what statement you want to prove and how complex it is. I will give you an example here, and we have more data in the paper. In the age-proof application, we implemented a prover who proves that her age is over 18, according to data from a university registrar website.
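To illustrate what kind of statement gets proven, here is the age example written in the clear, on a made-up plaintext of my own. In DECO this logic is expressed as a circuit over the committed TLS plaintext, so the verifier learns only the verdict; running it in the clear like this is just to show the shape of the claim.

```python
import json

# Hypothetical server response (format and values are my own invention).
plaintext = b'HTTP/1.1 200 OK\r\n\r\n{"name": "Alice", "birth_year": 1999}'

# Selective opening: only the body is relevant to the statement.
body = plaintext.split(b"\r\n\r\n", 1)[1]
record = json.loads(body)

# The range proof's claim, evaluated against a reference year (assumed 2020):
over_18 = (2020 - record["birth_year"]) >= 18
assert over_18   # only this verdict is revealed; name and year stay hidden in the real proof
```

The zero-knowledge machinery replaces the `assert` with a proof that this predicate holds over the committed plaintext, without disclosing the plaintext itself.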
The proof involves opening part of the ciphertext, parsing some of the strings, and doing a range proof. All of this can be done in about four seconds using libsnark. Again, although this is not blazingly fast, for infrequent age-verification applications we think this is suitable and perfectly fine, especially given the cryptographic guarantees we get from this type of proof. To summarize: DECO essentially allows users to export their private data to others, with integrity guarantees, without the server's help. You heard about the blockchain applications from Ari; there are also non-blockchain applications, such as age proofs, and you can think of others, such as proving ownership of online accounts, proving the integrity of personal data to enable a marketplace for such data, et cetera. That concludes my part of the talk. In summary, DECO is a privacy-preserving oracle protocol that allows you to prove statements about your TLS connection with a server. It works with modern TLS versions, requires no trusted hardware, and requires no server-side modification. I highly recommend visiting our website, deco.works; we have a blog post there with more information, and our paper is also posted online. And with that, I will conclude the talk. Thank you. Happy to take questions. Thank you. We do have time for some questions. We'll start here with Daniel. Yeah, you did this 2PC PRF evaluation. What did you use for that? Was it passively or actively secure, and what technology? Yeah, we used a maliciously secure garbled-circuit-based 2PC from CCS 2017. So it seems as though the query is just directly encrypted by the client. How do you avoid issues where the response, take the age verification example, what if the registrar's page doesn't include the student's name? That's a great question. For simplicity here, I assumed the response has the information for the query.
But actually, you can do similar things for the query: you can generate the query in 2PC. We discuss this in the paper. So this might be very closely related to what you just said about proving the content of the query. It seems like, given the focus on unmodified servers, currently the servers don't really expect that the strings they send back will be presented to a third party out of context. So it seems like it might be a strict requirement that the verifier understands the context of what strings may be sent by the server. That's a very good question. We actually discuss this property, which we call contextual integrity: proving not only that this is a substring of the plaintext, but that the substring appeared in the expected location. For popular data formats, such as HTML and JSON, there's actually a way to do it rigorously by parsing part of the data. But yeah, that's a great question; we do have a discussion in the paper, and I'm more than happy to discuss it offline. Thanks. Could you quickly go to the slide where you do the key exchange? Right here. Oh no, sorry, the previous one. Thank you. So, how much trust is put on the verifier? I'm wondering if they could choose a value of x_V that is weak somehow, like, I don't know, the group order divided by two, or some arithmetic trick that I cannot come up with on the spot, and use it to recover the shared secret even though they're not supposed to. Well, certainly, note that the verifier chooses this value independently, without seeing the prover's value. One attack we considered is: can the prover, for example, after seeing the verifier's choice, choose a key that is somehow correlated to the verifier's choice? The thing is, if the prover chooses the key dishonestly, then in the later stages of the protocol, the 2PC and the zero-knowledge proof parts will break, and the prover won't be able to generate a sound proof.
Yeah, I don't know if that answers it. But I'm wondering about the case where the verifier is dishonest and tries to recover the plaintext. Right, so both prover and verifier can be actively malicious. And if the verifier chooses a weak key, it is only to the disadvantage of the verifier, because that would allow the prover to somehow circumvent the integrity. Right, I mean, the two parties together are acting as a joint client; that's the important point here. Otherwise, it just looks like an ordinary client. Yeah, I was just wondering how much they trust each other. They don't. I have a question about the fact that the prover doesn't have the MAC key to verify with. You have counter-mode encryption, and at the beginning of the transaction between the prover and the server, a malicious adversary in the middle can modify whatever the server sends, and this might enable him to attack the prover even before we start this part of the protocol. Yeah, that's a great point. Here, I think I made a slight simplification by assuming there's only one response from the server. If the prover and the server talk back and forth, this is, I would say, a less common scenario for DECO to be used in, but in that case, yeah, in the middle of the session the prover does need to run some additional 2PC to verify MAC tags before it proceeds. Yeah. Okay, thank you. If you use it as an oracle in smart contracts, did you try to estimate gas costs? In particular, compared to, I don't know, wrapping everything into a SNARK, or do you wrap everything into a SNARK right here? I think the envisioned use is to prove the fact to the oracle, and the oracle will issue a message; it will, for example, sign the fact and send it to the smart contract. So from the smart contract's point of view, there will not be a SNARK involved. Right, so we prove facts to the oracle, and yeah. A smart contract can't participate in MPC because it has no secret state. Or maybe it could emulate one. Okay, thank you. Is it quick?
Yeah, it's quick. In the MPC, you break it into two parts. You're using malicious security, but what prevents the parties from feeding incorrect input into the second part? I think the simple answer is that there are two ways to deal with it. One is that the protocol as is is still secure, because this will be caught later on: the MPC will perhaps output junk and it won't be detected there, but later on, in the proof stage, the proof will fail, because the prover won't have the right key. So you take care of it later on. It's taken care of. But there's also a slight tweak to get rid of this completely, by having the two parties commit to their key shares and prove knowledge of them. But even the protocol as is, without the additional tweak, already deals with that. Okay, thanks. Okay, thank you everybody for the great questions for that talk. Thank you to the speakers, and it's now time for coffee. Thank you.