Today I'll talk about conditional disclosure of secrets with nonlinear reconstruction. I'm Tianren. This is joint work with Vinod and Hoeteck. You have already heard a talk about conditional disclosure of secrets, or CDS, so here's a quick recap of the definition. There are three parties: Alice, who holds the predicate f; Bob, who holds the input i; and Charlie, who knows both f and i. Alice and Bob also know a secret bit s, and they want to disclose the secret to Charlie if and only if f(i) = 1. This model is like multi-party computation, but with minimal communication. Alice and Bob cannot talk to each other, but they share a random tape. The tape is private in the sense that Charlie cannot read it. Alice and Bob each send one message to Charlie, and they receive no messages from Charlie. Afterwards, Charlie should be able to learn the secret if f(i) = 1. For privacy, when f(i) = 0, Charlie should learn nothing about the secret. We aim for information-theoretic security: when f(i) = 0, the joint distribution of the messages should be simulatable by a simulator who doesn't know the secret. So far this is the same as the previous talk, but here we stick to the case where Alice's input is a truth table D, so Charlie learns the secret if and only if the i-th entry of the truth table is 1.

CDS is very useful. In the paper by Gertner, Ishai, Kushilevitz, and Malkin, it was used to protect data privacy in PIR schemes. It also implies secret sharing for forbidden graphs. It looks like one-time attribute-based encryption and can be lifted to multi-use attribute-based encryption. It's also an interesting primitive in its own right. Let's look at a toy construction to see how CDS works. From the private random tape, Alice and Bob read n random bits r_1, ..., r_n. Bob sends the secret bit XORed with the i-th random bit, s ⊕ r_i; this is like a one-time pad. Then what Alice does is natural.
She sends all the random bits r_j such that D_j = 1. So the secret is one-time padded by r_i, and r_i is leaked by Alice if and only if D_i = 1, so we get both privacy and correctness. In this scheme, Bob's communication is minimized to one bit, and Alice's communication is at most n bits. When D_i = 1, Charlie reconstructs the secret in a very simple way: he just XORs the one bit sent by Bob with one of the bits sent by Alice. Here is another toy construction, in which Alice's communication is minimized to one bit instead. Alice sends the secret XORed with all the random bits r_j such that D_j = 0, and Bob sends all the random bits except r_i. You can see there is only one unsent random bit, namely r_i, and r_i is used to mask the secret if and only if D_i = 0. Therefore we again have both privacy and correctness. When Charlie wants to reconstruct the secret and D_i = 1, he also does something very simple: he takes the bit sent by Alice and XORs it with some of the bits sent by Bob. In both examples, Charlie reconstructs the secret by XORing some of the bits he received, which is a linear function mod 2. We say a CDS has linear reconstruction if Charlie recovers the secret by applying a linear function to the messages he receives. So you have now seen two CDS constructions with linear reconstruction, with communication complexity (n, 1) and (1, n). By balancing between them you get another CDS, still with linear reconstruction, whose communication complexity is O(√n). This was also the best known before our work. Gay, Kerenidis, and Wee proved that if the reconstruction is linear, √n is actually the best you can get. In the same paper, they showed an Ω(log n) lower bound for unrestricted reconstruction. So, as you can see, there is an exponential gap between the lower bound and the best construction.
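The two toy constructions above can be sketched in a few lines of Python. This is just an illustration of the talk's description, with my own variable names: `D` is Alice's truth table, `i` is Bob's index, `s` is the secret bit.

```python
import secrets

def toy_cds_bob_light(D, i, s):
    """Toy CDS #1: Bob sends 1 bit, Alice sends at most n bits."""
    n = len(D)
    r = [secrets.randbelow(2) for _ in range(n)]      # shared private random tape
    bob_msg = s ^ r[i]                                # one-time pad of the secret
    alice_msg = {j: r[j] for j in range(n) if D[j] == 1}  # leak r_j iff D_j = 1
    # Charlie reconstructs, which is possible exactly when D[i] == 1:
    if i in alice_msg:
        return bob_msg ^ alice_msg[i]
    return None  # D[i] == 0: r_i never leaves the tape, so s stays hidden

def toy_cds_alice_light(D, i, s):
    """Toy CDS #2: Alice sends 1 bit, Bob sends n-1 bits."""
    n = len(D)
    r = [secrets.randbelow(2) for _ in range(n)]
    pad = 0
    for j in range(n):
        if D[j] == 0:
            pad ^= r[j]
    alice_msg = s ^ pad                               # s masked by XOR of r_j with D_j = 0
    bob_msg = {j: r[j] for j in range(n) if j != i}   # every random bit except r_i
    if D[i] == 1:            # then every index j with D_j = 0 satisfies j != i,
        out = alice_msg      # so Charlie can strip the entire pad
        for j in range(n):
            if D[j] == 0:
                out ^= bob_msg[j]
        return out
    return None  # D[i] == 0: r_i is the one unsent bit, and it masks s

# Sanity check: both schemes reconstruct s exactly when D[i] == 1.
D = [1, 0, 1, 1, 0]
assert toy_cds_bob_light(D, i=2, s=1) == 1
assert toy_cds_alice_light(D, i=3, s=0) == 0
assert toy_cds_bob_light(D, i=1, s=1) is None
```

The √n balancing mentioned above comes from arranging the truth table as a √n × √n matrix and running the (n, 1)-style scheme along one dimension and the (1, n)-style scheme along the other.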
If you want to improve this and get a better CDS, the reconstruction needs to use some nonlinear technique. Here comes our work. First, we present a CDS with cube-root-n communication complexity and quadratic reconstruction. This is also tight, in the sense that cube root n is optimal for quadratic reconstruction. We get another construction with subpolynomial communication complexity, but as a trade-off, the reconstruction is more complicated. So you should get the idea by now: we need nonlinear techniques. But where do they come from? It turns out the right place to look for nonlinear techniques is two-server information-theoretic private information retrieval. In two-server PIR, there are two servers, each holding a copy of a database, and a client holding an index. The client wants to learn the i-th entry of the database without leaking his index. You can see it's somewhat similar to CDS: there are three parties, it's information-theoretic, there is a database and an index, and there is also a square-root-n communication construction. But one big difference is that we have better schemes for PIR. For PIR, we have a cube-root-n communication complexity scheme, and recently, Dvir and Gopi constructed the first two-server PIR with subpolynomial communication complexity. So the natural question is: can we import techniques from PIR to CDS? Beimel, Ishai, Kumaresan, and Kushilevitz did something similar: they imported techniques from four-server PIR to PSM, a model which is similar to CDS. Encouraged by their result, we also import techniques from PIR. We construct our two CDS schemes, with cube-root-n and subpolynomial communication complexity. Actually, we have a quite general transformation from two-server PIR to CDS, as long as the PIR satisfies some mild properties. Let's see. First, a quick definition of PIR.
There are two servers and a client. The client knows the index. The client generates queries and sends one to each server; the servers answer; the client gets the answers and learns the i-th entry. For privacy, each server individually learns nothing about the index. We would now like to gradually transform this picture into a CDS. The first thing I'd like to do is split the client into two pieces. The one on the top right only takes care of generating the queries and sending them to the servers. The one on the bottom left gets the answers from the servers and outputs the i-th entry. The first property we assume of the PIR is that the two queries form an additive secret sharing of a vector u_i. Here u_1, ..., u_n are just public vectors that everyone knows. The servers answer the queries; I don't care how they do it, so let h_D denote how a server answers a query. Our second assumption is that the client recovers the i-th entry by doing something linear. In particular, assuming you are the client: each answer you get is itself a vector, you take the inner product of each answer with u_i, and the difference of the two inner products is D_i. You may say this looks quite specialized, but actually it's not. As long as you have a PIR where the client outputs the i-th entry by doing something linear on the answers he receives, you can transform it into a PIR satisfying this equation. The total communication complexity increases a bit, but only by a constant factor. Assuming we have both properties, we are ready to transform this PIR into a CDS scheme. First, in CDS there is a secret bit, and we need to embed it somewhere. We give the bit to the top client, and the client does the following. If the bit is 1, nothing changes. If the bit is 0, he actually sends the same query to both servers. Everyone else in this picture just pretends nothing has happened and does the same thing as before. So what does the bottom client output now? He actually outputs s times D_i. When the secret bit s equals 1, he just outputs D_i as before.
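To make the two assumed properties concrete, here is a deliberately inefficient toy two-server PIR over GF(2), where I take u_i to be the i-th unit vector and h_D to be the coordinate-wise product with D. These choices are my own toy instantiation, not one of the efficient schemes from the talk, but they satisfy both properties, and the sketch includes the secret-embedding trick of sending the same query to both servers when s = 0.

```python
import secrets

def xor_vec(a, b):
    return [x ^ y for x, y in zip(a, b)]

def inner(a, b):
    """Inner product over GF(2)."""
    out = 0
    for x, y in zip(a, b):
        out ^= x & y
    return out

def answer(D, q):
    """Toy server answer h_D(q): the coordinate-wise product D_j * q_j."""
    return [d & x for d, x in zip(D, q)]

def pir_with_secret(D, i, s):
    n = len(D)
    u_i = [1 if j == i else 0 for j in range(n)]  # public vector for index i
    r = [secrets.randbelow(2) for _ in range(n)]
    # Property 1: the queries additively secret-share s * u_i.
    q1 = r
    q2 = xor_vec(r, [s & x for x in u_i])         # equals q1 when s = 0 (same query!)
    a1, a2 = answer(D, q1), answer(D, q2)
    # Property 2: linear reconstruction <a2, u_i> - <a1, u_i> (over GF(2), - is XOR).
    return inner(a2, u_i) ^ inner(a1, u_i)

D = [0, 1, 1, 0]
assert pir_with_secret(D, i=1, s=1) == 1   # s * D_i = 1: secret disclosed
assert pir_with_secret(D, i=1, s=0) == 0   # same query twice, output forced to 0
assert pir_with_secret(D, i=0, s=1) == 0   # D_i = 0 hides the secret
```

Each query alone is a uniformly random vector, so each server individually learns nothing about i, matching the PIR privacy requirement.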
When the secret bit is 0, he is actually subtracting two identical inner products, so the output is 0. Putting these together, the output is s · D_i. And s · D_i is exactly what we need for CDS: assuming you are Charlie and you get s · D_i, when D_i = 1 it gives you the secret, and when D_i = 0 it hides the secret. OK, now comes my favorite part. The top server is actually Alice, who knows the database. Good. The top client is actually Bob, who knows the index and the secret. Good. There's no need to send the random query to Alice; Alice and Bob can simply read it from their shared private random tape. Both the client and the server at the bottom are Charlie, so let's merge them into Charlie. This is how the picture now looks. Let's check that this is already a CDS scheme. Is it correct? It is: from Alice's message you can compute the blue inner product, from Bob's message you can compute the green inner product, and subtracting them gives s · D_i, which is what Charlie should learn. Is it private? Assume you are the simulator, and you want to simulate Charlie's view without knowing the secret s. Bob's message, fine, that's simulatable: it's just a random string. But you don't know how to simulate Alice's message. Actually, Alice's message alone you can simulate, because it doesn't even depend on s; it's the joint distribution that is hard. But now let's have a daydream. Alice's message is only used to compute this one inner product. What if Alice just sent the inner product? It's just one bit. And actually, when D_i = 0, from the reconstruction equation you can compute exactly what Alice is going to send, so this would be simulatable. But very sadly, Alice cannot send this inner product, because Alice cannot compute it: Alice doesn't know i, so she doesn't know u_i. Bob knows u_i, so Bob can help. The current situation is this: Alice knows a vector, Bob knows a vector, and they want Charlie to learn the inner product without leaking extra information. It's doable, and it's actually very simple.
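This inner-product disclosure step can be sketched on its own. A sketch over GF(2), with my own names: `a` is Alice's vector, `u` is Bob's vector (which Charlie also knows, since he knows the index), and `z` is a pad read from the shared private random tape.

```python
import secrets

def inner(a, b):
    """Inner product over GF(2)."""
    out = 0
    for x, y in zip(a, b):
        out ^= x & y
    return out

def disclose_inner_product(a, u):
    """Alice holds vector a; Bob and Charlie know vector u.
    Charlie learns <a, u> and nothing else about a."""
    n = len(a)
    z = [secrets.randbelow(2) for _ in range(n)]  # shared private randomness
    alice_msg = [x ^ y for x, y in zip(a, z)]     # one-time pad of Alice's vector
    bob_msg = inner(z, u)                         # one extra bit from Bob
    # Charlie: <a + z, u> - <z, u> = <a, u>  (over GF(2), - is XOR)
    return inner(alice_msg, u) ^ bob_msg

a = [1, 0, 1, 1]
u = [1, 1, 0, 1]
assert disclose_inner_product(a, u) == inner(a, u)
```

Alice's padded vector is uniformly random on its own, and Bob's extra bit is determined by Alice's message together with the output, which is the structure the simulator exploits.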
Here is a very simple protocol for this. Alice sends a one-time pad of her vector, and Bob sends the inner product between the pad and his vector. What does Charlie do? Charlie also knows Bob's vector, so Charlie takes the inner product between u_i and what Alice sent, and subtracts what Bob sent; then Charlie gets the desired inner product. So we still have correctness. How about privacy? Alice's message is easy to simulate; it's just a random string. And again, the extra bit sent by Bob can be computed from the equation at the bottom. So we also have privacy. Now you actually have the full transformation. But in this transformation we assumed the two properties listed at the top of the slide, and I want to make some comments here. First, we don't actually need the first property: without it the transformation looks uglier, but it still survives. Second, the known two-server PIR schemes satisfy both properties, so they can be transformed into CDS with the same communication complexity. One more thing: if you remember, Charlie acts as both the server and the client at the bottom, so his complexity is at least as much as the server's in the PIR. In the cube-root-n PIR, the server needs to compute a quadratic function, so when it is transformed into CDS, Charlie needs to do a quadratic evaluation. Similarly, in the subpolynomial PIR, the server does something complicated, of high degree, so when it is transformed into CDS, Charlie also needs to evaluate a complicated function. OK, let's have a concrete one: let's see how we transform the subpolynomial PIR into CDS. What do the vectors u_i look like? The vectors u_i actually come from what is called a matching vector family: a bunch of short vectors such that the inner product of a vector with itself is zero, while the inner product of a vector with any other vector in the family is nonzero.
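For instance, here is a tiny hand-made family mod 6 satisfying exactly that definition. This is just my own illustrative example; the real constructions, such as Grolmusz's, give far larger families of short vectors, which is what the subpolynomial PIR needs.

```python
# A tiny matching-vector family mod 6, assembled by hand for illustration.
M = 6
vectors = [
    (1, 1, 2, 0),
    (2, 1, 1, 0),
    (1, 2, 1, 0),
]

def inner_mod(a, b, m=M):
    return sum(x * y for x, y in zip(a, b)) % m

for i, u in enumerate(vectors):
    assert inner_mod(u, u) == 0          # <u_i, u_i> = 0 mod 6
    for j, v in enumerate(vectors):
        if i != j:
            assert inner_mod(u, v) != 0  # <u_i, u_j> != 0 mod 6
```

Each vector is self-orthogonal mod 6 while no two distinct vectors are orthogonal, which is the property the reconstruction relies on.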
And if you can find better, shorter matching vectors, then you get a better PIR and, in turn, a better CDS. As for the server, what I want you to observe is this: what the server computes in the subpolynomial PIR is also what Charlie needs to evaluate. He needs to compute an inner product in the exponent, which is a very high-degree polynomial when it's not written in the exponent. That concludes the technical part. To summarize, we get two new CDS schemes: one with cube-root-n communication complexity, which is also tight for quadratic reconstruction, and one with subpolynomial communication complexity. The techniques are borrowed from PIR, so if you have a new PIR, it's very likely you can also transform it into a better CDS. And if you know forbidden-graph secret sharing, our result also implies better secret sharing schemes for forbidden graphs. This is how the picture looks now: there used to be an exponential gap here; we have narrowed it, but there's still room left. The big question is: what's the right answer? Thank you very much.