Okay, thank you for the introduction. This talk is about faster LEGO-based secure computation against malicious adversaries. Since you cannot assume anything about what a malicious party will or will not do, our community uses a formal security model: the protocol must behave correctly no matter how the adversary deviates. The standard technique to enforce honest behavior is still cut-and-choose: the garbler generates many copies of the garbled objects, the other party opens a random subset to check them, and if anything goes wrong in a checked copy we abort; if all checks pass, then with high probability the unchecked copies are mostly correct. The most fine-grained version of this idea is batched cut-and-choose at the gate level. In the single-execution setting, the execution of a computation is viewed as traversing all the gates of the circuit in topological order. In the batched cut-and-choose model, the garbler first generates a large number of garbled gates. The two parties then jointly run a coin-tossing protocol to decide which gates should be checked, jointly assign the surviving gates into buckets, use a wire-soldering technique to connect the gates in each bucket, and finally do the evaluation. Many works employ this idea for single-execution computation, and other works extend it to executing a single function many times in parallel. The main contribution of our work is a method under which one single good gate in every bucket suffices to ensure security.
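As a toy illustration of the batched cut-and-choose flow just described, here is a minimal Python sketch. All parameters and the "good"/"bad" marking are invented for the sketch; a real protocol opens and cryptographically verifies garbled gates rather than inspecting flags.

```python
import random

NUM_GATES, CHECK_FRAC, BUCKET = 1000, 0.2, 4   # toy parameters

# The garbler produces many garbled gates; a cheater corrupts a few of them.
gates = ["good"] * NUM_GATES
for i in random.sample(range(NUM_GATES), 10):
    gates[i] = "bad"

# Jointly sampled randomness (coin tossing) picks the check set and buckets.
random.shuffle(gates)
n_check = int(NUM_GATES * CHECK_FRAC)
checked, rest = gates[:n_check], gates[n_check:]

if "bad" in checked:
    print("cheating detected in the check phase -> abort")
else:
    # Surviving gates are soldered into buckets of size BUCKET.  Under the
    # one-good-gate guarantee, the adversary only wins if some bucket is
    # *entirely* corrupted -- far harder than beating a majority rule.
    usable = len(rest) // BUCKET * BUCKET
    buckets = [rest[i:i + BUCKET] for i in range(0, usable, BUCKET)]
    bad_buckets = sum(all(g == "bad" for g in b) for b in buckets)
    print("fully corrupted buckets:", bad_buckets)
```

The point of the sketch is only the shape of the argument: checking catches heavy cheating, and bucketing makes the residual cheating probability negligible.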
Other works require a correct majority in each bucket; ours does not, and this also improves the achievable check rate, which we can push further in some settings. Finally, we have proved a lower bound: for LEGO-style protocols, no matter how large the circuit is, at least two gates per bucket are necessary. Before that, people generally expected the bucket size could keep shrinking asymptotically. In this talk I will focus only on the first result; please read our paper for the rest. Let's first talk about wire soldering. At some point in the protocol, the evaluator, having evaluated some garbled gate in a bucket, obtains a wire label W_A^s, where s is the secret semantic value of this label. He has to translate this label into the corresponding labels of the other gates in the bucket. How can he do this? If everyone were honest, and we set things up so that on every wire the 1-label is the XOR of the 0-label and a global offset Delta, then the garbler, who knows all the labels, can send the differences of the 0-labels, and the evaluator can compute the correct wire labels for the rest of the bucket. But we are facing an active adversary. The challenge is: how can we ensure the correctness of these differences? If we have an XOR-homomorphic hash, here is a first attempt at a solution: hash the 0-labels Y_A and Y_B, and let the garbler send these hashes to the evaluator. Because the hashes are homomorphic, the evaluator can easily compute the hash of the difference and check the claimed difference against it. Does this solve the problem? Actually, no. The evaluator can now also test the wire label W_A^s against the hash of the 0-label, and thereby determine whether s is zero or not — so the evaluator learns the secret value. Our second attempt is therefore to introduce a random pad.
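A minimal sketch of the honest-case soldering above, in Python. The 16-byte labels and the names `Y_A`, `Y_B`, `delta` are my own toy choices; the real protocol additionally authenticates the difference, which is the point of the hashing discussed next.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

KAPPA = 16                                # toy label length in bytes
delta = secrets.token_bytes(KAPPA)        # global offset: label_1 = label_0 ^ delta

# 0-labels of the "same" wire on two gates of one bucket.
Y_A = secrets.token_bytes(KAPPA)
Y_B = secrets.token_bytes(KAPPA)

s = secrets.randbits(1)                   # secret semantic value on the wire
W_A = xor(Y_A, delta) if s else Y_A       # label the evaluator obtained on gate A

d = xor(Y_A, Y_B)                         # soldering difference sent by the garbler

# The evaluator moves its label to gate B without knowing (or leaking) s:
W_B = xor(W_A, d)
assert W_B == (xor(Y_B, delta) if s else Y_B)
```

Because both labels of a wire differ by the same global `delta`, one difference of 0-labels solders both semantic values at once.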
We do not hash the 0-labels directly. Instead we hash padded labels — P_A concatenated with Y_A, and P_B concatenated with Y_B, where P_A and P_B are random pads — and likewise we hash the padded offsets. We let the garbler send all these hashes to the evaluator before the soldering phase. Then, when two gates are to be soldered, the garbler sends the difference of the padded values, and the evaluator verifies this claimed difference against the hashes — the hash is still homomorphic — while the pads prevent him from testing his own wire label against any single hash, so the semantic value stays hidden. This technique can also be used to solder wires that use different offsets. So far the two wires shared the same global offset; now suppose gate A uses offset Delta_A and gate B uses Delta_B, and there is no global offset any more. We let the garbler send the hashes as before, plus two differences — the difference of the 0-labels and the difference of the offsets — to finish the soldering. The evaluator can verify the correctness of these differences and apply the proper one to obtain the wire label on the target wire. Now, you may wonder why we take all this trouble to solder wires with different offsets. The main benefit is that each garbled component can use a local offset instead of a global one. This allows us to fully open a component during checking by revealing its local offset, without that revelation affecting any other component. This is very important if you want to use larger components as the cut-and-choose unit, since checking them may require fully opening a component. Since all these results are based on an XOR-homomorphic hash, we construct an interactive hash — iHash — to realize the idea. In iHash we have a sender, a receiver, and an ideal functionality.
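To make the padded-hash check concrete, here is a toy XOR-homomorphic hash — a random GF(2)-linear map, which I use as a stand-in for the actual iHash construction — verifying a claimed soldering difference:

```python
import secrets

MLEN, HLEN = 64, 16        # toy message / hash sizes in bits

# Any GF(2)-linear map is XOR-homomorphic: H(a ^ b) == H(a) ^ H(b).
rows = [secrets.randbits(MLEN) for _ in range(HLEN)]

def hbits(m: int) -> int:
    return sum(((bin(r & m).count("1") & 1) << i) for i, r in enumerate(rows))

half = MLEN // 2
P_A, Y_A = secrets.randbits(half), secrets.randbits(half)   # random pad, 0-label
P_B, Y_B = secrets.randbits(half), secrets.randbits(half)
m_A = (P_A << half) | Y_A                                   # padded label
m_B = (P_B << half) | Y_B

h_A, h_B = hbits(m_A), hbits(m_B)      # published before the soldering phase

claimed = m_A ^ m_B                    # difference revealed by the garbler

# The evaluator checks the claim homomorphically.  The pads keep Y_A and
# Y_B hidden, so it cannot test its own wire label against h_A or h_B.
assert hbits(claimed) == h_A ^ h_B
```

A tampered difference changes the left-hand side, so it is caught except with probability about 2^-HLEN at these toy sizes.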
The sender gives its message to the iHash functionality, which guarantees that the receiver obtains a hash of the message. The hash can be derived from the message by a publicly known algorithm, and the length of the hash is shorter than the length of the message itself. The hash must have several properties. First, it must be XOR-homomorphic, which is to say the XOR of two hashes is the hash of the XOR of the corresponding messages. Second, it must be binding — otherwise it is no hash at all: two different messages must have different hashes with very high probability. Third, it should be hiding, in a weak sense: since the hash is shorter than the message, the receiver cannot learn the entire message. The receiver may learn some information, but never the whole message. Now let's come to a realization. In our protocol, the message is a symbol vector of length L. We feed this vector into a minimum-distance (linear error-correcting) encoding, extending it to N symbols, and we feed the encoded symbols into an OT protocol, so that the receiver can watch only some positions of this vector. The watched symbols constitute the hash of the message. The intuition of this design: if two messages differ, then — by the distance property of the encoding — their encodings differ in many positions, and with very high probability the receiver will watch at least one of those differences. That is exactly why different messages get different hashes. But the receiver cannot watch too many positions: the encoding is linear, so if it watched too much it could learn the entire message.
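A sketch of the encode-then-watch idea, over single bits for simplicity. The random systematic "code" and all parameters are illustrative choices of mine; the real construction uses a fixed minimum-distance code, and OT hides the watch set from the sender.

```python
import secrets, random

L, N, W = 16, 48, 12   # message bits, codeword bits, watched positions (toy)

# Systematic "code": codeword = message bits followed by random-linear parity.
gens = [secrets.randbits(L) for _ in range(N - L)]

def encode(m: int) -> list:
    bits = [(m >> i) & 1 for i in range(L)]
    bits += [bin(g & m).count("1") & 1 for g in gens]   # parity_j = <g_j, m>
    return bits

watch = random.sample(range(N), W)     # fixed once via OT; hidden from sender

def ihash(m: int) -> tuple:
    cw = encode(m)
    return tuple(cw[j] for j in watch)

m1, m2 = secrets.randbits(L), secrets.randbits(L)

# XOR-homomorphic, position by position:
assert all(a ^ b == c
           for a, b, c in zip(ihash(m1), ihash(m2), ihash(m1 ^ m2)))
```

Binding comes from the code's distance (differing messages differ in many codeword positions, some of which are watched), and hiding comes from watching only W of N positions of a linear encoding.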
And even though the receiver cannot watch too many positions, we can still set the parameters so that the binding error is very small. Now, you may worry that we have to run OT, which could be expensive per message. That is actually not the case: the OT is run only once, for any number of messages. Without loss of generality, let's look at how we hash random messages. The sender first picks one seed per codeword position and expands each seed with a PRG into a stream of symbols, one symbol per message. For each message, this gives a raw vector of N symbols; the sender takes the first L symbols as the (random) message and feeds them into a systematic linear encoding, extending them to length N. The seeds are placed into the OTs, so the receiver watches some of them; just like the sender, the receiver expands the seeds it obtained with the same PRG. To obtain a hash, the receiver keeps the PRG symbols at its watched positions among the first L. For the remaining positions, the sender computes the XOR correction between the last N minus L symbols produced by the PRG and the last N minus L symbols of the encoding, and sends this correction to the receiver. The receiver applies the correction to its PRG symbols in the last N minus L positions, and thereby obtains the hash of the message. So the OT runs only once, at setup; afterwards, the parties only exchange one XOR correction per random message. Now, the main protocol's security. As I have mentioned, our protocol only has to guarantee one correct gate per bucket to ensure security, unlike other protocols that need majority correctness: a bucket is called good as long as it contains at least one correctly generated garbled gate.
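The amortized, OT-once hashing of random messages can be sketched like this (Python; SHA-256 serves as a toy PRG, and the per-position seeds and parameter names are my own):

```python
import hashlib, secrets, random

L, N, W = 16, 48, 12   # toy parameters (bits)

def prg_bit(seed: bytes, t: int) -> int:
    # One pseudorandom bit per (seed, message index) -- toy PRG from SHA-256.
    return hashlib.sha256(seed + t.to_bytes(4, "big")).digest()[0] & 1

gens = [secrets.randbits(L) for _ in range(N - L)]   # systematic parity rows

# One-time setup: a seed per codeword position; OT gives the receiver only
# the seeds at its secret watch positions.
seeds = [secrets.token_bytes(16) for _ in range(N)]
watch = random.sample(range(N), W)
rx_seeds = {j: seeds[j] for j in watch}   # what the receiver learned via OT

def sender_round(t: int):
    raw = [prg_bit(seeds[j], t) for j in range(N)]
    m = sum(b << i for i, b in enumerate(raw[:L]))      # t-th random message
    parity = [bin(g & m).count("1") & 1 for g in gens]
    corr = [raw[L + i] ^ parity[i] for i in range(N - L)]  # sole communication
    codeword = raw[:L] + parity
    return m, corr, codeword

def receiver_hash(t: int, corr):
    h = {}
    for j in watch:
        b = prg_bit(rx_seeds[j], t)
        h[j] = b if j < L else b ^ corr[j - L]   # apply the XOR correction
    return h

m, corr, cw = sender_round(7)
h = receiver_hash(7, corr)
assert all(h[j] == cw[j] for j in watch)   # receiver holds the codeword's hash
```

Each subsequent message costs only the `corr` vector; no further OT is needed.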
Before we proceed, I need the concept of valid labels. The garbler knows which label on a wire encodes zero and which encodes one; the evaluator does not. So how can the evaluator tell whether a label it obtained is valid? Because there are hashes of the labels — of both the 0-labels and the 1-labels — it can compute the hash of the obtained wire label and compare it against them. It does not learn whether it holds a 0-label or a 1-label, but it does get a validity check: a wire label is called valid if its hash matches one of the published hashes. Otherwise the label is invalid: it cannot be an authentic label, so it must come from an incorrectly garbled gate, and it is discarded. Now let's see why one good gate per bucket is enough. Since all garbage labels are discarded, we are left with one of two cases. In the first case, all the valid output labels of the bucket take the same value. At least one of them comes from the correct gate; since they all agree, all of them are correct, and we are fine. In the second case, the valid labels take two different values: some encode zero and some encode one. Because all garbage labels were discarded, each remaining label is either the authentic 0-label or the authentic 1-label of the wire. Now the question is: can we extract the cheater's input from this situation? If we can, we are done — the honest party has its own input, it learns the cheater's input, and so it can simply compute the correct result by itself. So, can we do this? Yes. For the cheater's input, we already have the wire labels W^x on the input wires, and we have their hashes; the only thing we still need is the permutation bits, because the input bit x is determined by the label together with its permutation bit.
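A sketch of the one-good-gate case analysis and the input extraction. Plain SHA-256 digests stand in for the masked homomorphic hashes, and deriving the permutation bits from `delta` is a hypothetical stand-in for the seed-derivation proof the talk describes next; both substitutions are mine.

```python
import hashlib, secrets

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

delta = secrets.token_bytes(16)
Y0 = secrets.token_bytes(16)              # authentic 0-label of the output wire
Y1 = xor(Y0, delta)                       # authentic 1-label
valid = {H(Y0), H(Y1)}                    # published digests of both labels

# Output labels from the bucket's gates: at least one is honest.
outputs = [Y0, secrets.token_bytes(16), Y1]     # gates 2 and 3 were corrupted

survivors = {w for w in outputs if H(w) in valid}   # garbage is discarded

if len(survivors) == 2:
    # Case 2: both authentic labels surfaced, so the evaluator learns delta...
    a, b = survivors
    recovered_delta = xor(a, b)
    assert recovered_delta == delta

    # ...and, because the permutation bits are forced to derive from delta,
    # it can unmask the cheater's input bits (hypothetical derivation):
    def perm_bit(d: bytes, i: int) -> int:
        return hashlib.sha256(d + i.to_bytes(4, "big")).digest()[0] & 1

    x = [secrets.randbits(1) for _ in range(4)]             # cheater's input
    select = [x[i] ^ perm_bit(delta, i) for i in range(4)]  # public select bits
    extracted = [select[i] ^ perm_bit(recovered_delta, i) for i in range(4)]
    assert extracted == x
```

In case 1 (all survivors equal) the common label is correct because one honest gate is guaranteed; case 2 converts the cheating itself into the key that exposes the cheater's input.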
So if we can force the cheater to derive all the permutation bits from the offset Delta, then once the evaluator obtains Delta it obtains the permutation bits, and from the permutation bits it gets x — and we are good. How can we do this? With a proof: we make the garbler use Delta to generate the seeds that it will use in iHash. The garbler also hashes those seeds and sends the hashes to the receiver of iHash directly. The receiver thus holds both the hashes of the seeds, received from the sender, and some of the seeds themselves, obtained through the OTs inside iHash. The seeds on the garbler's side are used to generate the permutation bits, the pads P_A and P_B, and so on. The garbler then proves that the hashes it sent really correspond to those seeds, and that the seeds really derive from Delta. The receiver verifies that the hashes from the sender match the seeds obtained from the OTs of iHash; if everything checks out, it is convinced that the garbler really derived the seeds — and with them the permutation bits — from Delta. As a result, if the evaluator ever obtains Delta, it can recover the permutation bits and, together with the wire labels, extract all of the cheater's inputs. All these procedures are expensive, but they are run only once, during the setup phase, no matter how large the computation is. There is another challenge in practice. As we all know, AES-NI is a very powerful primitive, and it
runs roughly ten times faster than the software alternatives, but it has a limitation: it only works on 128-bit inputs. However, for 128-bit security our wire labels — vectors of symbols — are no shorter than 432 bits, so we cannot feed them into AES-NI directly. Our solution is to compress the labels into shorter vectors that do fit into AES-NI. How can we compress them? A wire label consists of L symbols plus the parity symbols of the public minimum-distance encoding used in iHash; we multiply this vector by a random full-rank matrix T, and the result has length kappa, that is, 128 bits. Surprisingly, by our analysis, although the original label carries more than 128 bits, the entropy of the result of this multiplication is still no smaller than 127.99 bits — you lose almost nothing in the compression. This idea is very similar to the leftover hash lemma, but if you apply the leftover hash lemma directly, in a black-box way, the resulting bound is considerably worse; our dedicated analysis saves something like 80 bits compared with invoking the lemma as-is. Now, our results on some sample applications. The WMK protocol is slightly faster than ours in most settings, but it has its own limitations. One is that WMK is very slow — maybe a hundred times slower than our work — when processing the input wires. The other is that WMK cannot handle reactive computations, such as ORAM programs. Our protocol is almost always faster than the best previous LEGO-style protocols; the exception, I think, is DES, which has a very short input driving a relatively huge computation, which we
cannot offset with our fast input-wire processing; besides, our bandwidth is slightly larger, and there it becomes the bottleneck. For some special applications, such as AES, there is another optimization we can apply in LEGO-style protocols. Wire soldering is much more expensive than garbling itself, so if we can use large components — for example an addition, a multiplication, or a whole cipher — as the cut-and-choose unit, then we can skip the soldering of all the internal wires, which saves a great deal. Checking such components typically requires fully opening them, and our technique of soldering wires with different offsets satisfies exactly this need. Although our protocol is already quite efficient, if we use AES as the cut-and-choose unit instead of single gates, we still gain roughly another two-times speedup and save half of the bandwidth. I should also mention that we have follow-up work based on this one that is far more scalable. Thank you — I'm happy to take questions.

Question: When you have these different offsets, do you lose the free-XOR property, or can you still get something like free XOR through wire soldering?

Answer: Yes, we can still have free XOR. With different offsets, all the wires inside one component share the same offset, so XOR is still free there — free XOR operates at the offset level. The extra cost is only the additional soldering information per wire.

Question: What are the security properties of your hash function?

Answer: It is basically a hash that only works between two parties — it is inherently interactive — and it is not like a normal hash: it does not hide everything about your message, it only hides part of it, so for hiding it works on random messages.

Question: But do you use extension with your hash, with just…
Answer: No, we don't need that; we just use the randomness of the messages themselves — the hiding comes from that randomness, not from any computational assumption.

Any more questions? No? Then let's thank the speaker again. This is the end of the session — thanks to all the speakers for their presentations, and thanks to all of you for being here today. Enjoy your coffee break.