I want to start with a problem, and we will see later how it is cryptographic. In this problem, Alice and Bob are two spaceships that land on adjacent cells of an array filled with random values. Their goal is to stop on the same cell, that is, to meet. What Alice and Bob cannot do is communicate: neither of them can see the other, and neither knows where the other landed. What each of them can do is read cells of the array and move along it. So again, their goal is to stop on the same cell. It can be shown that no matter what strategy Alice and Bob follow, they cannot guarantee to stop on the same cell. So the spaceships problem is: what strategy should Alice and Bob follow in order to maximize their probability to meet, or to synchronize? This is the spaceships problem.

Let us now view a simple solution for the spaceships problem. In this solution, each of the spaceships reads T consecutive cells, starting from its arrival point. After reading all these cells, it goes back and stops on the minimal value it encountered. This is the algorithm they use, and let us now analyze it. Since Alice and Bob start on adjacent cells, the only way they will not find the same minimum is if the minimum lies on one of the cells that only one of them read. There are only two such cells, one at each end. This means that the probability for Alice and Bob to not synchronize is about 2/T. So this is a basic solution for the spaceships problem.

Let us now see the original motivation for the spaceships problem, which is homomorphic secret sharing (HSS). This is a concept introduced by Boyle, Gilboa and Ishai (BGI) two years ago, and it is an alternative to fully homomorphic encryption (FHE). At a very high level, the difference between HSS and FHE is that HSS is more efficient; however, it is less functional. The first problem homomorphic secret sharing solves is the problem of securely outsourcing a heavy computational task. Indeed, suppose we want to compute some public function F on a secret input x.
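Going back to the basic spaceships algorithm for a moment, a minimal simulation can make the 2/T estimate concrete. This is my own sketch, not code from the paper; cell values are i.i.d. uniform floats, so ties occur with probability zero.

```python
import random

def basic_stop(cells, start, T):
    # Read T consecutive cells starting from `start`; go back and stop
    # on the cell holding the minimal value read.
    return min(range(start, start + T), key=lambda i: cells[i])

def simulate(T, trials=20000, seed=0):
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        # T+1 cells cover both windows: Alice reads [0, T), Bob reads [1, T].
        cells = [rng.random() for _ in range(T + 1)]
        fails += basic_stop(cells, 0, T) != basic_stop(cells, 1, T)
    return fails / trials
```

For T = 50 the measured failure rate hovers around 2/(T+1) ≈ 0.039, matching the 2/T estimate: the spaceships miss exactly when the minimum of all T+1 touched cells falls on one of the two cells that only one of them read.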
What HSS does is to split x into two parts, or two shares, and send each of the shares to a different server. Each server then evaluates some function on its share, and this is how the HSS scheme goes. What are the requirements we have from a good HSS scheme? We want privacy: each of the shares should not reveal information about the original secret input x. Another thing we want is efficiency: we do not want the overhead of the evaluation on the servers to be too high compared to computing F directly. And finally, we want correctness: we want to easily recover F(x) from the outputs of the servers.

Indeed, after presenting the concept of HSS, BGI managed to construct a group-based HSS protocol. The security of the protocol relies only on the traditional Decisional Diffie-Hellman (DDH) hardness assumption, and the communication complexity of the protocol is low. However, the construction is only good for a restricted class of functions, mainly branching programs. HSS has many applications, such as private information retrieval and secure multi-party computation with sub-linear communication; the sub-linear communication is the interesting part here.

Let us now present the HSS protocol in more detail. The HSS protocol deals with functions F that can be implemented as a sequence of a few kinds of instructions. The main ones are the middle two: one enables us to add two memory variables, and the other allows us to multiply an arbitrary memory variable by an input. This is where HSS is not generic. What the servers do, via the evaluation functions that run on them, is mainly to simulate the function F instruction by instruction. The main thing about the evaluation functions is that they preserve the following invariant: each memory variable y appearing in the program F is equal to the sum of two corresponding variables that lie on the servers. This means that the servers additively share each memory variable of the program F.
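The invariant can be illustrated with a toy sketch. This is my own illustration, not the actual BGI evaluation functions (which operate under encryption); the modulus is an arbitrary stand-in for the group order. Additions on shared memory variables are done locally, and the shares always sum to the plaintext variable.

```python
import random

MOD = 2**61 - 1   # toy modulus standing in for the group order

def share(y, rng):
    # Split y into two additive shares, one per server.
    r = rng.randrange(MOD)
    return (r, (y - r) % MOD)

def add_shares(a, b):
    # Each server adds its shares of two memory variables locally,
    # with no interaction; the additive invariant is preserved.
    return ((a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD)

def reconstruct(a):
    return (a[0] + a[1]) % MOD

rng = random.Random(0)
s = add_shares(share(12, rng), share(30, rng))
# reconstruct(s) == 42, regardless of the randomness used in sharing
```

This is exactly why the addition instruction is easy for HSS, as discussed next: it is linear, so it commutes with additive sharing.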
So basically, it is pretty easy to implement the addition instruction, because it is linear, so it behaves well with this kind of invariant. The problem is to implement the multiplication instruction. In order to see how to do this, let us go even further into the implementation of the HSS protocol. The setting is that we have some cryptographic group generated by some generator g, and suppose that z is the output of a multiplication instruction. It turns out that the parties can multiplicatively share g^z; that means that each one of the servers holds some group element such that the product of these elements is g^z. The main problem in the HSS protocol is to somehow transform these multiplicative shares into additive ones. This is the share conversion problem, and it has a very simple solution: if you just compute discrete logs of your input, you can transform multiplicative shares into additive ones. However, this is very inefficient, since taking discrete logs is hard.

In order to solve the share conversion problem, BGI introduced the DDLog problem, the distributed discrete log problem. In this problem we search for two algorithms, A and B, that transform a multiplicative difference in the exponents of their inputs into an integer additive difference in their outputs. So there is a difference in the exponent, and they transform it into some difference in the output. Basically, we could do this just by taking logs; instead, we search for a tradeoff between the running times of the algorithms A and B and the probability that the outputs indeed have the right difference. If you work out the details, you can see that you can solve the share conversion problem if you have a solution for the distributed discrete log problem.

So let us now get to our results. Our main result is an optimal DDLog protocol.
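To see how a min-based strategy solves DDLog, here is a toy version in a small multiplicative group mod p. This is my own sketch with made-up parameters; a real instantiation uses a cryptographic group and a PRF instead of a plain hash. Each party walks g-steps from its own input and outputs the step index minimizing a hash of the current group element; inputs that differ by one factor of g then yield outputs differing by exactly 1 whenever both walks pick the same element as their minimum.

```python
import hashlib

P, G = 2**31 - 1, 7   # toy group: nonzero residues modulo a Mersenne prime

def h(x):
    # Hash a group element to a pseudorandom string (stand-in for a PRF).
    return hashlib.sha256(str(x).encode()).digest()

def ddlog(u, T):
    # Walk u, u*G, u*G^2, ..., u*G^(T-1) and output the index whose
    # hashed group element is minimal.
    cur, best_i, best = u, 0, None
    for i in range(T):
        v = h(cur)
        if best is None or v < best:
            best, best_i = v, i
        cur = (cur * G) % P
    return best_i

# Alice holds g^x, Bob holds g^(x+1); on success, Alice's index is one
# larger, since she needs one extra step to reach the same element.
T = 64
hits = sum(ddlog(pow(G, x, P), T) - ddlog(pow(G, x + 1, P), T) == 1
           for x in range(200))
```

Over these 200 instances, the outputs differ by exactly 1 in all but a small fraction of cases, in line with the 2/T failure estimate of the basic algorithm.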
We devise a DDLog protocol with an error probability of 1/T², which improves upon the BGI DDLog protocol that obtains an error probability of 1/T. This error probability of 1/T is basically the error of the basic spaceships algorithm we saw earlier. So this is the main result. The second result is the optimality part: the protocol is optimal in groups satisfying the discrete-log-in-a-short-interval hardness assumption, which is assumed to hold in all standard cryptographic groups. Lastly, we can apply the DDLog protocol to improve the efficiency of the HSS protocol. Basically, we improved the running time of the evaluation functions of the servers from s² to s^{3/2}, where s is the number of multiplications in the program F we want to compute.

So how does the DDLog protocol improve HSS? For every multiplication, HSS needs to solve some DDLog problem. If we look at all the DDLog problems that need to be solved in a single program and use our DDLog protocol, the total error probability of the DDLog algorithms is s/T². By taking the running time T of the DDLog protocol to be of the order of the square root of s, the overall error probability we get is a constant. This means that the total evaluation time of the servers is s times T, which is s^{3/2}. So this is how we improved the HSS protocol.

Now let us see how a solution for the spaceships problem implies a solution for the DDLog problem. Basically, to solve the DDLog problem, you just make A and B arrive on an array full of powers of g, and start them on adjacent cells. Then we solve the spaceships problem with our black box that solves the spaceships problem, and after Alice and Bob have synchronized on some position, each one of them outputs the distance between its starting point and its stopping point. Because Alice and Bob started on adjacent cells, the difference between their outputs is going to be one, as needed.
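The parameter accounting in the HSS application can be written out; here c is an unspecified constant of my own notation, not the paper's:

```latex
\Pr[\text{some DDLog call errs}]
  \;\le\; s \cdot \frac{1}{T^{2}}
  \;\stackrel{T \,=\, c\sqrt{s}}{=}\; \frac{1}{c^{2}} \;=\; O(1),
\qquad
\text{total evaluation time} \;=\; s \cdot T \;=\; c\, s^{3/2}.
```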
Except, of course, with the probability that the spaceships problem was not solved. We also need to apply a pseudorandom function over this array, because group elements are not really random.

Okay, so now we get to the spaceships problem again, and we want to improve the basic algorithm. So let us revisit the basic algorithm. Suppose, as a thought experiment, that we let the basic algorithm use only half the number of steps we had before. How is this going to affect the error probability of the algorithm? Basically, because the error probability of the basic algorithm is of the order of 1/T, using only half the number of steps increases the error probability by only a constant factor. But now we have many steps left, so the main question is: how should we invest the remaining steps in order to reduce the overall error probability?

The answer to this question is a two-stage algorithm. The first stage is just the basic algorithm with half the number of steps, as we just saw. The second stage is basically the same: each spaceship reads some values from the array, and then goes back and stops on the minimal value it encountered; in this sense it is the same. However, this time the values it queries from the array lie on some random walk. This random walk starts from the stopping point of the former stage, and each step of each spaceship depends only on the most recent value read from the array. This is how the two-stage algorithm goes.

Let us quickly analyze this algorithm. The main point of the two-stage algorithm is that if the parties managed to synchronize in the first stage, they will remain synchronized in the second stage, because the size of each jump depends only on the current value they read. And do not forget that the error probability of the first stage is already small, already about 1/T.
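A sketch of the two-stage algorithm follows. This is my own simplified simulation with arbitrary constants (the paper tunes these parameters carefully): stage one is the basic algorithm on T/2 cells, and stage two is a walk whose jump size is a function of the cell value just read, so two spaceships that ever stand on the same cell coincide from then on.

```python
import random

def two_stage_stop(cells, start, T):
    half = T // 2
    # Stage 1: the basic algorithm with half the steps.
    pos = min(range(start, start + half), key=lambda i: cells[i])
    # Stage 2: a walk whose jump sizes depend only on the value just
    # read, so spaceships on the same cell stay together forever.
    bound = max(1, int(half ** 0.5))
    best = pos
    for _ in range(half):
        pos += 1 + hash(cells[pos]) % bound   # value-dependent jump
        if cells[pos] < cells[best]:
            best = pos
    return best

def simulate(T, trials=5000, seed=1):
    rng = random.Random(seed)
    half = T // 2
    size = 2 + half * (max(1, int(half ** 0.5)) + 2)  # room for both walks
    fails = 0
    for _ in range(trials):
        cells = [rng.random() for _ in range(size)]
        fails += two_stage_stop(cells, 0, T) != two_stage_stop(cells, 1, T)
    return fails / trials
```

By construction, a failure can only happen when stage one fails, and even then the value-dependent walks may still coalesce, which is exactly the effect the next part of the analysis quantifies.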
Now we want to understand the error probability of the second stage, given that the first stage failed. The spaceships begin with a distance of at most T, and they make T/2 steps of size about √T. The idea is that the random walks of the second stage are going to meet within about √T steps. Why is that? Because in order for A to pass the location of B, it needs about √T steps: each step size is about √T, and the initial distance between A and B is at most T. After reaching the region of B, a standard birthday-paradox argument says that the random walks of the parties are going to collide within about √T further steps. So overall, the parties share almost all of their steps, all except about √T of them. Because they share almost all the steps they take, with very high probability they are going to see the same minimum, and so they are going to stop on the same location. Overall, the total failure probability of the two-stage algorithm is about T^{-3/2}, which is better than the basic algorithm, of course.

So we can ask the question again. Suppose we have some black box that implements the two-stage algorithm, and we give it only half the number of steps we have. Again, this is only going to increase our error probability by a constant factor, and now we again have many steps left. What can we do with the remaining steps? The answer is that we perform a third stage, which has a larger step size. Continuing this line of thought, we use many stages with increasing step sizes. There are many parameters on which the algorithm depends, for example the number of steps in each stage and the step sizes in each of the stages. If we carefully choose the parameters and use many stages instead of two or three, we can get an error probability of 1/T². The analysis is quite complex; we proved this formally.
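In these terms, the two-stage failure probability factors as follows (constants suppressed; the second factor is my paraphrase of the argument above, with roughly √T unshared cells out of the roughly T cells read in stage two):

```latex
\Pr[\text{fail}]
  = \Pr[\text{stage 1 fails}]\cdot\Pr[\text{stage 2 fails}\mid\text{stage 1 fails}]
  \approx \frac{1}{T}\cdot\frac{\sqrt{T}}{T}
  = T^{-3/2}.
```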
And we also validated this result with extensive simulations; we proved it, but we also validated it.

Okay, let us now summarize everything we have. We have some algorithm that enables A and B to synchronize except with probability 1/T². But this used the fact that the distance between Alice and Bob is 1 at the start. So a natural question is: what happens if the spaceships start with some unknown distance M? It turns out that if they use exactly the same algorithm, they can still meet except for a small probability, in fact the optimal probability, which is M/T². Why is that? To see this, introduce some fictitious parties, placed between Alice and Bob, that follow exactly the same algorithm Alice and Bob follow. To analyze the probability that Alice and Bob do not synchronize, notice that in order for Alice and Bob to not synchronize, there must be two consecutive parties that did not manage to synchronize; if every party managed to synchronize with its successor, then Alice and Bob synchronize as well. So the probability that Alice and Bob do not synchronize is union-bounded by the probability that A and C do not synchronize, plus the probability that C and D do not synchronize, plus the probability that D and E do not synchronize, and so on. This gives that the parties synchronize except for this small probability.

Another interesting thing is that our algorithm is optimal. Basically, the discrete log in a short interval problem is the following: suppose we have some cryptographic group G generated by a generator g, and let R be some small interval. Given some input g^x, where x lies in the small interval R, you need to find x.
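If the fictitious parties are labeled C, D, E and so on, one per unit of the distance M, the union bound above reads:

```latex
\Pr[\text{Alice and Bob do not meet}]
  \;\le\; \sum_{i=1}^{M} \Pr[\text{consecutive pair } i \text{ does not meet}]
  \;\le\; M \cdot \frac{1}{T^{2}}
  \;=\; \frac{M}{T^{2}}.
```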
This is the discrete log in a short interval problem, and the discrete-log-in-a-short-interval (DLSI) hardness assumption says, roughly, that this problem cannot be solved much faster than in time about the square root of the interval size. If you think about it a little, you can see that the DDLog protocol solves the discrete log in a short interval problem in the optimal time. The DLSI hardness assumption is assumed to hold, as far as we know, in all standard families of cryptographic groups, so our DDLog protocol is optimal.

Let us summarize everything we had. We presented the distributed discrete log problem, and then we presented an optimal algorithm solving the DDLog problem, improving the error probability of the DDLog algorithm from 1/T to 1/T², which is a quadratic improvement. Then we added an application to homomorphic secret sharing, optimizing the running time of the evaluation from s² to s^{3/2}, where s is the size of the program we want to homomorphically compute. In the paper we have some other interesting stuff. Among other things, we have a formal analysis of the protocol; we use many kinds of martingales in order to prove it. We also have a matching lower bound assuming the DLSI hardness assumption, and we also have a lower bound in the generic group model. Interestingly, we also prove that the basic algorithm is, in some sense, optimal: it is an optimal non-adaptive protocol in the generic group model. This basically means that if you make some small variations to the basic algorithm, you cannot do better than the basic algorithm. Interestingly, the proof uses Fourier analysis. Thank you.

All right, so I think we have time for maybe one question, because we're running close. All right, let's thank the speaker again.