Welcome to the presentation of our paper, Direct Product Hardness Amplification. We want to start off by introducing the basic objects we are considering. We are interested in probabilistic games: interactive objects where a winner W, which may also be probabilistic, interacts with the game, possibly over multiple rounds, exchanging messages. The game has a winning condition, here denoted by B, which may be thought of as an additional monotone output bit: at some point the game declares that it was won or lost. We will not talk about one specific concrete kind of game in this talk, but we will assume a certain structure on the games, so for the purpose of this talk it may be useful to think of games such as the one-way function inversion game, the hash function collision-finding game, or the MAC forgery or signature forgery game for a given MAC or signature scheme. A natural thing one might want to do with such games is to compose several of them in parallel to obtain a new game, say the AND of two games G and H, where a winner W interacts with both G and H in parallel, and this new game is won if and only if both G and H have been won. This is interesting because it leads to natural constructions that amplify the security of certain cryptographic schemes. For example, one can take multiple weak one-way functions and combine them to obtain a new strong one-way function. One would then only need to assume, or prove, a weaker kind of security for the building blocks and could use such a construction to amplify the security. The hope is that it is much more difficult to win the AND of the games than to win just one of the individual games by itself. Intuitively, one would hope that the hardness of the AND of the games is simply the product of the hardnesses of the individual games, because the games are independent.
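To make this intuition concrete, here is a minimal Python sketch (our own toy model, not from the paper): each game is reduced to a fixed per-instance winning probability, and the winner plays the two games independently, in which case the winning probability of the AND is simply the product.

```python
import random

# Toy model (an assumption for illustration): each game is summarized by a
# fixed winning probability, and the winner plays G and H independently.
def estimate_and_probability(p_g, p_h, trials=200_000, seed=1):
    """Estimate Pr[both G and H are won] by sampling independent plays."""
    rng = random.Random(seed)
    wins = sum((rng.random() < p_g) and (rng.random() < p_h)
               for _ in range(trials))
    return wins / trials

# With independent plays the hardness multiplies: roughly p_g * p_h.
estimate = estimate_and_probability(0.2, 0.3)
```

With p_g = 0.2 and p_h = 0.3 the estimate comes out close to 0.06; the whole difficulty in the computational setting is that a general winner need not play the two games independently.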
One would hope that the hardness essentially just multiplies. In an information-theoretic setting this is not trivial, but also not too difficult, to prove: it holds in a perfect sense, with equality, and essentially for any games G and H. In this work we consider the more difficult computational setting, where one basically needs to argue via a reduction: reduce the winner W to a winner for just G and for just H, and then the statement that the hardness multiplies is only true in an approximate sense, no longer with equality. Before we look at this reduction in more detail, we state our main results. First, we state and prove an abstract hardness amplification theorem that is simpler, more general, and stronger than previous theorems of the same or similar kind, and our focus is two-fold. First, we try to provide close-to-optimal concrete bounds, as opposed to asymptotic ones, while the asymptotic results still follow as simple corollaries of our concrete bounds. Second, in the spirit of abstraction, we try to distill out the essence of such hardness amplification results and provide a theorem that is not only simpler but also general enough to allow for reusability. In a second part we then show how to apply, and essentially instantiate, the theorem for non-trivial interactive cryptographic games such as the MAC forgery or signature forgery game. Let us look more closely at the reduction we are considering. We consider any winner W for the AND of the two games G and H, and we want to turn this winner into a winner for only G and for only H; essentially, we are considering two reductions. The only straightforward way to do this in a black-box fashion is to absorb, that is, essentially simulate, an instance of H towards this winner W to obtain a new winner, call it W_H, just for G.
So this is a winner that just plays G, and likewise we can absorb G into W to obtain a winner W_G just for H. Now we would like to argue that it is easier to win just G, or just H, than to win the AND of G and H. So we would like to argue that the winning probability for just G, or for just H, must be higher than the winning probability for the AND of the two games. Unfortunately, if you think about it, it is not hard to see that the winner can be such that always both games G and H are won, never just one of them. Either both or neither of them are won, and if the winner is of this type, then the winning probabilities of W_H and W_G are just the same as the winning probability for the AND on the left. This is why it is necessary to boost the winning probabilities of W_H and W_G, and this is where one needs a certain structure on the game: one needs to assume that the game has sufficient structure to allow repetition, so that this winning probability can be boosted. Typically, and this is really the standard way, this is done by repeating a winner Q times for some number Q. The game needs to allow us to attempt multiple times, and many games have this kind of structure. For example, in the one-way function inversion game, where we are given an image for which we need to find a preimage, we can of course simply try multiple times; once at least one attempt succeeds in inverting the image, we have found a correct preimage and have won the game. So let us look at an example. Consider a winner W that, on the left, has a winning probability of delta, say 1%, on the AND of G and H, and now consider the derived winners boosted by repeating them Q times, so that we win if we win at least one of the Q attempts made here and here. How often do we need to repeat?
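The repetition idea can be sketched with the standard boosting function, which reappears later in the talk (the helper names below are ours; the naive calculation assumes a fixed per-attempt success probability, which is not the situation in the actual reduction, so it does not reproduce the talk's numbers).

```python
import math

def psi(x, q):
    """Standard boosting function: probability of succeeding in at least
    one of q independent attempts, each with success probability x."""
    return 1.0 - (1.0 - x) ** q

def repetitions_needed(x, target):
    """Smallest q with psi(x, q) >= target, i.e. (1 - x)^q <= 1 - target.

    Caveat: this assumes every attempt succeeds with the same fixed
    probability x; in the reduction the per-attempt probability depends
    on the sampled instance g, which is why the real analysis is harder.
    """
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - x))
```

For instance, with a fixed 10% per-attempt probability, 7 repetitions already push the overall success probability above 50%.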
How large does Q need to be to obtain a winning probability close to the square root of delta? Remember that we want the probabilities to multiply, so we want this to be basically the square root of delta, but we only need to get close to it: say we are content not with 10% but with 9.9% for G or for H. By the classical analysis one can show that if one repeats around 76,000 times, then in at least one of the two cases we actually obtain a winning probability of at least 9.9%, assuming of course that the game has the structure that allows us to repeat. The question is: is this really necessary, and what is the optimal number Q? This is basically a tightness gap in the reduction, and it should be as small as possible; ideally this Q should be very, very small. As a consequence of our new result, we can plug in the numbers and immediately get that for these numbers only 90 repetitions are sufficient. And this is quite close to optimal: one can easily see that at least 44 repetitions are really necessary here. But what exactly are we analyzing? Let us lay out the setting. We consider the games G and H to be probability distributions over some finite sets, calligraphic G and calligraphic H respectively. These are the sets of deterministic instances of the games. It does not really matter what kind of objects the games themselves are; we just assume them to be probability distributions over deterministic game instances. Then we fix any winner W for the AND of G and H. Again, it does not really matter what kind of object this winner W is. The only thing that matters is that it induces a function mu that tells us, for any pair (g, h) of deterministic instances, the probability that W wins both g and h.
Then, of course, we are interested in the probability that W wins the actual games, the AND of the two games G and H, and this is just the expectation of this function mu over G and H, where G and H are independent. Now we look at the reduction shown before, where we consider the winner W_H repeated Q times, so that it is successful if it wins at least one of the Q attempts against G. It is easy to see that its winning probability is at least this quantity: we play G, so we take the expectation over G, and on each attempt we have at least this success probability, simulating a fresh instance of H towards the winner. We repeat this Q times independently, and this is then the probability that we win at least once, or equivalently that we do not lose Q times; this is the function psi here. We can do the same for the reduction towards H, where these expectations are simply swapped: we play H, so the outer expectation is over H. The goal is now to analyze the relationship between the first expectation, the probability that W wins the AND of G and H, and these two expectations: the probability that the winner derived from the reduction wins G, and the probability that the other derived winner wins H. The idea is that we analyze this for any distributions G and H and any function mu. The main difficulty is that we do not really know what this function mu, this winning structure, looks like: which pairs of instances of the AND are won, and with what probability. The analysis has to work for the worst-case mu that we could get. For example, mu is essentially a two-dimensional map. It could look like this, right?
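The three quantities just described are easy to compute exactly on a small toy example. The following sketch (our own toy instantiation, not from the paper: uniform distributions over tiny instance sets and an arbitrary 0/1 winning structure mu) evaluates the AND probability and the two boosted expectations.

```python
# Toy instantiation (an assumption for illustration): uniform distributions
# over small finite instance sets, and a winning structure mu over pairs.
G = [0, 1, 2]
H = [0, 1, 2]
mu = {(g, h): 1.0 if (g + h) % 3 == 0 else 0.0 for g in G for h in H}

def psi(x, q):
    # standard boosting: win at least one of q independent attempts
    return 1.0 - (1.0 - x) ** q

Q = 5

# Pr[W wins the AND of G and H] = E_{g,h}[mu(g, h)]
p_and = sum(mu[g, h] for g in G for h in H) / (len(G) * len(H))

# Lower bound on the boosted W_H winning G: E_g[psi(E_h[mu(g, h)], Q)]
p_win_g = sum(psi(sum(mu[g, h] for h in H) / len(H), Q) for g in G) / len(G)

# ... and symmetrically for the boosted W_G winning H
p_win_h = sum(psi(sum(mu[g, h] for g in G) / len(G), Q) for h in H) / len(H)
```

Here p_and is 1/3, while both boosted probabilities equal 1 - (2/3)^5, roughly 0.87: the repetition pushes the reduction's winning probability well above the AND probability.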
Assume that our winner is deterministic; it does not have to be, so we cannot just assume this in general, but for simplicity let us consider a winner W that is deterministic and wins exactly the instance pairs in a Cartesian product of two subsets of calligraphic G and calligraphic H, denoted here by S_G and S_H. If a pair of instances falls into this rectangle, then both instances g and h, and hence the AND, are won: the winning probability inside would be one, and zero outside. This very simple case is somehow straightforward, and the picture would also look like this if W played the games G and H independently. In general, however, the analysis has to take into account any kind of shape, not only the rectangle we have here, and this is the difficulty one needs to tackle. Okay, so let us look at how we state the actual amplification theorem. As we have seen, we assume G and H to be any probability distributions over some finite sets, calligraphic G and calligraphic H, and we assume some function mu, the winning structure, essentially describing on which pairs of deterministic instances the AND is won; but it is just any function mu. Then we assume some monotonically increasing boosting function; for simplicity, we let this be any such function. We assume that the quantity corresponding to this expectation, which is essentially the winning probability on G of the winner obtained through the reduction, is bounded, somewhat small, and likewise that the winning probability on H is somewhat small; the expressions are chosen so that we can state the result in a convenient way. Then we can apply the classical analysis and see that this expectation, which corresponds to the winning probability of the AND of the two games, is even smaller than those of the individual games, so you actually do get amplification.
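For the rectangle case just described, the quantities collapse into closed form; a small sketch (our own toy numbers, for illustration only) confirms this.

```python
# Toy check (illustration only): a deterministic winner whose winning
# pairs form a rectangle S_G x S_H, as in the simple case above.
G, H = range(10), range(10)
S_G, S_H = set(range(3)), set(range(5))   # P(S_G) = 0.3, P(S_H) = 0.5
mu = {(g, h): 1.0 if (g in S_G and h in S_H) else 0.0 for g in G for h in H}

def psi(x, q):
    return 1.0 - (1.0 - x) ** q

Q = 4

# The AND probability is the rectangle's mass: P(S_G) * P(S_H) = 0.15.
p_and = sum(mu.values()) / (len(G) * len(H))

# E_g[psi(E_h[mu(g, h)], Q)]: inside S_G the inner expectation is P(S_H),
# outside it is 0 (and psi(0, Q) = 0), so the whole expression collapses
# to P(S_G) * psi(P(S_H), Q).
p_win_g = sum(psi(sum(mu[g, h] for h in H) / len(H), Q) for g in G) / len(G)
```

Here p_and = 0.15 while p_win_g = 0.3 * (1 - 0.5**4) = 0.28125; a general mu has no such product structure, which is exactly what makes the worst-case analysis hard.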
But now we notice that once we have removed everything that was unnecessary and state the result at this simple, general, abstract level, it becomes easy to check whether it is really optimal, and actually to show something stronger. We observe that if we moreover assume that the boosting function psi is not only monotonically increasing but also concave, which it typically is, because this is the standard boosting function one would use for these kinds of results, then one can show a stronger amplification: this expectation is even smaller than the usual analysis gives. This is what yields the better numbers we have seen in the numerical example. Finally, we state some more results that are in the paper but that we have no time to discuss in this short talk. We have talked only about the AND of two games G and H, and of course we have an n-fold variant, which is about amplification for n games. We also have corollaries not just for an arbitrary function psi, but for the typical boosting function psi(x) = 1 - (1 - x)^Q, because this is the typical case, and we give simpler expressions for this particular function. Then we have a tightness discussion, where we show that the kind of amplification results we obtain are close to optimal, but still not perfect. And we have a conjecture for a perfectly optimal amplification result, which is quite interesting and which we would be excited to prove at some point; we think it is doable.
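The concavity assumption on psi can be sanity-checked numerically. This small sketch (illustration only) verifies that the standard boosting function 1 - (1 - x)^Q is concave on [0, 1] by checking that every chord midpoint lies below the curve.

```python
def psi(x, q):
    # standard boosting function: 1 - (1 - x)^q
    return 1.0 - (1.0 - x) ** q

Q = 10
xs = [i / 100 for i in range(101)]

# Midpoint concavity check: psi((a + b) / 2) >= (psi(a) + psi(b)) / 2,
# with a tiny epsilon to absorb floating-point rounding.
concave = all(
    psi((a + b) / 2, Q) >= (psi(a, Q) + psi(b, Q)) / 2 - 1e-12
    for a, b in zip(xs, xs[1:])
)
```

This matches the analytic fact that (1 - x)^Q is convex for Q >= 1, so its complement is concave; concavity is the extra property the stronger bound exploits.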
Loosely speaking, the conjecture says that, at least for this particular boosting function 1 - (1 - x)^Q, the worst case that can happen in terms of amplification is that either the winning or the losing probability of the considered winner is maximally concentrated. Maximally concentrated essentially means that there is one rectangle where the probability of winning, or of losing, is one everywhere inside the rectangle and zero outside of it. Finally, we show some applications to games such as the MAC forgery or signature forgery game, where the key message we try to convey is that we can now basically inherit the amplification from the amplification theorem we have proved. The main analysis for these games is concerned with explaining how the boosting actually works, because it does not work in the perfectly natural way it does for, say, one-way functions or hash function collision games. If you are interested in any of this, you can check out our paper; it is on ePrint. Thank you.