Hello, thanks for the nice introduction. Now we have a completely different talk. It's about primitive design, so there are no proofs in it, to loosen things up. This talk is about a stream cipher called Rasta, which has a low ANDdepth and few ANDs per bit at the same time. So first of all, a bit of motivation: why do we want to design such a specific construction? Over the past years, we have seen several applications in fully homomorphic encryption, multi-party computation, and even post-quantum secure signature schemes that can profit from dedicated symmetric primitives which minimize the number of multiplications in one way or another. Clearly, this has been addressed by designers who designed new primitives, for instance FLIP, Kreyvium, LowMC, or MiMC, where the first three of those primitives focus on multiplications in GF(2), which are simply AND gates. From a research perspective, having a new optimization goal is quite interesting, because it enables new design strategies for symmetric primitives and also requires new cryptanalysis techniques to get insight into the security of the resulting constructions. If we just consider the number of ANDs for a moment, then one can minimize several numbers within a cipher. For instance, a designer can try to minimize the total number of ANDs used per primitive call; what's also possible is to minimize the number of ANDs per encrypted bit; and also the ANDdepth, which is the number of cascaded AND gates the key has gone through. And if we look at existing designs and plot on the x-axis the number of ANDs per bit required, and on the y-axis the ANDdepth, we see that existing designs either have a high ANDdepth and few ANDs per bit, or a high number of ANDs per bit and a low ANDdepth. And we were asking ourselves the questions: why is this the case, and can we design a primitive which minimizes both at the same time?
And hopefully we could give a positive answer with Rasta, which, as you can see, minimizes the ANDdepth and the number of ANDs per bit at the same time. So clearly, during the design we faced some challenges. The first challenge was how to design such a thing: how to design a cipher which minimizes ANDdepth and ANDs per bit at the same time, because to the best of our knowledge such a construction did not exist before. So it's likely that we have to deviate from classical design strategies, and especially the low ANDdepth seemed challenging to us. And once we come up with a design, we then face the problem of how to analyze the outcome and how to argue about its security. But first of all: in classical designs, why do we have a high ANDdepth at all? Let us consider some symmetric cryptographic primitive, for instance a block cipher, which takes some input, processes it under a secret key, and produces output. For classical applications, such a block cipher or symmetric primitive looks like this: it's a static set of functions where only the inputs change. Such a construction has the benefit that it can be efficiently implemented in hardware and in software. But due to the fact that this is a static system of equations, we rather need a high ANDdepth and a high degree, because having a low degree would mean that we do not have protection against certain classes of attacks, like higher-order differential attacks and cube-like attacks. For instance, consider an attacker who has control over the input of a symmetric primitive with a low-degree output function. Then the attacker could do the following: set some input bits to a constant value while iterating over all possible values of some other bits. And then, as you can see here, if he sums up the output bits, he gets a linear equation in the key bits, which is quite bad. So how did we overcome this?
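The summation trick just described can be illustrated on a toy example. The Boolean function below is made up for illustration (it is not Rasta's output function): summing it over a small cube of chosen inputs cancels every monomial not divisible by the cube variables and leaves a key bit.

```python
from itertools import product

def output_bit(x, k):
    # hypothetical degree-3 keyed Boolean function (not from the paper):
    # f = x0*x1*k0 + x0*k1 + x2 + k1*k2 over GF(2)
    return (x[0] & x[1] & k[0]) ^ (x[0] & k[1]) ^ x[2] ^ (k[1] & k[2])

def cube_sum(k, x2=0):
    # sum f over all four values of (x0, x1) while x2 stays constant: every
    # monomial not divisible by x0*x1 occurs an even number of times and
    # cancels, so the sum equals the coefficient of x0*x1, the key bit k0
    s = 0
    for x0, x1 in product((0, 1), repeat=2):
        s ^= output_bit((x0, x1, x2), k)
    return s
```

With a static low-degree function, the attacker can collect such linear relations on the key for free, which is exactly why classical designs need a high degree.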
Our idea was that the problem lies in the fact that a low-degree function is evaluated several times. But what if it is evaluated only once? So we end up with Rasta, as you can see here, and the idea is to define a family of different low-degree permutations which are each evaluated only once on the key, with a feed-forward, to produce in this way the keystream which is used to encrypt the plaintext. The choice of the permutation relies solely on public parameters: the public nonce, which has to be sent together with the ciphertext, and a block counter for the different blocks. Clearly, one way to do this would be to create and design many different permutations, billions of permutations, but this is a time-consuming task. So how did we design the permutations of the cipher so that the equation system changes every time? What we are actually doing is to rely on the classical layered structure we see in symmetric primitives, which consists of alternating applications of affine layers and nonlinear layers. But in contrast to classical designs, for Rasta the affine layers are nonce-dependent, so they change for every different block and every different nonce. To be specific, in Rasta we seed an extendable-output function (XOF) with the public nonce and the block counter, and out of the stream which comes out of the XOF we create invertible matrices and round constants to get our affine layers. In contrast to the always-changing affine layers, the nonlinear function is static: it is simply the chi function applied over the whole block width. And this is also the reason why Rasta is only defined for odd block sizes, because only for an odd block size is this chi layer invertible.
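This structure can be sketched in a few lines. The sketch below is a simplified toy of my own, assuming SHAKE-128 as a stand-in XOF and using only XOF-derived round constants for the affine layers; the real design additionally multiplies the state by an XOF-derived invertible matrix in each affine layer, which is omitted here for brevity.

```python
import hashlib

def chi(x):
    # the chi nonlinear layer over the whole block:
    # y[i] = x[i] XOR (NOT x[i+1] AND x[i+2]); invertible only for odd length
    n = len(x)
    return [x[i] ^ ((x[(i + 1) % n] ^ 1) & x[(i + 2) % n]) for i in range(n)]

def xof_bits(nonce, counter, num_bits):
    # seed an XOF (SHAKE-128 as a stand-in) with the nonce and block counter
    shake = hashlib.shake_128(nonce + counter.to_bytes(8, "big"))
    raw = shake.digest((num_bits + 7) // 8)
    return [(raw[i // 8] >> (i % 8)) & 1 for i in range(num_bits)]

def keystream_block(key_bits, nonce, counter, rounds=2):
    # toy Rasta-style block: alternate nonce/counter-derived affine layers
    # with the fixed chi layer, then add the key as a feed-forward
    n = len(key_bits)
    state = key_bits[:]
    consts = xof_bits(nonce, counter, (rounds + 1) * n)
    for r in range(rounds):
        state = [s ^ c for s, c in zip(state, consts[r * n:(r + 1) * n])]  # affine layer
        state = chi(state)                                                 # nonlinear layer
    state = [s ^ c for s, c in zip(state, consts[rounds * n:])]            # final affine layer
    return [s ^ k for s, k in zip(state, key_bits)]                        # key feed-forward
```

Because the nonce and counter are public, anyone can regenerate the affine layers, but each (nonce, counter) pair yields a fresh low-degree equation system in the key bits.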
So the high-level idea of making relevant computations of the cipher independent of the key was first used in FLIP, but there it was used to permute the input key to a static set of functions, and we take this approach further by making more parts of the cipher change. What's also important is that since the extendable-output function does not use any key material, it does not influence the relevant AND metrics. So what's the idea behind this design? As mentioned before, the idea is that we have changing affine layers to get a changing system of equations, which provides protection against differential attacks, impossible differential attacks, cube and higher-order differential attacks, and also integral attacks. However, some attack vectors remain, especially attacks which exploit potentially existing good linear approximations, or even attacks targeting directly the polynomial system of equations we get. And for this reason we also choose a block size and a key size for Rasta which are bigger than the security level. So what we end up with is a problem that is parameterizable in the block size and the number of rounds, and the question we face is how to choose them. And actually we have two answers to this problem. The first one is a parameter set which we call Rasta, like the design strategy, which is based on bounds and security arguments; this is our conservative approach, and we will see later why we call it conservative. The second one is Agrasta, which is the aggressive parameter set for the Rasta design strategy, where we base our parameters solely on the best known attacks, and it's kind of a challenge to cryptanalysts. So how do we get to the parameters for Rasta? First of all, we do not want Rasta to have good linear approximations, so we have to bound the probability that good approximations exist. And we can do this in the following way.
We can partition Rasta into sandwiches of matrices and nonlinear layers. Then we know that a good linear characteristic has few active bits at the input and the output of a matrix, and the probability that such a good linear characteristic over one round exists depends solely on the matrix; it gets more improbable the larger the matrix gets. So we can do this for every two-round sandwich, and for the sake of simplicity we just assume that the middle matrix can connect the best linear characteristics we get here and here. If we put this into a figure, you can see that with increasing block size, at a fixed number of rounds, the existence of exploitable linear approximations gets more and more improbable. So there are two things you can do to protect the cipher against the existence of good linear approximations: either increase the block size at a fixed number of rounds, or increase the number of rounds. Another attack vector is an attack which directly targets the polynomial equation systems you get. This is not a problem only Rasta has; in principle it's a general problem for every cipher how to argue against such attacks which directly target the polynomial system of equations, and why this cannot be done efficiently. In Rasta, we have the specialty that we strictly limit the degree, because we want the low ANDdepth, which also limits the possible number of different monomials in our polynomial equation system. And if this number gets too low, this might allow trivial linearization, where you just replace the monomials with new variables and then solve the resulting linear system of equations. For this reason, we might want to increase the key size and the block size to get more monomials. If we put this into a figure as well, we see that the number of monomials for a fixed number of rounds increases with increasing key and block size.
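The trivial linearization just mentioned can be demonstrated on a made-up three-variable quadratic system over GF(2) (an illustration of mine, not Rasta's actual equation system): every monomial is treated as an independent linear unknown, so six equations and plain Gaussian elimination suffice.

```python
# Toy linearization of a quadratic system in key bits k0, k1, k2 over GF(2).
# The six possible nonconstant monomials each become one linear unknown.
MONOMIALS = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]

def eval_monomials(k):
    # value of each monomial: AND of the key bits it contains (min of 0/1 bits)
    return [min(k[i] for i in m) for m in MONOMIALS]

def solve_gf2(rows, rhs):
    # plain Gauss-Jordan elimination over GF(2) on the augmented matrix
    n = len(rows[0])
    aug = [r[:] + [b] for r, b in zip(rows, rhs)]
    for col in range(n):
        piv = next(i for i in range(col, len(aug)) if aug[i][col])
        aug[col], aug[piv] = aug[piv], aug[col]
        for i in range(len(aug)):
            if i != col and aug[i][col]:
                aug[i] = [a ^ b for a, b in zip(aug[i], aug[col])]
    return [aug[i][n] for i in range(n)]

def recover_key(coeffs, keystream_bits):
    # linearize: solve for all six monomial unknowns, then read the key
    # off the degree-1 entries (the first three monomials)
    return solve_gf2(coeffs, keystream_bits)[:3]
```

This is why the number of possible monomials must stay large: with too few monomials, an attacker who collects enough keystream equations solves for them all with linear algebra alone.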
What's also shown here is how the number of monomials behaves if you guess a certain number of key bits at the input. For Rasta, we have decided that we want the number of monomials to reach two to the power of the security level even under guessing of key bits, and to protect further against such attacks we also apply a limit on the data complexity, which limits an attacker in collecting enough equations to solve the system. So we end up with these numbers for the instances of Rasta. The numbers in this triangle here come from the bounds we get on the linear approximations, and this triangle here comes from the fact that we do not want a low number of maximum possible monomials. And we do not define Rasta for two and three rounds, since our analysis relies on good diffusion properties, which are not guaranteed for such a low number of rounds. And now we come to cryptanalysis, or how we get to the parameter set for Agrasta. Besides computing such bounds, we also tried to break the cipher. For instance, we tried to use a SAT solver, and the result was that exhaustive search performs better for more than one round of Rasta. We also did various experiments with toy versions, for instance counting the number of possible monomials you get at the output, and we did not observe any anomalies or outliers there. And we also tried various dedicated attacks, for instance on weakened variants of Rasta, and also on variants of two rounds and three rounds of Rasta where the security level is approximately the block size. And what such an attack on three-round Rasta, where the block size is approximately the security level, looks like, I will show you next. In principle, what we do here is a guess-and-determine attack. But in contrast to normal guess-and-determine attacks, where you try to come in from both sides, guess bits you do not know, and verify the guess in the middle,
we only come from the front side, since we use the chi layer, and the inverse of the chi layer is quite hard to deal with. So what we do in this attack is exploit the fact that, with a known affine matrix here, we can express the input bits of the first chi layer as affine functions in the key bits. And if we guess every second bit at the input of the chi layer, this gives us equations in the key bits for each guess, but it also allows us to express the output bits of the chi layer as affine functions in the key bits. And since we can express the output bits, we can also express the input bits of the next chi layer. Here we cannot guess every second bit anymore, so we have to guess fewer bits, and as a consequence we can only express a limited number of bits at the output of the second chi layer as affine functions in the key bits. So we can actually only attack permutations which have a weak affine layer here, which means we have to search for a block which contains such a weak affine layer, one that allows us to express several consecutive bits as functions of just these bits and not of the bits above. And then what we can do next is guess every second bit of this range here, and then we get linear functions of these bits in the key bits, which we can connect to the key again if we know the keystream here. So, as I've mentioned, we consider Rasta to be very conservative. The main reason is that I've colored here the parameter sets that we can actually attack, and here are the chosen parameters of Rasta; as you can see in this figure, there is quite some distance to the attackable area. For this reason we have also defined Agrasta, which is just based on the best attacks we have, plus one round. So this is Agrasta here, and it should serve as a kind of challenge for future cryptanalysis, basically to see whether others can break more rounds than we can. And how do we define Agrasta?
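The core linearization step of this attack, guessing bits so that every nonlinear product inside the chi layer has at least one constant factor, can be sketched symbolically. The affine-form representation and the function names below are my own illustration, not the paper's notation.

```python
# chi's only nonlinearity is the product (NOT x[i+1]) AND x[i+2]; if every
# such consecutive pair contains at least one guessed (constant) bit, each
# output bit stays affine in the remaining unknown input bits.
# An affine form over m unknowns is a pair (constant, coefficient tuple).

def aff_const(c, m):
    # embed a constant (e.g. a guessed bit) as an affine form in m unknowns
    return (c, (0,) * m)

def aff_xor(a, b):
    return (a[0] ^ b[0], tuple(u ^ v for u, v in zip(a[1], b[1])))

def aff_scale(c, a):
    # multiply an affine form by a known constant bit
    return (c & a[0], tuple(c & u for u in a[1]))

def chi_affine(forms):
    # symbolic chi layer y[i] = x[i] XOR ((1 XOR x[i+1]) AND x[i+2]); in each
    # product at least one factor must be constant, otherwise the output
    # would be quadratic in the unknowns and this attack step fails
    n = len(forms)
    out = []
    for i in range(n):
        a, b = forms[(i + 1) % n], forms[(i + 2) % n]
        not_a = aff_xor(aff_const(1, len(a[1])), a)
        if all(u == 0 for u in not_a[1]):
            prod = aff_scale(not_a[0], b)
        elif all(u == 0 for u in b[1]):
            prod = aff_scale(b[0], not_a)
        else:
            raise ValueError("two adjacent unguessed bits: output not affine")
        out.append(aff_xor(forms[i], prod))
    return out
```

Deeper into the cipher, fewer positions can be guessed, so fewer outputs can be kept affine, which is why the attack only reaches weak affine layers after the second chi layer.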
Basically, we just take the minimum block size plus one bit to get an odd block size, and add one more round, as mentioned before. So we have four rounds for 80 and 128 bits, and five rounds for 256 bits. So let's come to a conclusion of what we have seen here. We have seen a new design strategy called Rasta and two parameter sets for it: one conservative, based on bounds and security arguments, and one more aggressive, based solely on the best attacks we can do. In the paper you can find benchmarks we did in HElib, which show that even the conservative versions of Rasta are competitive with existing designs, and as I've shown in the last figure, there is actually a huge gap between the best attacks we can do and the bounds that we get. So thank you.