Thank you very much for the introduction. Just as a short introduction: as you know, a biclique is a complete bipartite graph which, in the context of cryptanalysis, connects each state in a set of starting states S with each state in a set of ending states C; otherwise said, it connects a certain number of starting and ending states. Bicliques were introduced by Khovratovich, Rechberger and Savelieva as a formalization of the so-called initial structures in splice-and-cut attacks, which were introduced by Aoki and Sasaki. So in the context of cryptanalysis, the vertices of the graph are the starting and ending states, and the edges of such a graph are encryption paths, where each path uses a different key. The first attacks were preimage and second-preimage attacks on round-reduced SHA-2 and Skein and their compression functions; they were soon afterwards adapted for key-recovery attacks on the full AES by Bogdanov, Khovratovich and Rechberger, and not much later many more key-recovery results followed, for instance on SQUARE by Mala, on ARIA-256 by Chen and Xu, and by many others who contributed to this ongoing research. While the first publications covered independent bicliques, in a series of papers Khovratovich, Leurent, Rechberger and Bogdanov introduced several further approaches, for instance probabilistic bicliques, among other variants.

In late 2011 and early 2012, the AES paper by Bogdanov et al. was very interesting to us, since we wanted to study recent cryptanalysis techniques, and our initial aim was simply to completely understand their attack. During this work we decided to build a small framework, which then grew, and which in the end automates a big part of the analysis of finding independent bicliques of maximal length. We focused on independent bicliques because they are very interesting: there is a generic method behind them, but more importantly they are well formalized, and the independence of the trails gives a very well-defined criterion whose testing can be automated. So in the remaining 15 to 20 minutes of this talk, after this short motivation, I will briefly recall the necessary details of biclique cryptanalysis, have a look at our framework, show you the results, and keep it brief, since the details are all in the paper.

What is biclique cryptanalysis, in short? Since it was introduced as a replacement for initial structures, given a primitive, say a cipher or a compression function, one uses a splitting of this primitive as in the usual splice-and-cut, meet-in-the-middle setting: one defines an arbitrary starting point and an arbitrary matching point in the primitive. For instance, we can define a splitting of a primitive E into the parts E1, E2 and E3. Like initial structures, bicliques are constructed over a certain number of rounds around the starting state, which we here call E3. Khovratovich et al. introduced several construction algorithms. For instance, one can take a starting state S_0, define a first base key K[0,0], and compute in forward direction to derive the ending state C_0.
Then, the adversary has to find 2^d good forward differentials, the Δ-differentials, and computes with each so-derived new key 2^d times in forward direction; if it is a good cipher, one expects to arrive at 2^d different ending states C_i. Similarly, the adversary has to find 2^d good backward differentials, the ∇-differentials, and computes 2^d times in backward direction; since these are again different key differences, one expects to arrive at 2^d different starting states S_j. The crux, the interesting observation by Bogdanov et al., was this: if the trails are independent, meaning they do not share any active non-linear operations, then one can fix each of these encryption paths through one of the starting states and through one of the ending states. With such a biclique, one can therefore test a set of 2^(2d) keys with only 2 * 2^d computations, which results in a significant computational advantage over this sub-cipher.

If one now has a meet-in-the-middle or splice-and-cut attack on the remaining rounds, the adversary can of course combine the advantage gained from the biclique part with that attack. As an alternative, for the parts not covered by the biclique, they proposed a technique which they called matching with precomputations, which allows the adversary to cover any number of further rounds without depending on an existing meet-in-the-middle attack. With this technique, the adversary first derives the plaintexts or the ciphertexts, whichever end requires fewer computations; one direction suffices, so in this setting it suffices, for instance, to use the decryption oracle to obtain the corresponding plaintexts for the ciphertexts. Then she computes in forward direction from every plaintext to some chosen matching state v, additionally computes backward from the intermediate states S_j to the matching state, and stores these values. For all remaining 2^(2d) minus 2 * 2^d computations, the adversary then only has to recompute those parts of the remaining rounds where the states differ from the stored computations. So the computational cost can be reduced significantly, also in the case where no meet-in-the-middle attack exists, and it can be reduced even further by partial matching.

After the AES paper there was a lively discussion whether or not attacks using such an accelerated exhaustive search are real attacks. I do not want to contribute to this discussion. The authors themselves put forth that this kind of analysis is not able to conclude whether a particular cipher is secure, because it is in fact a generic technique that can be applied to any primitive and any number of rounds. There were also well-known people stating that the low computational advantage is not so relevant for key lengths such as 128 bits or more. Nevertheless, speaking more generally, biclique attacks can really help to derive new lower bounds on the computational cost of attacking individual ciphers. And for us, since we wanted to consider independent bicliques, our major motivation was really the most important step.
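To make this counting concrete, here is a minimal sketch of how the keys of one such group are enumerated. It only illustrates the bookkeeping: the base key and the key differences below are hypothetical placeholders rather than values from a real attack, and the sub-cipher itself is not modelled.

```python
# Minimal sketch (not the authors' code): enumerate one key group of a biclique
# of dimension d. The base key and the Delta/Nabla key differences below are
# hypothetical; in a real attack they come from the biclique construction.
d = 3
base_key = 0x000102030405060708090A0B0C0D0E0F   # hypothetical 128-bit base key

deltas = [i << 120 for i in range(2 ** d)]      # forward key differences Delta_i
nablas = [j for j in range(2 ** d)]             # backward key differences Nabla_j

# Every key K[i,j] = base_key ^ Delta_i ^ Nabla_j is covered by the biclique,
# so 2^(2d) keys can be tested although only 2 * 2^d trail computations over
# the biclique rounds were spent.
key_group = {base_key ^ dk ^ nk for dk in deltas for nk in nablas}

print(f"keys covered by one biclique : {len(key_group)} (= 2^{2 * d})")
print(f"sub-cipher computations spent: {2 * 2 ** d} (= 2 * 2^{d})")
```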
That step is: how many rounds can a biclique cover in forward and backward direction for a given cipher? And then, to support the analysis of the remaining, generic steps. Our framework basically performs three tasks, of which the most important is of course the biclique search; in addition, for the remaining rounds, it identifies a matching which minimizes the number of parts that have to be recomputed, and it provides a visualization for the analyst.

The task of searching independent bicliques can be reduced to finding a pair of sets of differentials, Δ_i and ∇_j, which share no active non-linear components. For instance, here the AES is depicted, with the forward differentials shown on the left and the backward differentials in the middle. Since the only non-linear operation in the AES is the S-box layer, one can see that in this example the differentials are independent of each other. So what we could do, for a given cipher and a given dimension of the biclique, is to test every possible key difference in forward and backward direction. This task scales quite heavily: for instance, for a key size of 128 bits and a small biclique dimension, one would already have to test about 2^37 differences for one direction, and since one has to test that many forward differences against the same number of backward differences, even this simple setting requires testing an enormous number of pairs for independence. Nevertheless, for nibble-wise or byte-wise operating primitives, one can reduce the time and memory complexity drastically by considering only nibble-wise or byte-wise differences. For instance, for a nibble-wise primitive like LED-64, one has to test only 432 or 496 differences in one direction, and for byte-wise primitives such as the AES-128 this number is about 2,601, which is very manageable.

One word on how one should try to inject key differences at all. Since we want to obtain a long biclique, the differences should affect as few parts of the state as possible. Therefore, it is desirable to inject key differences with the least possible Hamming weight at the beginning of the Δ-differentials and at the end of the ∇-differentials. Moreover, for ciphers whose key length exceeds the state size, it is in many cases possible to pass one round for free, or even more, by injecting the differences as late as possible. So, to be consistent, we have chosen to inject the key differences into subkeys for ciphers whose key size exceeds the state size; this applies, for instance, to the AES and certain AES-like ciphers. There are, of course, different variants. The very basic possibility which I sketched here is to inject only the minimum number of bit differences, as defined by the dimension. This is a desirable approach, yet in some cases it may not lead to the optimal number of rounds that can be covered. For instance, one could instead think of injecting equal differences into several bits, bytes, or nibbles, depending on the cipher, in the hope of cancelling out parts of the round transformation.
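Coming back to the independence criterion, which is exactly what makes this search automatable, the following small sketch shows the test under the simplifying assumption that a trail is described only by the positions of its active S-boxes; the activity patterns are made up for illustration.

```python
# Sketch of the independence criterion (a simplification, not the framework's
# code): a forward and a backward differential trail are compatible for an
# independent biclique if they share no active non-linear operation. A trail is
# modelled here only by the set of (round, cell) positions of its active S-boxes.
from typing import Set, Tuple

ActiveCells = Set[Tuple[int, int]]   # (round index, S-box position in the state)

def independent(forward_trail: ActiveCells, backward_trail: ActiveCells) -> bool:
    """True if no S-box is active in both trails."""
    return forward_trail.isdisjoint(backward_trail)

# Hypothetical activity patterns over three biclique rounds of a 16-cell state.
forward  = {(0, 0), (1, 0), (1, 5), (2, 0), (2, 5), (2, 10)}
backward = {(2, 3), (1, 7), (1, 12), (0, 2)}

print(independent(forward, backward))   # True -> the pair may form a biclique
```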
Moreover, we of course provide the option to use more sophisticated custom differences, for instance differences spread over one column or the next column, or, more generally, differences chosen such that parts of the round transformation may cancel out. Since testing all such options is infeasible, we leave the specification of such custom differences to the user as a consequence. So much for the biclique search; now to the matching on the remaining rounds. Here the framework offers the option to test all rounds of the remaining part as the splitting point, in order to minimize the number of parts of the cipher that have to be recomputed, and to test all possible bytes of the state as the matching point.

Generally speaking, which properties do we need for our framework? We propose to compute and store the forward differentials first, then to compute the backward differentials and to test each pair independently. For the first two difference-injection options this is quite a feasible task for most ciphers. In the case of sophisticated custom differences, if the differentials do not fit into memory, one can simply perform the biclique search in iterations; this takes a little more time, but is still practical. As you may have noted, it is desirable to have round-wise encryption and decryption. And since we want to inject key differences into subkeys, we need an invertible key schedule if we want to know that this key splitting is really sound. For instance, as you may remember, for the AES and AES-like ciphers one can reconstruct the secret key from a sufficient number of consecutive subkey bytes at any position of the expanded key. This also applies to many lightweight ciphers, for instance PRESENT-like ciphers, which have a key register in which the secret key is stored at the beginning and which is updated by an invertible function. While this holds for many ciphers, basically for those which were of interest for us, there are ciphers such as ARIA which do not have an invertible key schedule; for those we say, okay, then we have to inject the key difference directly into the secret key as the solution. Nevertheless, we need a consistent interface, which is provided by the framework, together with cipher implementations which adhere to it.

Coming to the usage: the framework requires the user to specify the target cipher, a strategy for how she wants to build the starting key differences, a cipher-dependent strategy for locating the non-linear operations so that the differentials can be tested correctly for independence, and, in addition, the dimension and the maximum number of rounds she would like to have tested. If a biclique over a number of rounds is found, it is serialized; the matching step then just takes the desired serialized biclique, determines the matching, derives the computational complexity, and outputs this to the user. For the visualization, we decided to output the biclique and the matching as figures, just to have a quick illustration of the result. For our first results, we were happy to have the results by Bogdanov et al. on the AES, which we could use not to tune our implementation towards, but to verify that it was working correctly, and we were very glad to obtain, for the number of rounds they considered, the same results.
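The store-then-test workflow just mentioned can be organized roughly as in the following sketch. The propagation functions are toy placeholders standing in for the cipher-specific trail propagation, and the overall structure is an assumed illustration, not the framework's actual code.

```python
# Rough sketch (assumed structure, not the framework's code) of the search
# workflow: all forward differential trails are computed and stored once, then
# every backward trail is tested against each stored forward trail for shared
# active S-boxes.
from typing import Iterable, Iterator, List, Set, Tuple

Cell = Tuple[int, int]               # (round, S-box position)

def propagate_forward(key_diff: int) -> Set[Cell]:
    # toy placeholder: pretend each difference activates one cell per round
    return {(r, key_diff % 16) for r in range(3)}

def propagate_backward(key_diff: int) -> Set[Cell]:
    # toy placeholder with the same behaviour, offset by eight cells
    return {(r, (key_diff + 8) % 16) for r in range(3)}

def search_pairs(fwd_diffs: Iterable[int],
                 bwd_diffs: Iterable[int]) -> Iterator[Tuple[int, int]]:
    stored: List[Tuple[int, Set[Cell]]] = [(kd, propagate_forward(kd))
                                           for kd in fwd_diffs]   # store once
    for bwd_kd in bwd_diffs:                                      # then scan
        bwd_trail = propagate_backward(bwd_kd)
        for fwd_kd, fwd_trail in stored:
            if fwd_trail.isdisjoint(bwd_trail):
                yield fwd_kd, bwd_kd   # candidate pair for an independent biclique

print(sum(1 for _ in search_pairs(range(4), range(4))))   # number of candidates
```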
In the following, we then considered further AES-like ciphers such as BKSQ, which is a 96-bit version of the AES, or Khazad, had a further look at lightweight ciphers such as the LED family, and also considered PRESENT. Of course, one has to compare these results to the previous contributions which were named at the beginning of this talk, for instance the AES results by Bogdanov et al., but also those by Mala and many other researchers. The expert may have seen, if there had been enough space to place the two tables next to each other, that the computational complexity of their results is a little bit better than ours. This is of course due to the fact that we use an automated approach, and this is okay for us, since we really want to see how far we can get automatically. So our framework is very good at giving a first impression of how far one can apply a biclique attack to a certain primitive; it is always best to then investigate the considered primitive more deeply, find maybe more sophisticated differences by hand, and use the framework again for a quick verification of these results and the derived complexities. Thank you very much for your attention.

So, are there any questions?

Question: Here it looks as if you have considered mostly the application of your bicliques when they are used for accelerated exhaustive key search. My question is: do you think that optimizing bicliques for this purpose is the same as optimizing them for when they are used just as an improvement of classical meet-in-the-middle attacks, where you do not search the whole key space? Or, to say it in another way, do you think that your tool can be used for finding optimized bicliques for meet-in-the-middle attacks that reduce the size of the key search?

Answer: Actually, for all ciphers which have a certain diffusion in the key schedule, one should rather look at optimizing the length of the meet-in-the-middle part of the attack, because a long biclique can restrict the differences used in the forward and backward parts of the attack too much and can lower the length of these parts significantly. For instance, for the AES, which has a slow but certain diffusion in the key schedule, from what I know one should first try to find a long meet-in-the-middle attack and then derive a biclique from it. At the moment, we really wanted to consider situations where there is no meet-in-the-middle attack on the remaining rounds; if there is one, the situation is much better, and we just wanted to contribute to finding new lower bounds for some ciphers.

Question: Okay, so in the other case, instead of trying to find the longest bicliques, maybe you want to find the one that adapts best to the non-biclique parts?

Answer: For the AES, I have realized that it is really better to work from the optimized meet-in-the-middle parts.

Okay, thank you. No more questions? So let's thank the speaker again.