Hello, my name is Jorge Chávez-Saab and I will be presenting at Asiacrypt the work "SwiftEC: Shallue–van de Woestijne Indifferentiable Function to Elliptic Curves", by myself, Francisco Rodríguez-Henríquez, and Mehdi Tibouchi. This work represents a big leap for hashing into ordinary elliptic curves, as it results in the first known constant-time indifferentiable hash function to a large set of curves that requires only a single square-root computation. It also provides an efficient point-representation algorithm which is essentially the inverse of this hash, and together these constitute what we call an admissible encoding.

A quick word on the problem of hashing to elliptic curves. Many applications require hashing into a cryptographic group. If the group is a finite field, this is as simple as hashing to a long bit string and interpreting it mod q, but in elliptic-curve cryptography it is not trivial. The problem we face is: how can we generate x and y coordinates that look random while still satisfying the curve equation?

The basic idea is that we obtain a hash into the curve by first hashing to the field of definition and then composing with an encoding function, which is a map from the finite field onto the elliptic-curve points. So we have the forward direction from the field to the curve, and we also assume we have a function in the inverse direction, from the curve to the field. The encoding is usually not one-to-one, so the inverse is allowed to return any one of the possible preimages at random.

We obtain a hash function by first hashing into the field and then encoding the field element to a point on the curve, and it can be shown that this is a cryptographically secure (indifferentiable) hash as long as three conditions are met: the encoding has to be efficiently computable; it has to be regular, meaning that the distribution of images is close to uniform; and it has to be samplable, meaning that there is an efficient algorithm to compute uniformly random preimages.
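The hash-then-encode composition described above can be sketched in a few lines of Python. Everything here is illustrative: the toy curve parameters are made up, and the encoding is a naive search for a valid x-coordinate standing in for a Shallue–van de Woestijne-style map (this stand-in is neither constant-time nor regular; it only shows the composition structure).

```python
import hashlib

# Toy curve y^2 = x^3 + a*x + b over F_q (illustrative parameters only).
q = 10007          # small prime with q % 4 == 3, so sqrt is one exponentiation
a, b = 3, 7

def hash_to_field(msg: bytes) -> int:
    """Hash to a long bit string and interpret it mod q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def encode(u: int):
    """Stand-in encoding F_q -> E(F_q).

    NOT the constant-time SW map from the talk: it just walks x = u, u+1, ...
    until x^3 + a*x + b is a square, then takes one square root.
    """
    x = u
    while True:
        rhs = (x ** 3 + a * x + b) % q
        y = pow(rhs, (q + 1) // 4, q)   # candidate sqrt (valid since q % 4 == 3)
        if y * y % q == rhs:
            return (x, y)
        x = (x + 1) % q

def hash_to_curve(msg: bytes):
    """The composition encode ∘ hash_to_field used by all such constructions."""
    return encode(hash_to_field(msg))
```

For any message, the output is a valid point: `x, y = hash_to_curve(b"hello")` satisfies `(y*y - x**3 - a*x - b) % q == 0`.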
If the encoding satisfies all three properties, we say that it is an admissible encoding.

Now, the main ingredient in our construction is the Shallue–van de Woestijne map, which is actually a family of encodings: for each field element u we obtain an encoding f_u. These encodings are quite efficient, as the main cost is a single square-root computation. However, for any fixed choice of u, the encoding f_u is not regular. There is a construction, called Elligator Squared, that solved this by taking two evaluations of f_u and adding them together. It can be shown that this new construction, with a domain twice as large, is in fact admissible, even though it now requires two evaluations of f and hence two square roots. This represented the state of the art so far.

Our new SwiftEC construction still uses a domain twice as large, but now the second input to the function is u itself. Rather than being a fixed parameter, u becomes a second variable, and what we gain is that we are back to doing a single square root, meaning that our construction should be roughly twice as fast.

This change is quite simple to state, but it has very deep implications. The SW maps are actually a composition of maps that go through various intermediate spaces parametrized by u. Now that u is a variable, the geometry is completely different, and we need a new proof of regularity, which is one of the main theoretical contributions of the paper. On the other hand, we also have practical implications: before, it was known how to evaluate f_u efficiently as long as we allow for some precomputation after learning u, but now we need an algorithm that evaluates f_u efficiently for any choice of u. We show that this is in fact still possible as long as the size of the field is congruent to 1 mod 3, the curve discriminant is a square, plus one other condition that depends on the curve.
Unlike Elligator Squared, this means that our construction is not applicable to every single elliptic curve, but it is applicable to a large set of them. Additionally, the conditions are not invariant under isogeny, so as long as the q ≡ 1 mod 3 condition is met, the other two conditions can be bypassed by finding an isogenous curve that does satisfy them and then composing the SwiftEC construction on that curve with the isogeny.

Finally, we also present the ElligatorSwift algorithm, which is the one that computes a random preimage. This is useful for representing points as uniform bit strings, which was the main idea of the original Elligator construction. The way it works is by picking a random u and trying to invert f_u at x. Because the f_u's are not regular, for any given u it may be that x has no preimage; in that case you just restart, and from there you just have to do a bit of rejection sampling to ensure uniformity. You can actually reject by doing quadratic residuosity tests before committing to any actual square-root computations, so in the end you always do at most two square roots, which is a big improvement over the previous Elligator Squared variant.

So that is a short summary of our work, and if you want to hear more about the details, then we look forward to seeing you at Asiacrypt. Thank you.
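The "reject cheaply before committing to a square root" idea can be illustrated in isolation. This is a generic sketch, not the actual ElligatorSwift inversion: it samples random field elements, discards non-squares with an Euler-criterion (quadratic residuosity) test, and only performs the expensive square root once a candidate passes, so restarts never spend a square root.

```python
import random

q = 10007  # illustrative prime with q % 4 == 3

def is_square(t: int) -> bool:
    """Cheap quadratic residuosity test via Euler's criterion."""
    t %= q
    return t == 0 or pow(t, (q - 1) // 2, q) == 1

def sample_root(g):
    """Rejection-sample u until g(u) is a square, then take one square root.

    Mirrors the ElligatorSwift pattern: restarts are decided by the cheap
    residuosity test, and the costly sqrt is computed only on acceptance.
    """
    while True:
        u = random.randrange(q)
        t = g(u) % q
        if not is_square(t):
            continue                    # cheap reject: no square root spent
        y = pow(t, (q + 1) // 4, q)     # the single committed square root
        return u, y
```

For example, `u, y = sample_root(lambda u: u * u + 1)` returns a pair with `y * y ≡ u * u + 1 (mod q)` after an expected handful of cheap rejections.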