So, welcome. I am Vidal Attias from the IOTA Foundation, and today I will present the work that we have done with Luigi Vigneri and Vassil Dimitrov, an implementation study of verifiable delay functions.

One big question in blockchain research is: how can we prove that time has been spent? This question has many applications in consensus, rate control, and distributed randomness generation, and not only in blockchain; it also comes up in security, for example in anti-spam mechanisms.

One of the most used current solutions is what we call proof of work, which everyone should be aware of. The quick idea is that, using a hashing function publicly known across the network, we have to find an input such that the hash of this input is lower than a certain threshold, and this threshold determines the time spent, because there is no better way than trying inputs randomly. It is very fast to verify, since verification is only one hash, so typically maybe one microsecond; it is lightweight to implement; and it does not put a big overhead on the network, just a few hundred bits. But it has a huge parallelization potential: if you get 1,000 machines, then you have 1,000 times more hashing power than with just one machine. This is the major challenge in the blockchain area, especially for Bitcoin, for example: how to overcome this race to mining power that we can see.

Very recently, about two years ago, an alternative arose which is called the verifiable delay function. What exactly is a verifiable delay function? It is a set of three algorithms. One is the setup, which is a step that we process at the very beginning of the network to initialize the environment. It is very general; for example, it can produce an RSA group or an elliptic curve. Then there is the evaluation.
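Before moving on to the VDF algorithms, the proof-of-work puzzle just described can be sketched in a few lines. This is a minimal illustration, not the code of any particular blockchain; the function names and the 8-byte nonce encoding are my own choices.

```python
import hashlib

def proof_of_work(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) is below a threshold.

    The threshold 2^(256 - difficulty_bits) makes the expected number of
    trials about 2^difficulty_bits, which is what ties the puzzle to time
    spent -- but the search parallelizes trivially across machines.
    """
    threshold = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < threshold:
            return nonce
        nonce += 1

def verify_pow(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, hence very fast and lightweight.
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is visible directly: the search loop runs an unbounded number of hashes, while the check runs exactly one.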
When one of the participants of the network needs to evaluate a VDF, it takes an input x from a very general input space, which is determined by the network and the protocol characteristics, and a challenge tau. It returns a solution y and, optionally, what we call a proof pi, which is used to speed up the verification. And finally we have the verification, which is performed by anyone in the network who wishes to check that the pair (y, pi) that the evaluator sent on the network is actually a solution for the input x and challenge tau.

So, some insight about what a VDF is. The evaluation must run in tau sequential steps, which means that it is not parallelizable. This is the main difference with proof of work: even if you have 100 cores inside your CPU, you cannot go faster, and here we really care...

[Connection lost.] Vidal, I don't know if you can hear me, but you seem to have a bad connection. We can't hear you very well. Maybe if you turn off your camera, that could help your transmission. I know nothing about this, but maybe it could help. Okay, so we'll just wait a few seconds to see if Vidal connects again, and if he does not, then we'll move to the second presentation, but I suggest that we still wait for one minute maybe. So, Vidal, are you back? I see you back among the participants. Maybe you can keep your camera off; just turn on your mic and share your slides. Can you hear me? Yes. Okay. I saw that I was muted, but I don't understand what happened. Well, you were disconnected; we just lost the connection. It seems that you have a pretty bad connection. Yeah, it was very bad today, I don't know why. I will try to keep going, but I honestly don't know why it is so bad. Okay. So, the evaluation of the RSA-based... Share your slides again. Yeah, sorry.
The evaluation for the RSA-based VDF takes an input x and a challenge tau and computes y = x^(2^tau), which actually means squaring x tau times and getting back the value. You don't see the slides? No. Now we see them. Yeah, I don't know why it was disconnected. Okay, sorry.

So, the evaluation consists in squaring a number tau times in the RSA group, and the underlying assumption of RSA-based VDFs is that this operation can only be done sequentially; there is no known way of parallelizing it. And actually, if someone knows the private key of this RSA group, which is the factorization of N, then he can compute it in constant time, very fast, because of the modular properties of RSA groups.

Okay, so here is the difference between the two VDFs. In the Pietrzak VDF, we split the exponentiation that we just computed into log tau small pieces, using a Fiat-Shamir heuristic for security, and it is reconstructed during the verification. In the Wesolowski one, we compute a proof, which is a sub-exponentiation: we take the exponent 2^tau and divide it by l, where l is a small prime of typically around 200 bits, and this prime is also selected using a Fiat-Shamir heuristic. The verification is then done by reconstructing y: we compute pi to the power l, times x to the power of the remainder of 2^tau divided by l. At a high level, the major difference between the two is that the proof is composed of log tau group elements for Pietrzak and a single element for Wesolowski. That can make a huge difference in networks, because one element can be about 2,000 or 4,000 bits, and log tau matters, as we will see from the values of tau.

So what is the contribution of our paper? So far we have a formal framework proposed by Dan Boneh for what a VDF is, and we have the two major proposals, by Pietrzak and by Wesolowski.
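The Wesolowski-style evaluate/prove/verify just described can be sketched end to end. This is a toy illustration only: the modulus below is built from two small primes I picked for the example (in a real deployment N is a 2048-bit modulus whose factorization nobody knows), and the Fiat-Shamir prime here is 32 bits instead of the ~200 bits mentioned above.

```python
import hashlib

# Toy RSA modulus -- illustration only; real deployments use ~2048 bits
# and nobody may know the factorization.
P, Q = 1000003, 1000033
N = P * Q

def evaluate(x: int, tau: int) -> int:
    """Sequential part: y = x^(2^tau) mod N via tau repeated squarings."""
    y = x % N
    for _ in range(tau):
        y = y * y % N
    return y

def _is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fiat_shamir_prime(x: int, y: int) -> int:
    """Derive the small prime l from (x, y) -- ~200 bits in the real scheme."""
    seed = int.from_bytes(hashlib.sha256(f"{x}|{y}".encode()).digest()[:4], "big")
    l = seed | 1
    while not _is_prime(l):
        l += 2
    return l

def prove(x: int, tau: int, y: int) -> tuple:
    l = fiat_shamir_prime(x, y)
    pi = pow(x, (1 << tau) // l, N)   # pi = x^floor(2^tau / l)
    return pi, l

def verify(x: int, tau: int, y: int, pi: int, l: int) -> bool:
    if l != fiat_shamir_prime(x, y):
        return False
    r = pow(2, tau, l)                # r = 2^tau mod l, cheap to compute
    # Since 2^tau = l*floor(2^tau/l) + r, we have pi^l * x^r = x^(2^tau) = y.
    return pow(pi, l, N) * pow(x, r, N) % N == y % N
```

The verifier never performs the tau squarings: it only does two modular exponentiations with small exponents, which is why verification stays fast no matter how large tau is.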
We also have a theoretical survey by Dan Boneh and his PhD students, in which he gives some insights about why the two major proposals are good VDFs in theory. But we had absolutely no idea, no concrete values, of how VDFs can be used in industry, in an industrial environment, and especially under critical timing constraints. For example, at the IOTA Foundation we want to use VDFs as a rate control mechanism, because our protocol can be very easily spammed, and so verification must be performed very, very quickly. We had no idea how fast it was, and there were no implementation figures, whether from academia or industry. So what we do is give experimental results on VDFs, suggest some improvements using state-of-the-art algorithms, and make a viability comparison between proof of work and VDFs.

Our experimental setup: we ran the simulations on an 8th-generation Intel Core i7, so about three years old, using the GMP library in C++ for multi-precision computation, so that you can have an idea of how powerful the implementation was. Note also that we did not implement anything in assembly.

Concerning the evaluation: in this plot, on the x-axis we have the value of tau ranging from 2^20 to 2^25, and on the y-axis the computation time. Unfortunately the time unit is not indicated on the plot; it is in milliseconds. But what is interesting is the trend: we actually find a linear and pretty predictable evaluation time, as the theory predicted. Also interesting are the different lines: in red we took an RSA modulus size of 500 bits, then 1,000, and 2,000 bits for blue. So we see that the modulus size has a clear impact on the performance, which should be taken into account when using VDFs for industrial purposes.

Concerning the verification, we have a very nice property: the verification runs in essentially constant time in tau. Here I have an issue with the legend, but the Wesolowski one is the dotted line whereas the Pietrzak one is the solid line.
And we can see that it can be performed in under one millisecond, which means that the VDF should not be a bottleneck for transaction verification. The constant verification time is something very important, because it means that you can increase or decrease the difficulty in the network without having to worry about the impact on your transaction-throughput bottleneck.

Here we have a plot of the verification time for the Wesolowski VDF with a varying size of the prime l that we talked about before, which is the prime number by which you divide the challenge exponent when you build the proof. It is also important to take into account that this brings a trade-off between security and performance, because just doubling the size of the prime l doubles the time it takes to compute the verification.

To stick to the Wesolowski one, which is the one we went for due to the shortness of the proof and the better performance that we found: we found out that the main part of the verification computation was spent on rebuilding y by performing a double exponentiation, and this spans between 30% and 90% of the verification time. So it is a very critical part of the verification to optimize for critical applications where you have to verify a lot of VDFs at the same time.

One solution is to use what we call multi-exponentiation algorithms. A multi-exponentiation algorithm computes a product of exponentiations modulo the same modulus faster than performing the separate exponentiations and then taking their product. In our paper we have studied Dimitrov's algorithm, but it is a bit outdated. So the work that we are doing now is to also study the simultaneous 2^w-ary method, which consists in taking a window of size w over the exponents and then using a kind of square-and-multiply method.
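The double exponentiation pi^l * x^r mod N that dominates Wesolowski verification is exactly where these algorithms help. Here is a minimal sketch of the simplest case, the shared-squaring trick (often attributed to Shamir), which is the w = 1 instance of the simultaneous 2^w-ary method; the function name is my own, and this is not the paper's GMP/C++ implementation.

```python
def multi_exp(a: int, e: int, b: int, f: int, n: int) -> int:
    """Compute a^e * b^f mod n with one shared squaring chain.

    Instead of squaring once per bit per exponent (two full square-and-
    multiply passes), we precompute a*b and scan the bits of e and f
    together, squaring the accumulator only once per bit position.
    """
    ab = a * b % n
    result = 1
    for i in range(max(e.bit_length(), f.bit_length()) - 1, -1, -1):
        result = result * result % n        # one shared squaring per bit
        bit_e = (e >> i) & 1
        bit_f = (f >> i) & 1
        if bit_e and bit_f:
            result = result * ab % n        # multiply by the precomputed pair
        elif bit_e:
            result = result * a % n
        elif bit_f:
            result = result * b % n
    return result
```

The 2^w-ary generalization precomputes a table of a^i * b^j for all w-bit windows (i, j), trading memory for fewer multiplications; the sliding-window variants mentioned next refine the same idea.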
Then there is the Yen-Laih-Lenstra simultaneous sliding-window algorithm, which is an optimization in which you have a sliding window instead of a fixed window. There is a binary-GCD-based multi-exponentiation algorithm that is actually resistant to what we call side-channel attacks. And there are modern solutions that we are investigating which use parallelization; they are very recent, like two to five years old. In the paper we only talk about Dimitrov's algorithm, but, as I said, it is a bit outdated.

One big issue when it comes to VDFs used for blockchain is: how can we generate the setup in a decentralized way? Because, as we said, if someone knows the private key of the RSA group, then he can cheat and compute the VDF instantly. This is a major problem for VDFs now, and it has been the subject of four papers published at the Stanford Blockchain Conference this year. One of the main papers is by the Ligero team, who created a fully decentralized way to generate a safe RSA modulus, because it is absolutely not a trivial problem.

Finally, we have a discussion about VDF viability. Here we have a plot: on the x-axis we have the price in United States dollars that one can invest into spamming the network by evaluating VDFs or proofs of work, and on the y-axis we have the speedup in throughput, which means the amount of VDFs or proofs of work that the attacker can evaluate per unit of time. The dots are the proof-of-work evaluations and the squares are VDFs, and we have different colors for different kinds of hardware.
What we can see is that, globally, when you upgrade hardware, you get a lower speedup with VDFs than you would get with proof of work, which is good because it means that investing in very powerful hardware does not yield that much more power, mainly because of the non-parallelizable part of VDFs that I explained, whereas you can do a lot of inner parallelization of hashing on FPGAs. But what is more interesting is the difference between the plain markers and the others: plain means the attacker has only one machine, and the others mean the attacker has many machines. What we can see is that multiplying the number of machines gives you absolutely no additional VDF throughput, and so you have no incentive to enter a race for hashing power or VDF power.

What is important to understand here is that we only consider having to run VDFs iteratively. I mean, you cannot start a VDF before you have finished the previous one, because you use the output of one VDF as the input of the next. Because, of course, if you invest in many machines, then you could otherwise run that many VDFs in parallel on different inputs. So it is up to the network and the protocol to manage this and impose an iteratively sequential VDF computation. In a paper that we published at the GLOBECOM conference this year, we proposed a rate control algorithm for spam prevention, which we envision to use in IOTA, that explicitly uses this property of VDFs.

And a final word about research collaboration. If you want more information about VDFs, there is an association called the VDF Alliance. It is a consortium of industrial and research partners, such as Supranational, the Ethereum Foundation, Tezos, and more and more academic laboratories. They have a website, vdfresearch.org, which is a useful way to find information about VDFs. So this is the end of my presentation. If you have any questions, feel free to ask them.