Hey everyone, and welcome to the DevCon Blockchain Village. We are here to present our talk, titled "Verifiable Delay Functions for Preventing DoS and DDoS Attacks on Ethereum 2.0." This is the agenda for our talk, and at the very end of our session, if time allows, we'll try to pick up some of your questions and answer as many as possible.

Wrapping up our quick speaker introduction: my name is Tejasura Sogi. I'm a penetration tester and blockchain security researcher, and the founder of RazerSec, a community that helps people learn about blockchain security. I'm also a YouTuber under the name RizaSharp, where I post security content, and most importantly, I'm a cybersecurity enthusiast: I love the world of the internet. With that, I'll ask my partner, Mr. Gokul Alex, to say a few words about himself.

Thank you, Tejasura. Hi everyone, this is Gokul Alex. It's my pleasure and privilege to present on this platform at DevCon, in front of all of you. I'm the founder of Epic Knowledge Society, one of the fastest-growing digital education platforms for engineers and entrepreneurs in India; we are a non-profit organization. I'm also building a blockchain protocol integrating off-chain and on-chain data, called Fusion Ledger, and I represent a collection of cryptocurrency platforms in India, such as Algorand, Aeternity, Elixxir, Hashgraph, Horizen, and Tezos. I'm also a director of the Tezos India Foundation, and I was selected as an MVP for Hedera Hashgraph in 2018. I'm a global leader for the Elixxir collective created by David Chaum, known as the godfather of cryptography and blockchain technology, and a programmer and poet in my personal life. I've been a blockchain security auditor for government, public sector, and private sector organizations across various countries, and I'm actively researching the convergence of quantum algorithms and post-quantum cryptography. Over to Tejasura to get started with our presentation. Awesome.
Now let's start the session. Before moving on to the attack vectors, let's get a quick refresher on some of the fundamentals. What is the proof-of-work consensus algorithm? Before that, what is a consensus algorithm at all? Consensus algorithms decide who should create the next block, and this decision defines the fundamental property of blockchain: decentralization, the decentralized control of power. Consensus algorithms are all about building fault tolerance, including Byzantine fault tolerance, for distributed computing, and our talk is about the convergence of cryptographic schemes and consensus algorithms. Consensus algorithms come in both deterministic and probabilistic variants, and they help us achieve a consistent state and agreement between the participating nodes in a blockchain network.

Getting back to proof of work: in a proof-of-work consensus algorithm, there are miners who compete with each other to create the next valid block.
There are certain factors involved: the block header hash must be less than a set threshold, and there is a mining difficulty. The mining difficulty reflects the current computational power of the blockchain network; it is updated at regular intervals to match the network's current computational power, and it ensures that block creation happens at a certain frequency.

Now let's move on to some attack vectors on proof of work. On your screen you can see several attacks listed, the first of which is denial of service, or distributed denial of service. Let's talk through this attack. As we just discussed, the mining difficulty reflects the processing power of the network and is updated at regular intervals to track changes in that processing power. In most blockchains, sudden changes in processing power are not addressed by the nodes immediately; they are only addressed at the next scheduled difficulty update.

Let's take a scenario: there are nine machines on a blockchain network, and six of them are malicious actors. At first they do nothing and just sit in the network. Everything goes as planned: the mining difficulty is working as expected, block creation is happening at the intended frequency, and everything is fine. Then the malicious actors decide to turn their machines off. What is the impact? The remaining three machines now have to do the work of nine. The hash rate drops, the block creation rate drops with it, and this eventually creates a denial of service. There are other attacks as well, such as the routing attack.
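The back-of-the-envelope arithmetic behind that scenario can be sketched as follows. The 15-second target block time is an assumption for illustration only; the machine counts come from the talk's example.

```python
# Illustrative arithmetic for the "6 of 9 miners go offline" DoS scenario:
# difficulty was calibrated for the full network, and the shortfall is only
# corrected at the next scheduled difficulty adjustment.

TARGET_BLOCK_TIME = 15   # seconds (assumed target, for illustration)
TOTAL_MACHINES = 9
MALICIOUS_MACHINES = 6

# Difficulty was calibrated while all 9 machines were mining.
hashrate_before = TOTAL_MACHINES        # arbitrary "one machine" units
hashrate_after = TOTAL_MACHINES - MALICIOUS_MACHINES

# At fixed difficulty, expected block time scales inversely with hash rate.
block_time_after = TARGET_BLOCK_TIME * hashrate_before / hashrate_after

print(block_time_after)  # 45.0: blocks arrive 3x slower until the next retarget
```

Until the difficulty retargets, every block takes three times as long, which is exactly the slowdown that amounts to a denial of service.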
A routing attack consists of two parts. First there is the partition attack, which divides the blockchain network into groups, and then there is the delay attack, which tampers with the propagation of messages before forwarding them to the network. Moving on, there are user wallet attacks, smart contract attacks, transaction verification mechanism attacks, and then the well-known 51%, or majority, attack.

What is this 51% attack? Suppose there is a group of malicious actors that holds the majority of the network's computational power; they can take control of the blockchain network. How is that possible? As we know, block creation is a race between miners, so a group of malicious actors holding the majority of the hash power has a higher probability of winning that race and creating the blocks. That is the 51% attack. Moving on, there are also mining pool attacks, such as selfish mining and fork-after-withholding.

So let's talk about DDoS on Ethereum. There have been real incidents, so we'll pick up some of them. There was an attack on the Parity client: some Parity Ethereum nodes lost sync with the network because of the following issue. If you sent a Parity node a block with invalid transactions but a valid header borrowed from another block, the node would mark the block header as invalid and ban it forever, even though the header itself was still valid.
Moving on, there was another vulnerability found in Parity nodes as well. In May, the global hacking research collective SRLabs claimed that only two-thirds of the Ethereum client software running on Ethereum nodes had been patched against a critical security flaw discovered earlier that year. The data reportedly indicated that unpatched Parity nodes comprised 15% of all scanned nodes, implying that 15% of all Ethereum nodes were vulnerable to a potential 51% attack.

Moving on, there are the interesting underpriced DDoS attacks. These are a group of attacks that exploited the improper gas cost of EVM opcodes, specifically EXTCODESIZE and SUICIDE. The SUICIDE opcode was renamed to SELFDESTRUCT by EIP-6; we'll talk about that in the next slide. Let's take EXTCODESIZE first. Prior to the EIP-150 hard fork, the EXTCODESIZE opcode charged only 20 gas for reading a contract's bytecode from disk and deriving its length. As a consequence, if an attacker sent transactions invoking a deployed smart contract that executed many EXTCODESIZE operations, it could cause a two to three times slower block creation rate. There was also a similar attack that exploited the improper gas cost of the EVM's SUICIDE opcode, renamed to SELFDESTRUCT as I just said, together with the empty-account vulnerability in the state trie. The SUICIDE opcode is meant to remove a deployed smart contract.
It then sends the remaining ether to the account designated by the caller. If that target account doesn't exist, a new account is created, even though no ether may actually be transferred. So what can happen? An attacker can create empty accounts, and that is exactly what happened: the attacker created about 19 million new empty accounts through this opcode at very minimal gas consumption, which wasted disk space, increased synchronization and processing time and cost, and amounted to a kind of denial of service.

Moving on, attackers took advantage of this in the underpriced DDoS attacks that happened between the blocks mentioned on your screen. What was the impact? They created millions of dead Ethereum accounts, bloated the state database, and generated tons of transaction traces as well. As for the impact of the underpriced DDoS attacks: there were attacks against the Go Ethereum client, a huge number of cleanup transactions had to be created, with each cleanup transaction producing a large set of traces, and there was the possibility of a hard fork; and even when a hard fork is executed, it does not remove the dead accounts. Some quick fixes for these attacks: scanning an initial set of transactions, measuring the traces from each transaction, and checking the frequency of traces in the transactions.

Moving on, let's talk about the attack vectors on proof of stake. Before covering the attack vectors,
let's do a quick refresher on the proof-of-stake consensus algorithm. Proof of stake is a popular alternative to proof of work. In proof of stake there are validators instead of miners, and a validator is selected to forge a block. The validator selection depends on the amount of cryptocurrency the validator has, that is, the amount of stake, and also on the current complexity of the network. Proof of stake has many advantages over proof of work. The first is obviously security: it provides a better security mechanism against the 51% attack. As we know, in proof of work, owning 51% of the computational power means gaining control of the network, but in proof of stake, owning 51% of the stake would be computationally and financially very expensive for an attacker. That is how it provides better security. It is also energy efficient and provides significant energy savings.

As Ethereum transitions from Ethereum 1.0 to Ethereum 2.0, it is also transitioning from proof of work to proof of stake, using the Casper protocol. Casper, based on proof of stake, brings its own security mechanisms, such as slashing, which cuts a validator's stake as a penalty. If a validator validates a block honestly, it is rewarded; but if the validator is found to be involved in malicious activity, its stake is slashed as a penalty, and it may also lose its privilege to take part in the network consensus. That was the Casper protocol.

Moving on, let's see some attacks against proof of stake; you can see the list of attacks on the screen.
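The stake-weighted selection described above can be sketched as follows. This is a toy illustration, not the actual Casper or beacon-chain algorithm: the validator names and stakes are made up, and the seed stands in for the network's randomness beacon.

```python
import hashlib

# Toy sketch of stake-weighted validator selection: the chance of being
# chosen to forge the next block is proportional to the validator's stake.
# Names and stake amounts are purely illustrative.

validators = {"alice": 32, "bob": 64, "carol": 4}   # stake in ETH (made up)

def select_validator(seed: bytes) -> str:
    total = sum(validators.values())
    # Derive a deterministic pseudo-random point in [0, total) from the seed,
    # so every honest node selects the same validator for the same seed.
    point = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
    for name, stake in validators.items():
        if point < stake:
            return name
        point -= stake
    raise AssertionError("unreachable: point is always below total stake")

print(select_validator(b"slot-1"))
```

Because the choice is derived from a shared seed, every node agrees on the selection, which is exactly why the quality of that seed, the randomness, matters so much later in this talk.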
There is the fake stake attack, a very interesting attack. The fake stake attack can be thought of as a denial of service: an attacker can make nodes devote valuable memory and CPU to a fake chain. We know the longest-chain rule means that any chain can potentially become the accepted version of the ledger. Validating a proof in proof of stake is relatively complex: it requires access to both the block header and the contents of the block. So by tricking the stake evaluation used for validator selection and forcing the node to download data to validate fake blocks, the attacker consumes a node's valuable resources, which can eventually result in a denial of service. That is why the fake stake attack can be thought of as a denial of service on proof of stake.

Moving on, let's look at the Ethereum 2.0 architecture. On your screen you can see a nice picture laying out the roadmap to Serenity. There is the current blockchain, showing proof of work and proof of stake running together, and then a more detailed architecture: a main chain based on proof of work, and a beacon chain based on the Casper FFG proof-of-stake protocol we just discussed, where the random number generation happens. You can see that the beacon chain is cross-linked with the shard chains, and underneath there is the execution engine as well.

Moving on to some Ethereum 2.0 audit findings, starting with the Quantstamp audit. Quantstamp ran an audit and reported some possible DDoS attack vectors, such as a DDoS attack enabled by creating a mapping between public keys and validator IPs, and a DDoS attack on insecure gRPC communication. So what did the Quantstamp audit involve?
Basically, it involved ten engineers who examined the entire codebase over the course of two months. They examined the beacon node logic, the validator client, the slasher logic, and almost everything else. What were the findings? They found a number of vulnerabilities, some of them being insufficient granularity of timestamps, use of a pseudo-random number generator where a secure random number generator was needed, and a second-preimage attack on the Merkle trie.

Now let's talk about the Least Authority audit as well. They audited several subsections; for instance, they audited the block proposal system. What were the findings there? A single secret leader election keeps the proposer selection secret and stops the leak of information to an observer, while still allowing the chosen block proposer a fast way to prove to others that it is in fact the proposer. With that information leak fixed, the block proposer remains as protected as it would be in proof of work, but without the computational overhead.

Talking about the findings on the gossip protocol: gossip protocols generally suffer from a spam problem, because it can be difficult to decide whether a message is legitimate or spam meant to clog the network. This was one of the primary concerns in examining the network layer of Ethereum 2.0. When a node proposes a new finalized block, the block must be sent to the rest of the network, and there is an issue where even a dishonest node is capable of sending an unlimited number of older block messages to the rest of the network with minimal penalty, which allows them to
overwhelm the network and block legitimate messages. There were also findings on slashing messages, related to the Casper slashing mechanism we discussed earlier. There is a small loophole that allowed a node to send an unlimited number of these messages with minimal penalty, causing the same message-blocking effect if enough of them were sent.

Now, moving on to DDoS on Ethereum 2.0, let's take the recent Teku attack scenario. Security engineers did research on the public attacknets and carried out an attack on the Teku client. What was the scenario? Two of the four Teku nodes were targeted by five ordinary machines with a sustained DDoS attack. What was the impact? Initial loss of finality was achieved with two or three machines, but the others joined within a few epochs to ensure that the network could not recover.

What was the implementation? You can see the command on your screen. The command pipes null bytes from /dev/zero into the pv command, which rate-limits the stream to a somewhat arbitrary value, high enough to prevent finality but low enough to stay off IDS and IPS radars so the attack can continue. The data is then piped into the netcat command, which sends it to the nodes under attack. The while loop is there for when the connection is lost or the command dies due to container restarts.

Moving on to the impact of this Teku attack: the effect the denial of service had on the attacknet was a prolonged loss of finality, and manual intervention was required to restore the network to a healthy state once the attack stopped. The nodes under attack used large amounts of memory, were subject to multiple container restarts, had trouble staying connected to peers, and on one node the local clock even ended up about 20 minutes slow.

Moving on to the Teku attack's root cause. Firstly, responding to one byte with multiple bytes is a vector
for various amplification attacks, but that was not the issue that caused the loss of finality. The second issue is that the responses were being written faster than they were read by the attacking peer, and the JVM libp2p was not applying any throttling. Eventually TCP back-pressure kicked in, filled up the OS write buffers, and the responses wound up being queued in user-space memory. This pushed up both on-heap and off-heap memory usage very substantially. CPU usage also spiked significantly, partly due to processing all those multistream messages, but mostly because of the resulting memory pressure and GC activity.

What can a solution look like? There can be a stopgap solution: disconnect the peer immediately when an invalid multistream message, such as a zero-length one, is received. Beyond that, the cost of mounting such a DDoS attack increases significantly as you increase the number of nodes and the diversity of clients, locations, and network connections.

So that was the Teku attack. We have covered consensus algorithms and talked about proof of work and proof of stake; we went through different attacks and attack vectors, including the 51% attack and denial of service; and we have seen the Ethereum 2.0 architecture as well as the security implications of the Casper protocol. Now my partner, Mr. Gokul Alex, will connect all the links between VDFs, randomness, and DDoS, and present them to you. Over to you, sir.

Thank you, Tejasura, for setting the context for researching randomness in Ethereum 2.0. We will see how randomness is crucial to the overall engineering and design of Ethereum 2.0.
If you look at all the current implementations, be it NEAR Protocol or Prysmatic Labs' beacon chain, there is a crucial role for randomness, because all of today's sharding designs rely on some source of randomness to assign validators to the shards. Both the randomness generation and the validator assignment require computation that is not specific to any particular shard. For this computation, existing designs have a separate blockchain that is tasked with performing the operations necessary for the maintenance of the entire network. Besides generating these random numbers and assigning validators to the shards, these operations also include receiving updates from the shards and taking snapshots of them. This is very important. A shard is essentially a database partition, and just as we take snapshots of databases, where snapshots are the read-only versions, the same is done for shards. These operations also process the stakes and slashing in proof-of-stake systems, plan the proximity of shards to their leaders, and rebalance the shards. So sharding, staking, and randomness have to work together. When you look at all these implementations, whether it's Cosmos, Polkadot, the beacon chain in Ethereum, or NEAR, this is the overall architecture.

But we have to understand that we cannot afford just any simple randomness; we need distributed randomness on the blockchain. That is why randomness in blockchain becomes much more complex and challenging. Different blockchains use randomness for different purposes: it may be for time-stamping, for games like lotteries, for proof of replication as in Filecoin, or, in the case of the Ethereum beacon chain, for selecting participants. What happens if a malicious actor can influence such a source of randomness?
They can increase their chance of being selected and possibly compromise the security of the protocol. As we said, distributed randomness is a crucial building block for many kinds of applications on blockchain.

What are the essential properties of randomness required in a blockchain setup? There are three. First, it needs to be unbiasable: we cannot afford a biased source of entropy; in other words, no participant should be able to influence the outcome of the random generator in any way. Second, it needs to be unpredictable: no participant should be able to predict what number will be generated. Third, the protocol needs to tolerate some percentage of actors that go offline or try to intentionally stall the protocol, in the sense of conventional crash fault tolerance and Byzantine fault tolerance.

Now we will look at some of the existing approaches. One of the first is RANDAO, which is an acronym for "random DAO." The general idea is that the participants in the network first privately choose a pseudo-random number and submit a commitment to that privately chosen number; this commitment could be implemented as a polynomial commitment or a Pedersen commitment. Then all participants agree on some set of commitments using a consensus algorithm, all reveal their chosen numbers, reach consensus on the revealed numbers, and take the XOR of the revealed numbers as the output of the protocol. This is a pretty straightforward approach, but it has some limitations. It is fairly unpredictable, and it has liveness; we know liveness and safety are the two required properties of any consensus mechanism.
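The commit-reveal-XOR flow just described can be sketched in a few lines. This is a minimal illustration only: it uses plain hash commitments rather than the Pedersen or polynomial commitments mentioned above, and it skips the consensus steps entirely.

```python
import hashlib
import secrets
from functools import reduce

# Minimal sketch of the RANDAO-style commit-reveal scheme.
# Hash commitments stand in for Pedersen/polynomial commitments.

def commit(value: bytes) -> bytes:
    return hashlib.sha256(value).digest()

# 1. Each participant privately chooses a random number and publishes
#    only its commitment.
participants = [secrets.token_bytes(32) for _ in range(4)]
commitments = [commit(v) for v in participants]

# 2. Reveal phase: everyone publishes their value; peers check each value
#    against the earlier commitment before accepting it.
for value, c in zip(participants, commitments):
    assert commit(value) == c, "reveal does not match commitment"

# 3. The protocol output is the XOR of all revealed values.
output = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), participants)
print(output.hex())
```

Note that step 2 is exactly where the weakness discussed next lives: a participant who reveals last can see everyone else's values before deciding whether to reveal at all.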
This provides a great degree of liveness. However, a malicious actor can observe the network and wait to reveal last, choosing to reveal or not reveal their number based on the XOR of the numbers they have already observed. This allows a single malicious actor one bit of influence on the output, and a malicious actor controlling multiple participants has as many bits of influence as the number of participants they control.

So there is a proposal, with strong support in this direction, to blend RANDAO with a VDF. We will talk about VDFs in detail; first, let us understand the approach. To make RANDAO unbiasable, one approach is to make the output not just an XOR but something that takes more time to execute than the time allocated to the reveal phase. If the computation of the final output takes longer than the reveal phase, the malicious actor cannot predict the effect of revealing or not revealing their number. We are talking about a computation that is sequential, one that cannot be parallelized by a powerful malicious adversary. Such a function, one that takes a long time to compute, is fast to verify, and has a unique output for each input, is called a verifiable delay function, and its design is extremely complex, which we will discuss in detail.

What is Ethereum's perspective on RANDAO plus VDF? You can find numerous conversations on this on the Ethereum Research forum, led by Justin Drake and others. Ethereum presents a plan to use RANDAO with a VDF as its randomness beacon. Besides the fact that this approach is unpredictable and unbiasable,
it has an extra advantage: it retains liveness even if only two participants are live. This is a very interesting property; it requires very few live participants compared to any other approach.

For the family of VDFs linked above, a special ASIC can be about 100 times faster than conventional hardware. That is why there is currently a strong effort and investment in Ethereum to build ASIC hardware for VDFs. If the reveal phase lasts only 10 seconds, for example, the VDF computed on such an ASIC must take longer than 100 seconds to have a 10x safety margin, and then the same VDF computed on conventional hardware would need to take 100 x 100 seconds, nearly three hours. Hence there comes a dependency on hardware; that is the challenge, or the trade-off, of this approach.

Now, let us look at an earlier approach, which is fairly popular. This approach was pioneered by DFINITY, in fact more than a year or two back. DFINITY uses BLS signatures. What is a BLS signature? BLS stands for Boneh-Lynn-Shacham; one of the authors of this cryptographic scheme, Ben Lynn, is actually a member of DFINITY. So what is this BLS signature?
It is a scheme allowing a user to verify that a signer is authentic. The scheme uses a bilinear pairing for verification; we know how powerful and formidable bilinear maps are in elliptic-curve cryptography. So it is a signature scheme built using bilinear pairings. BLS signatures are a construction that allows multiple parties to create a single signature on a message, which is often used to save space and bandwidth by not requiring multiple signatures to be sent around. This is a very interesting property, and hence BLS signatures are becoming very popular in many cryptocurrencies and blockchains. A common usage of BLS signatures is signing blocks in BFT protocols.

Let's look at threshold signatures in practice. Say there are 100 participants who create blocks, and a block is considered final if two-thirds of them, 67, sign it. They can all submit their parts of the BLS signature and then use some consensus algorithm to agree on 67 of them and aggregate them into a single BLS signature. Any 67 parts can be used to create an accumulated, or aggregated, signature; however, the resulting signature will normally differ depending on which 67 signatures are aggregated. It turns out that if the private keys the participants use are generated in a particular fashion, then no matter which 67 (or fewer) signatures are aggregated, the resulting multi-signature will be the same. This can be used as a source of randomness. The participants first agree on some message that they will sign; it could be an output from a RANDAO round, or it could be the hash of a block. It really doesn't matter, as long as it is different every time and agreed upon. So here we are blending both randomness and agreement to create a multi-signature.

But this has some limitations. Even though this approach is unbiasable and unpredictable, it is live only as long as two-thirds of the participants are online, though this can be configured
to any threshold. While one-third of the participants being offline or misbehaving can stall the algorithm, it takes at least two-thirds of the participants cooperating to influence the output. Still, this is a big overhead when you consider a live blockchain system. The private keys in this scheme need to be generated in a particular coordinated fashion; this process is known as distributed key generation, and it is an ongoing area of research.

There is another interesting approach, called RandShare. RandShare is an unbiasable and unpredictable protocol that can tolerate up to one-third of the actors being malicious. It is relatively slow, and the paper linked also describes two ways to speed it up, called RandHound and RandHerd; but unlike RandShare, RandHound and RandHerd are relatively complex. When you look at the details in the paper, you can understand the general problems of RandShare: besides the large number of message exchanges, it requires a lot of back-and-forth communication between the participants. In fact, O(n^3) messages are required, so it is a kind of gossip. Also, while one-third is the meaningful threshold for liveness, it is a low bar for the ability to influence the output. So the challenges are, first, the overhead from the number of communications required, and second, the number of live participants required. The benefit from influencing the output can significantly outweigh the benefit of stalling the randomness. Am I audible now? Can you confirm?
Yes sir, you are audible.

Okay, so let us continue. The benefit from influencing the output: this is one trade-off to consider in randomness design, because an adversary can either stall the randomness or influence the output. This is what we have to watch for in the RandShare approach: if an actor controls more than one-third of the participants in RandShare and uses this to influence the output, it leaves no trace. This is a challenge: the malicious actor can do it without ever being revealed. Stalling a consensus is always visible; if somebody stalls a consensus, creates a fork, or disrupts the network, that is visible. But influencing the output leaves no trace. Those are the two aspects of this challenge. And there are situations in which someone controls one-third of the hash power, has a very powerful machine, an ASIC miner, or some kind of very powerful staking pool, so we cannot say that this is an unimaginable or impossible situation. Hence the RandShare approach carries some doubt or uncertainty.

So we should look at the NEAR Protocol approach in this scenario. What has NEAR Protocol done? NEAR Protocol was one of the first to come up with a sharding implementation as a candidate design for Ethereum 2.0, and now they are building it as a separate protocol. So what happens in NEAR Protocol?
They have come up with an approach based on erasure codes. Let us understand what this erasure code is. Each participant comes up with their own part of the output and splits it using an erasure code, generating 100 shares such that any 67 of them are enough to reconstruct the value. They then assign each of the 100 shares to one of the participants and encrypt it with the public key of that participant, and share all the encoded shares. The participants use some consensus, which could be any BFT-based protocol such as Tendermint, to agree on one such encoded set from each of at least 67 participants. Once consensus is reached, each participant takes, from each of the 67 sets published this way, the share encrypted with their own public key, decrypts it, and publishes all such decrypted shares at once. Once at least 67 participants have done this, all the agreed-upon sets can be fully decoded and reconstructed, and the final number is obtained as an XOR of the initial parts the participants came up with in the first step. The intuition for why this protocol is unbiasable and unpredictable is similar to the RandShare and threshold-signature approaches: the final output is decided once consensus is reached, but it is not known to anyone until two-thirds of the participants decrypt the shares encrypted with their own public keys.

Looking back at all these approaches: on one side we have verifiable delay functions, a very solid cryptographic technique, and threshold signatures, also a cryptographic scheme; on the other side, RANDAO, RandShare, and the erasure code scheme from NEAR Protocol are all consensus-based approaches.

Now let us look at verifiable delay functions in detail. What is the anatomy of a VDF?
A VDF consists of a triple of algorithms: setup, evaluate, and verify. Setup takes two parameters, the security parameter lambda and the delay parameter t, and generates a public parameter pp, which fixes the domain and range of the VDF; it can also include additional information necessary to compute or verify the function. The second phase is evaluation, where you take the public parameter from the previous phase and an input x from the domain, and generate an output value y in the range along with a short proof pi. Finally, in the verification phase, you use the combination of the public parameter, x, y, and pi to verify efficiently that y is the correct output on x. Crucially, for every input x there should be a unique output y.

When you look at the historic inspirations for VDFs, we should first look at the early work of Cynthia Dwork and Moni Naor in the early 1990s, who suggested using square roots over finite fields, finite-field-based puzzles, as functions that take a predetermined time to compute and are very straightforward to verify. Incidentally, the same Cynthia Dwork and Moni Naor inspired Satoshi Nakamoto in his pioneering work on peer-to-peer cash in Bitcoin. However, their work was considered impractical at the time, because one has to use rather large finite fields to make the algorithms useful, and the libraries for handling multiple-precision arithmetic were, at the time of the suggestion, orders of magnitude slower than the current ones. We can see that much of the promise of elliptic-curve cryptography, which also uses finite fields, came only in the early 2000s.

Now that we have seen how to construct a VDF, what are its essential properties?
Firstly, it has to be sequential: honest parties can compute y and pi, given the public parameter and x, in t sequential steps, while no parallel adversary with a polynomial number of processors can distinguish the output y from random in significantly fewer steps. So this is very important: the output should be indistinguishable from any other randomness. Even if the adversary has a polynomial number of processors, so that they can execute all of them in very efficient polynomial time, they should not be able to distinguish this output of the VDF from other randomness. And it should be efficiently verifiable: the verify operation should be as fast as possible for honest parties to compute, so the verification time should be as small as a polylogarithm of the time taken to generate or construct the output. And it should be unique: as we said before, for every input x it should be difficult to find a different y for which the same combination of public parameters, x, y and pi still verifies. Now there are some additional properties of VDFs that are very important for actual implementations. The first one is that it should be decodable: a VDF is decodable if there exists a decoding algorithm such that the combination of Evaluate and Decode forms a lossless encoding scheme. This is very much required for practical use of a VDF. And it should be... "Hi sir, sorry for interrupting." "Yes sir, can we take like 10 more minutes? Because it's already just :30, so, yep. Yeah, I'll move fast."
So the second property is one which will help us compute the output with zero-knowledge proofs or recursive proofs, and the proof size should be small. The main innovations, if you look at the main VDF constructions, are these. The first construction, in the original paper by Dan Boneh, Ben Fisch and others in 2018, uses injective rational maps; however, this is a very complex implementation, and hence even the authors called it a weak VDF. Later, two other approaches were proposed, one by Pietrzak and the other by Wesolowski, independently arriving at extremely similar but more practical constructions using repeated squaring in groups of unknown order. These constructions are based on modular exponentiation arithmetic, where Pietrzak and Wesolowski suggested iteratively computing squarings in RSA groups with a large modulus. Regarding innovations in VDFs: one very crucial difference of a VDF compared with other primitives is that a VDF has a setup phase. In the setup phase we set the public parameters for configuring the VDF, so any node that needs to solve the VDF will use the public parameters. Some VDFs also allow generating a proof, so that you can use them in conjunction with computational proofs of integrity, and we can use them in a multi-party computation setup. Now let us see how we can use VDFs for DDoS prevention. One of the first approaches for using a VDF for DDoS prevention came from IOTA. IOTA is an IoT ledger which uses the Tangle consensus, which is not strictly a blockchain, but they have come up with a lot of innovations in cryptography in the past; for example, they have adopted Winternitz one-time signatures.
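To make the repeated-squaring idea concrete, here is a minimal sketch in the style of Wesolowski's scheme: the evaluator computes y = x^(2^t) mod N plus a short proof pi, and the verifier checks pi with only a couple of exponentiations instead of t squarings. The modulus, the challenge-prime range and the hashing details are toy assumptions for the demo, not a production parameterization.

```python
import hashlib

# Sketch of Wesolowski-style proof/verification for y = x^(2^t) mod N.
# N is a toy modulus with known factors here; real deployments need a
# modulus (or class group) whose order nobody knows.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def fiat_shamir_prime(x, y):
    """Derive a challenge prime l from (x, y) non-interactively."""
    h = int.from_bytes(hashlib.sha256(f"{x}|{y}".encode()).digest(), "big")
    l = (h % 100000) + 2             # small challenge range: demo only
    while not is_prime(l):
        l += 1
    return l

def prove(N, x, t):
    # Done with fast exponentiation for brevity; an honest evaluator
    # must actually perform t sequential squarings.
    y = pow(x, 1 << t, N)
    l = fiat_shamir_prime(x, y)
    pi = pow(x, (1 << t) // l, N)    # pi = x^floor(2^t / l) mod N
    return y, pi

def verify(N, x, t, y, pi):
    l = fiat_shamir_prime(x, y)
    r = pow(2, t, l)                 # r = 2^t mod l, cheap to compute
    # Since 2^t = l*floor(2^t/l) + r, we need pi^l * x^r == y (mod N).
    return pow(pi, l, N) * pow(x, r, N) % N == y

N = 1000003 * 1000033
y, pi = prove(N, x=5, t=64)
assert verify(N, 5, 64, y, pi)
```

Verification costs two small exponentiations regardless of t, which is exactly the "fast to verify, slow to compute" asymmetry a VDF needs.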
So what IOTA has implemented is a DDoS prevention mechanism. IOTA is actually an IoT ledger with heterogeneous IoT devices, so they proposed to use a VDF as a DDoS prevention mechanism in which nodes are required to compute exactly the prime modular squaring for an input message, calibrating the VDF evaluation on different hardware and optimizing the time needed to verify the correctness of the puzzle through multi-exponentiation techniques. That is what they have done. They used different kinds of hardware, like laptops, FPGAs and IoT devices such as Raspberry Pis, and then they optimized the timing: they continuously calibrated what the time would be for constructing, evaluating and verifying the VDF. For their purpose, IOTA doesn't have a system like proof of stake; their concern was what to use instead of proof of work, and that is where the VDF comes in. So how did they use it? In the evaluation phase, every node that decides to generate a transaction needs to solve a VDF whose input is the hash of the transaction issued by that same node. The node also generates a proof to facilitate the verification task, which is gossiped along with the transaction; so they share the proof along with the transaction. Verification happens when a new transaction is received: the receiving node verifies whether the VDF was solved correctly by the node which sent the transaction, and if yes, it forwards the transaction. IOTA has a kind of tipping, or forwarding, mechanism in which every node that receives an input has to send it to two other nodes; that is how the Tangle works. Let us look at how the VDF integration is implemented in IOTA.
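The issue-gossip-verify flow described above can be sketched as follows. This is a hypothetical illustration of the flow, not IOTA's actual code: the modulus, difficulty and function names are my assumptions, and verification naively recomputes the VDF rather than checking a short proof.

```python
import hashlib

# Hypothetical sketch of the IOTA-style flow: the issuer solves a VDF
# on the hash of its own transaction, gossips the output with the
# transaction, and receivers verify before forwarding into the Tangle.

N = 1000003 * 1000033   # toy modulus; IOTA proposed an RSA prime modulus
T = 500                 # difficulty: number of sequential squarings

def vdf(x):
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def issue_transaction(payload: bytes):
    # The VDF input is the hash of the transaction being issued.
    x = int.from_bytes(hashlib.sha256(payload).digest(), "big")
    return {"payload": payload, "vdf_output": vdf(x)}

def on_receive(tx):
    x = int.from_bytes(hashlib.sha256(tx["payload"]).digest(), "big")
    if vdf(x) == tx["vdf_output"]:
        return "forward"   # gossip onward (two tips, as the Tangle does)
    return "discard"

tx = issue_transaction(b"transfer 1 MIOTA")
assert on_receive(tx) == "forward"
tx["vdf_output"] += 1      # tamper with the proof of work done
assert on_receive(tx) == "discard"
```

Because the VDF is inherently sequential, a spammer cannot parallelize away the per-transaction delay, which is the rate-limiting property IOTA wants.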
There are evaluators, and the evaluators evaluate and generate the proof from the input to the VDF; they use both the hash and the VDF. Then, when they generate a transaction, the transaction will also carry the VDF output, and the verifier will verify whether the VDF is correct, which they can do fast. If it is valid, they broadcast it to their neighbours; if it is invalid, they discard it. We can configure the difficulty and the modulus in this IOTA VDF implementation, which is an important property. The VDF difficulty can be configured, and it determines the number of sequential operations needed to solve and update the VDF. The second parameter is the prime modulus, the RSA modulus, which is used for the RSA-based groups of unknown order. And third, they also use a cryptographic hash function. Now let us quickly talk about our approach. Our approach is based on a closer view of what is required for Ethereum 2.0 and what the current capabilities in 2.0 are, like sharding, stake and random oracles, and also the presence of different zero-knowledge proof implementations. One key difference we would like to propose is a VDF based on supersingular isogeny cryptography, not based on the RSA-style groups of unknown order or the adaptive root assumption approach of the Pietrzak and Wesolowski schemes. We also wanted to bring in an approach known as single secret leader selection, and we want to use this VDF for selecting a single secret leader from each shard chain, and we also want to use them in conjunction with random oracles. And we want to authenticate the nodes participating in the stake mechanism using VDF-based delay authentication, which was also proposed recently. And then we want a randomness beacon chain powered by RandShare and RandHound. So this is how we visualize our approach.
We use an isogeny-based VDF generator, and these VDF generators will be used to create random oracles. These random oracles will feed the shard chains together with RandHound and RandShare, and that is how we decide which shard to move to in proximity. Then the shards will be part of the beacon chain, where we will have single secret leader selection and delay authentication. And then, finally, it will be added to the main chain based on... "Sorry again for interrupting; Adisa is saying we should conclude our talk." So we will just quickly wrap up and talk about the isogenies. Yeah, I'll quickly show what we have already done: a prototype of this random oracle using the VDF from StarkWare, and you can see the code in our repository. We are also highlighting the single secret leader election mechanism; you can find the research paper substantiating this, again from Dan Boneh, and the node topology. Let me hand over to Tejasura to talk about the beautiful aspects of isogeny-based VDFs quickly. "So, hey everyone, I will just quickly wrap up our presentation; we are in the finale, and I will quickly talk about the isogeny-based VDF." Now, what is an isogeny? It builds on the ideas of elliptic curve cryptography and the Diffie-Hellman key exchange. In isogeny cryptography there is a family of elliptic curves. Think of two elliptic curves, E1 and E2: we can define a mapping, a function, taking, say, a point P on curve E1 to a point Q on curve E2, and this mapping is called an isogeny. Our secret here is the isogeny, and you can think of the elliptic curve as the public key; we can do a shared secret exchange by mixing our secret key with the other side's elliptic curve. Moving on.
So we'll set up We'll take our prime number n we'll take our super singular curve e and performing a random non backtracking Work of land t have as outcome the isogenic phi and its dual as phi dash I call it as phi dash now choosing a point p Will compute the isogenic of the point p and they will be output as isogenic dual The elliptic of e and e dash the point p and the isogenic of point p next This is how isogenic look like mapping between two elliptic curves next And the final slide The how the evaluation evaluation and verification works in an isogenic based video. So for the is evaluation If we are receiving a random point q We have to compute the isogenic dual of That point q and for the verification part We will Like refer to our previous knowledge of the slides, which were field pairings So how the verification can be done? It's just simply a field pairing of the point p of the point p with the Isogenic dual of the point q and that should be equal to the field pairing of isogenic of the point p and At the point q so having said that we are done with our presentation and I will hand over to goko sir to say the final words It's a great honor for us to present our compilation of thoughts on applying powerful properties of randomness unbiased randomness and entropy for the overall transformation of the Consensus and cryptography and to make ethereum 2.0 Much more stronger and make sure that every chance of dDoS attack can be prevented from the Sharding shard to the main chain itself overall So this is a randomness engineering that we propose We have done our first level of prototype We would like to move to the next level by integrating isogenic vdf from The sids library and ss isogenic library And then we would also want to blend single secret leader selection And we would like to do it on the randavo We have already participated in the starkware vdf hackathon. We have used a Vido vdf from starkware and build random oracles and update on randavo now. 
"We are excited to go forward. Thank you so much again to the Blockchain Village at DEF CON for giving us this opportunity. It's an honor, a lifetime opportunity for us. Thank you so much once again; we are looking forward to your questions and comments."