Okay, hello everyone. I'll try to be fast. Hopefully everyone can hear me and see my screen; otherwise, manifest something. Okay, cool. So I'll start by giving a quick intro to the project; you've probably heard it if you were at the last demo. So, Project Pikachu: the aim of this project is to checkpoint the state of the Filecoin blockchain into the Bitcoin blockchain. We want to do this periodically, and the motivation is that proof of work gives security guarantees that blockchains such as Filecoin, or proof-of-stake blockchains in general, do not give. There are two main components in the protocol that does this checkpointing. The first one is distributed key generation: all of the Filecoin miners jointly create an aggregated key, and this is the key they will use to sign checkpoints onto the Bitcoin blockchain. The second step, once this key is created, is signing. They use threshold signatures, so only a threshold of them needs to be honest for a checkpoint to happen, and then they keep signing and posting these checkpoints. The idea is that, using the data inside a checkpoint, we can go to some storage system, for example IPFS or Filecoin, and retrieve information about the Filecoin blockchain. So that's the very, very high-level description of the protocol. Next, for those of you who were at the last demo: because we are still in proof-of-concept and testing mode, we were using MinIO instead of a decentralized storage provider, mostly for testing purposes. The good news is that for this demo we have removed that, and instead we are using a Filecoin-backed KVS.
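The "only a threshold needs to be honest" property described above comes from secret sharing. Below is a minimal Python sketch of the share-and-reconstruct idea, using toy Shamir secret sharing over a small prime field; the real protocol uses threshold Schnorr signatures over an elliptic curve, and all the parameters here (the prime, the threshold, the participant count) are illustrative only:

```python
import random

# Toy Shamir secret sharing: any t of n shares recover the secret,
# fewer than t reveal nothing. Pikachu's threshold signatures rely
# on the same principle; this is only the share/reconstruct sketch.
P = 2087  # small prime modulus, illustrative only

def make_shares(secret, t, n):
    """Hide `secret` in a random degree-(t-1) polynomial; share i is f(i)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0) = secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=1234, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares yield 1234
print(reconstruct(shares[2:5]))  # a different 3 shares, same secret
```

Any subset of at least the threshold size reconstructs the same value, which is why a checkpoint can be produced even if some miners are offline or malicious.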
And when I say Filecoin: again, for testing, we are not using the actual Filecoin blockchain. We are using Eudico, which is a fork of Lotus and basically a testing ground for the Filecoin blockchain. In addition, in the last demo we were using a distributed key generation that came from the FROST paper; since then we have upgraded our DKG, and now we can tolerate failures. In the previous demo, if there was one malicious participant, the DKG would abort and not complete. Now the DKG completes even with malicious participants, and I'm going to show you that in today's demo. So let's get to it. I'm going to start by launching everything. Again, we are using Eudico as the playground for Filecoin. The first thing I'm going to do is add the initial miners. As in the previous demo, these miners use fake power; we don't use real storage, we are just simulating miners. The first step of the protocol is the distributed key generation, and once they have their keys, they can checkpoint, which they are going to do every 25 blocks. Here we have one checkpoint that has just happened, and we will soon have another one, so let's wait a few seconds. Okay, there's another checkpoint. Now we are going to add another node, and as I hinted, this node is malicious. First we let this node sync with the rest of the peers, and then, as before, we add the fake power of our malicious miner. And now you see the complaint here.
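The complaint mechanism can be sketched with Feldman-style verifiable secret sharing: the dealer publishes commitments to its polynomial coefficients, and every participant checks its private share against them; a failed check becomes a broadcast complaint, after which the honest participants continue without the offender. This is a toy illustration in a tiny group, not the actual DKG used in the demo, and all the parameters are made up:

```python
import random

# Tiny illustrative group: G generates a subgroup of prime order Q mod P.
P, Q, G = 2879, 1439, 4

def deal(secret, t, n):
    """Dealer commits to polynomial coefficients and hands out shares."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(t - 1)]
    commitments = [pow(G, c, P) for c in coeffs]   # broadcast publicly
    shares = {j: sum(c * pow(j, i, Q) for i, c in enumerate(coeffs)) % Q
              for j in range(1, n + 1)}            # sent privately, one per party
    return commitments, shares

def share_checks_out(j, share, commitments):
    """Party j verifies its share against the public commitments;
    a failed check becomes a complaint against the dealer."""
    lhs = pow(G, share, P)
    rhs = 1
    for i, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(j, i, Q), P) % P
    return lhs == rhs

commitments, shares = deal(secret=42, t=3, n=4)
print(share_checks_out(1, shares[1], commitments))            # True: honest dealer
print(share_checks_out(2, (shares[2] + 1) % Q, commitments))  # False: bad share, complaint
```

The point of the check is that honest parties can tell exactly which dealer misbehaved, so the protocol can exclude that party and finish instead of aborting.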
What this shows is that the miners have spotted that Don was malicious. They have sent each other complaints: "wait, this guy is doing something wrong, we don't want to keep going with him, we are going to finish the protocol on our own," and that's exactly what they do. Here you see that they just continue signing; however, Don will not be included in the rest of the checkpoints. Let's wait for another checkpoint. Here we go, and we see that Don didn't do anything, because he's not part of the new set of miners. Okay, so now we're going to go and check the checkpoints on the Bitcoin testnet network, so we can see what's happening. Okay, let me copy this: that's the transaction ID of the checkpoint that Don retrieved when he joined the protocol. I'm going to go to the Bitcoin testnet explorer, put this transaction ID in, and we see the checkpoint transaction. From here, we can just follow the chain of transactions and see every checkpoint that has happened; they are linked this way. That's pretty cool: we can follow the state of the Filecoin chain using Bitcoin. We see that all the transactions are unconfirmed because we've just made them, and here we arrive at the end. Now, let me look more closely at the transaction. Here we see that we have some data, the checkpoint, and this data can be used to retrieve information about the Filecoin chain. And you can see here that, unlike last time where we were using MinIO, now we are using a KVS that is integrated with Filecoin, so when Don joined, he could get the data from the KVS using this. So that's it for me.
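The "follow the chain of transactions" step shown in the explorer can be sketched as follows: each checkpoint transaction spends an output of the previous one, so walking the spent inputs backwards recovers every checkpoint in order. The transaction IDs and payloads below are invented for illustration:

```python
# Invented txids and payloads, standing in for the real Bitcoin testnet
# transactions shown in the demo. Each checkpoint spends the previous one.
transactions = {
    "tx3": {"spends": "tx2", "checkpoint_data": "filecoin-head-75"},
    "tx2": {"spends": "tx1", "checkpoint_data": "filecoin-head-50"},
    "tx1": {"spends": None,  "checkpoint_data": "filecoin-head-25"},
}

def walk_checkpoints(tip):
    """Follow spent inputs from the newest checkpoint back to the first."""
    chain, txid = [], tip
    while txid is not None:
        chain.append(transactions[txid]["checkpoint_data"])
        txid = transactions[txid]["spends"]
    return list(reversed(chain))  # oldest checkpoint first

print(walk_checkpoints("tx3"))
# ['filecoin-head-25', 'filecoin-head-50', 'filecoin-head-75']
```

This is why a light client only needs the newest checkpoint transaction to audit the whole history: every earlier checkpoint is reachable by walking the inputs.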
I don't think we have time for questions, but feel free to ask them on Slack in our ConsensusLab channel, or in the chat here on this call. Let me share my screen. All right, everyone can hear me, right? I guess you can see my screen. Let's try and go fast. So today what I want to show you is what we call Eudico Garden, which in the end is just a bunch of scripts. We have Eudico, what Sarah just showed, where we've been working with it locally and implementing hierarchical consensus in it; but as we move hierarchical consensus toward production, we needed to try things in a real environment. That's how Eudico Garden came up. What we have here is a bunch of scripts that, with Terraform (the requirements are Terraform and the AWS CLI, so it's quite limited right now), let you deploy an Eudico network with Filecoin consensus as the root. In the repo you'll see all of the scripts that will help you deploy your Eudico network and your Eudico Garden, in case you want to play with it. There's a deploy script that will spawn a network with a number of genesis miners in the rootnet; then you can add new nodes. These are full nodes, not miners; if you want to mine, you can onboard new power, and we will soon add a script for that too. The reason we wanted this is that up till now, and actually in the demo Sarah has just shown, we were using dummy consensus algorithms in the rootnet; and as we move toward deploying this on Filecoin, again per my introduction, we wanted to know how all of our processes and our whole hierarchical consensus framework are affected by having a consensus as slow as Filecoin's in the rootnet.
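The hierarchical structure this builds on can be sketched as a naming scheme: every subnet lives under its parent, starting from the Filecoin rootnet. The subnet-ID syntax below is a simplified stand-in; real Eudico subnet IDs may look different:

```python
# Simplified, hypothetical subnet-ID scheme for hierarchical consensus.
# Real Eudico IDs may differ; this only illustrates the parent/child shape.
ROOT = "/root"

def child(parent_id: str, name: str) -> str:
    """ID of a child subnet spawned under a parent subnet."""
    return f"{parent_id}/{name}"

def parent(subnet_id: str) -> str:
    """Parent of a subnet (the rootnet is treated as its own parent)."""
    return subnet_id.rsplit("/", 1)[0] or ROOT

tendermint = child(ROOT, "tendermint")  # subnet directly under the rootnet
pow_sub = child(tendermint, "pow")      # a sub-subnet, as spawned in the demo
print(pow_sub)           # /root/tendermint/pow
print(parent(pow_sub))   # /root/tendermint
```

The depth of the path is what matters operationally: messages to a deep subnet only cross the slow rootnet once, which is why the demo spawns fast child subnets under slower parents.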
So we've been running a network; actually, I think I started this one this morning. What I want to show you now is how we have this network running with Filecoin's expected consensus in the rootnet, and how we can have different subnets. We are going to spawn a new subnet with Tendermint, which is also a new thing that Dennis recently implemented: support for Tendermint consensus in subnets. The first thing I'm going to do is start, let me see if it works... yeah, I'm good: I just started a Tendermint cluster that I'm going to use for consensus. I have this session attached, and you'll see here that I already have a bunch of subnets, because we've been playing with it all day, each with different state and each with a different FIL circulating supply in their subnet. What we're going to do is add a new subnet that we'll call, for instance, "tendermint", with Tendermint consensus (the Tendermint consensus is number two), and the parent we want is the root, so the Filecoin-consensus chain. As you'll see, this is a bit slow, because what we're doing here is deploying an actor that will hold all of the logic and governing policies for the subnet, and in order to check the right ID and that the actor has been deployed, we wait for at least five epochs of finality; so until we see five epochs from the Filecoin consensus, this process won't change. But in the meantime, I can start interacting with one of the subnets that I have over here. So, let me see: this is a proof-of-work subnet that I deployed earlier, and what we're going to do is spawn a subnet inside this subnet, to see how fast it is once you use a faster consensus, and to make the case for why we need faster consensus once we have our retarget computation. We're going to do the same thing: in this case we're going to add proof-of-work consensus, but instead of using the root chain as the parent, we're going to use this t0110 subnet. You'll see that while on the left we are still waiting, on the right it should be spawned in no time. So you see that we have deployed the actor that is going to govern the subnet, and now we're going to join this subnet so that we can start mining in it; this should also be quite fast. Okay, now we have the actor that will govern the Tendermint subnet from the rootnet, we have it over here, and we're going to do the same: first join it, and then start mining, so that you see a subnet mining with Tendermint. All right, so here on the right (sorry, I'm driving you crazy, but this is going to take a lot of time, so this way I can parallelize the demo), here on the right you see that we created the new subnet. So now let's list the subnets: here you see the list of subnets from the root, right? But if instead we list the subnets from the child subnet under which we created it, you see that we created this new subnet and that we have some state here. We could even send some funds to this subnet and interact with it, even though it's a sub-subnet. We can also check our checkpoints, to show you that this has been running for a while: we are committing them periodically. I need the command on the left to finish in order to show you this, but I wanted to show you the list of checkpoints that we have committed so far for this network. We could also check the list of checkpoints for any other network, though I'm probably not syncing with them. All right, so now we added the new subnet and we're going to start mining some
Tendermint blocks. So we started, and you see here the TM consensus logs: all of the mining logs from my subnet. Finally, before I leave, I want to show you how I send some FIL to this Tendermint subnet. Again, it's slow because we're going through Filecoin consensus; it would be way faster with some other consensus, and that's why we create child subnets from the subnets we already have, so that we can go faster. So, our subnet was called seven, and here, eventually, you'll see the 10 FIL that I just sent from the root chain to the subnet. But in order to leave some room for the rest of the demos, I'm going to stop here. If someone is interested in using Eudico Garden or in having access to this environment that we currently have running, feel free to drop me a message and I can give you access to one of these nodes. Thank you very much. Hello everyone, I think it's my turn now; thanks for the previous demos. For those who don't know me yet (I think that's quite a few of you), I'm Matei, I'm from ConsensusLab, and I'm working on a consensus algorithm that is fast and scalable, for deployment in the subnets that Alfonso was just showing. Let me share my screen; I hope you can see everything. I will be talking about Mir BFT, which is a scalable consensus implementation for everyone, not just for the subnets, hopefully. Since this is a project most of you haven't seen yet, I will first spend a few minutes introducing it: what it is, how it works, and how it can be used; then I'll show a little demo of how it can actually be used for an application. Mir BFT is a framework for implementing distributed protocols, with a focus on consensus protocols, but ideally it should be able to implement any kind of distributed protocol, or at least a wide variety of consensus protocols. It is available on GitHub, and it is part of ConsensusLab's project on scalable consensus; you can look at more details about the project via the link. Actually, I will post the link to this presentation in the chat (when I find the chat window... here it is), so you can open the presentation and click on the links. Just a little heads-up: the name "Mir BFT" and the GitHub location might be updated in the very near future, so stay tuned for that and don't focus on the naming for now. All right, so how does the framework work? A distributed protocol always has some nodes that interact: they send each other messages and collaborate to perform some common task. So the basic abstraction is the node, and every machine running the protocol instantiates one node like this. The implementation is as modular as possible: the node basically just provides an internal mechanism for its different modules to communicate with each other and perform their tasks. There is an application module; there is a module that actually contains the protocol logic; there is a module that takes care of network communication, i.e., sending actual messages on the network; there is a module that stores the payloads of the requests being agreed upon in the consensus protocol; and there are some other modules that aren't really important for this explanation. Once you instantiate a node, there are three functions you can call on it. One is Run, which starts all the machinery and processing necessary for the node to function. There is the SubmitRequest function: for a consensus protocol, when a client wants to submit a request for ordering, they
need to call the SubmitRequest function, which inserts the request into the node. And the Status function is just for debugging purposes; we don't need to know too much about it now. The node itself basically implements a slightly fancier event loop: it takes all the events produced by the modules, stores them in a buffer, then processes them and distributes those events to the modules where they should go. Each module then processes whatever events it needs to process, potentially creating more events, and so on. So this is the very high-level architecture. Now, how do we use it? This is an excerpt from the code showing how Mir BFT can actually be used to implement a distributed application, offloading as much as possible from the programmer, so that the programmer can just implement the application or protocol they need without worrying about much more. So let me show you a few lines of the code. If we want to implement a simple chat application, where everybody running a node participates in a group where they can exchange messages, we need to implement the logic of the chat application, and it's a very simple one. We have a chat application here, and the only state it has is an array of messages, totally ordered, from all participants; it also needs a reference to the request store, which is the module through which it can access the payloads of the messages being sent around. So if you want to build a distributed application with Mir BFT, you need to create an object that implements an interface consisting of only three functions. Apply receives a batch of requests, and whatever the requests are, they just get applied to the state. In this concrete case, we cycle through all the requests in the batch, and for each one we create a chat message, print it as "client so-and-so sent message so-and-so" (the message is just the request data), and append it to the application's list of messages. Then, in order to be able to restart and catch up with the state, the application needs to be able to create a snapshot, which simply serializes all its state into an array of bytes, and it needs to be able to restore its state from such an array of bytes; that's not that important for now. All right, so how do we actually assemble it? As we saw here, a node has several modules, and this is exactly what it looks like in the code. First we create some modules, like the networking module; the Mir library has sub-packages that provide implementations of those modules, so we have a gRPC-based network transport module, and for the request store we just use a volatile request store, also provided by the library itself. We also need to tell the node which distributed protocol logic it should execute. In this case we use the only protocol implemented so far; it's not even fully implemented yet, it's quite stubby, but it can already be used for the demo. It's ISS, a total-order broadcast protocol, i.e., a consensus protocol. We create some configuration for it and create the protocol, again using a library function, because the ISS package is also provided by the library. Then we assemble the node the same way as was shown on the slide: we create a new node, give it its own ID, give it some configuration parameters, and tell it which modules to use: the net module, the request store, and the protocol module we just created; and we tell it which application should be processing the agreed-upon requests. As you can see, it also needs a crypto module; we only have
a dummy crypto module implementation for now, but this will change soon, hopefully. Then we create some other boilerplate code for actually passing requests to the implementation: we read messages from the command line and submit them as requests to the node. So how does it work? I already prepared a deployment of four nodes, and we just run the chat demo application here, which is the main file I was just showing, once with ID zero, once with ID one, ID two, ID three. Let me start all of them. They all initialize and connect to each other; and I pressed Enter once more on client two, which is why everybody already sees that client two sent an empty message. Basically, when I type in some "hello" message and press Enter, it creates a request for the total-order broadcast system and submits it to the node; the nodes agree on that request, and all of them deliver it to the chat application, which prints it on the screen. And given the implementation of the protocol, all of this will be in total order: if I typed something really quickly in different windows (I would have to be very fast, it's not really possible manually), everybody would still receive the messages in the same order, because they are totally ordered. Now, this is a demo application, but the same principle applies to the consensus protocol implemented in the subnets, and that's the goal for the next months: to actually make this part of the subnet consensus protocol. So that's it for the first demo of this project. Thank you very much, and I leave the floor to the next demo. Let me share my screen. Cool. So I basically want to announce a new project, which has reached milestone one and is ready for production use: this is called the Edelweiss decentralized protocol compiler, and I'm going to post the link in a little bit. This is meant to be a universal language for specifying protocols, and by universal I mean that it is both agnostic to the programming languages that different implementations of a protocol use, and independent of the way the protocol is serialized on the wire. It should be able to support any serialization: that includes all the IPLD technology that we typically use, all the serializations that IPLD supports, but also other formats like Protocol Buffers and FlatBuffers, legacy protocols like BitTorrent, exotic protocols, whatever. The big point here is that the language is meant to have a very simple and flexible type system that can describe any pre-existing or future protocol, and it is also meant to enable writing protocols that are easily extensible while maintaining forward and backward compatibility. So, very briefly: from the front page of the project you will find all the documentation linked. The first piece of information is the roadmap for the project. We have completed milestone one, and the roadmap captures quite a lot of the scope we plan to cover; I will talk a little more about the first milestone in a second. The first milestone is essentially establishing the core type system, which is an extension over the type systems you typically see in protocol compilers, like the IPLD schema compiler or Protocol Buffers. It has a few more types, because it is meant to be complete, in the sense that it can really describe any protocol that exists or that you might want to write in the future. It is able to generate clients, servers, encoders, and decoders for anything you define in the type system; in particular, it supports defining services and methods, and it is completely type-safe. For the services you define (you can think of them very much like in the Protocol Buffers compiler), it can generate Go code for clients
and servers, and the generated code is completely static (no reflection) and zero-allocation for the most part; soon it will hopefully be entirely zero-allocation, so very performant code. Everything is very modular: for the networking stacks generated for services, for instance, you can plug in different backends. Currently we have a backend which uses DAG-JSON over HTTP, because that was what we needed for the first client of this project, which is the delegated routing protocol for IPFS and Hydra. If you scroll down the milestones later on, when you have time, you will see some of the future scope of the project. There will be lots of features that people find necessary, such as transformations between different protocols, which is a generalization of the familiar IPLD schema representations, as well as features expected to be needed in the Filecoin actors space, like passing lambdas across network boundaries. An example application: one blockchain wants to give a callback, i.e., a smart contract wants to provide a callback to a client. Different chains might describe lambdas in different ways, so we aim to be able to describe lambdas over different kinds of chains, and all of this should work quite seamlessly and uniformly in the language. You can read about the big picture as you go through this roadmap document. I will briefly show you the type system that we currently have and highlight how it differs from the IPLD schema or the Protocol Buffers schema. You will find the standard types that you generally expect: primitives, as well as special types like Any or the Nothing type, and composite types like links (links are of course a special Protocol Labs type that doesn't exist in other type systems; these are content links), plus lists, maps, and structures. There are some new additions which are necessary for forward-looking features: we have a singleton type, an inductive type, and a union type. I'm not going to go into details now, you can read about them in the documentation, but these are very powerful types, some of them inspired by the modern type systems of languages like Julia and Rust. And of course there are function types, service types, and methods. The linked documents describe how to use them, what their semantics are, and how they are represented on the wire. In the repo you can find a full example of how to define a service; this example defines an early version of the delegated routing protocol, so it's a real example, slightly simplified so it's digestible. At the moment the compiler doesn't have syntax parsing, so you would have to define your schemas by constructing the AST of your type definitions in Go; soon we're going to have syntax parsing, but that's not essential, it's just a matter of time. So this is roughly what it looks like to define a complete service, plus a little bit of Go code that tells the compiler to generate Go code for this service. When you run this, you end up with the generated code, which you can also view in the repo; it's pretty large, mostly because it's fully static and quite performant. That's it for me. I'm going to send you a link in the chat in case you want to read more, and feel free to reach out if you want to use it or have questions. There is a channel on Discord, in the IPFS realm, called "edelweiss protocol compiler", so you can ask questions there or submit issues in the GitHub repo. Cool, thank you. All right, next up is me. I'm going to try and go pretty quickly, because I would like to see Marten's demo more than I want to see my own. All right, so let's start. Here's what I did: I loaded a file over an IPFS gateway. Cool, everyone can do this. So that's good. But what makes this different from
other file loads over gateways? For those of you who are not fluent in multibase and multiformats: this part over here indicates that we are using a format called Bencode, and what we're loading is a BitTorrent info hash, which is the way BitTorrent refers to files, and we are able to load it over a gateway. To do this, we're sort of doing the IPLD thing: the IPLD data model is for interoperable protocols, and we would like to have a single way of describing how you work with different hash-linked data structures, so you can work with them together. BitTorrent is a hash-linked data structure, so we should be able to do that. This is what an info hash looks like; again, we had support for the codec, so here it is interpreted as DAG-JSON. It's got a length, a piece length (they use spaces in key names, God knows why), and then this, which is a set of SHA-1 hashes for the various pieces, the little file chunks that make up our koala friend here. In order to do this, we have the codec, which is this part that goes in here, but then we also have to say "hey, this isn't a UnixFS file, this is a BitTorrent file", and we did that by passing in a selector along with our parameter here: an IPLD selector, which you can read more about on the IPLD website. We can use our handy tooling to see what the selector looks like. Here's the selector represented as DAG-JSON; it just says "I would like you to please interpret this thing as a BitTorrent file, then match and grab it for me". So that's what we did. I mostly just started by looking at the BitTorrent spec (here's their encoding format, it's that simple; here's their data format, it's also pretty simple) and then went and implemented it in code. The encoder and decoder just use existing Bencode libraries, plus the fact that they all turn things into JSON-like maps, and then reflect and turn it into IPLD things. The ADL, which is the logic that lets me say "take this graph and represent it as a file, please", has some boilerplate, but mostly just follows the algorithm: how many pieces are there, how big are they, take the keys and break them up into pieces, then load them. And that's it. All right, a couple of things. What do we have? We have an ADL plugin in go-ipfs that allows us to plug in ADLs (right now we can only do codecs), an implemented Bencode codec, an ADL for BitTorrent files, and, this is sort of interesting, a patch for the gateway that allows rendering any IPLD node that presents as a file. That code basically runs the selector, and if the thing I get out at the end of the day is bytes, well, bytes seems like a file, so let's just render it as a file. Maybe slightly controversial to do it the way it was done, but it gets this moving. All right, cool. So how do we merge this thing and make it actually usable? We need another magic CSV file, in addition to the ones we already have, to track the names of new ADLs like this one, so people know what they are when they want to go implement them. We need selectors to be implemented in gateways; what I did is sort of a first stab at this, but probably two releases from now, go-ipfs will have more of it. We need to plumb custom IPLD link systems through everywhere, instead of using the default one, so we can handle ADLs. And people need to make sure I did the thing right: I probably read the BitTorrent spec correctly, but there's BitTorrent v1 and v2, and I may have missed an edge case, so reviews welcome. And what do we need to make this great? This seems okay: being able to load any BitTorrent file means I can also make a BitTorrent client that serves data over both Bitswap and the BitTorrent transfer protocol, and basically makes the data available to both networks, and then you
can make a little BitTorrent client that makes data pullable over IPFS gateways. But in order to make this really great, we would like to be able to handle transferring large blocks. Most BitTorrent clients respect the same sort of size boundaries that we do, but if they send blocks that are too big, we won't be able to handle them; there's a proposal for that too. Also, it would be nice if we could use something like Wasm to describe the codecs and ADLs, so we didn't have to re-implement them in every language. I don't know who's going to implement this thing in JavaScript, but it probably won't be me, so maybe we just write it once and run it in more places. There's actually a Bencode codec in Wasm that I wrote that still works; there's an ADL that is mostly there, but the koala render is a little funny at the moment. I started programming Rust on Sunday, so anyone who knows any Rust: help appreciated. Thank you, more demos later. Cool, next up is me. There we go. So I'm going to present what I've been hacking on for the last couple of weeks: it's called WebTransport. WebTransport is a new protocol developed by the IETF and the W3C. Conceptually, it's basically like WebSockets, but over QUIC, so it gives you all the nice advantages that QUIC has: stream multiplexing at the transport layer, so no head-of-line blocking; a faster handshake; better hole-punching success rates; advanced loss recovery and congestion control; and all the other reasons why we love QUIC. But this is not the reason I'm excited about WebTransport. The reason I'm excited about WebTransport is that it allows us to do things that WebSockets didn't allow us to do, and the reason for this is that the browser handles WebTransport differently from WebSockets. With WebSockets, the browser always insisted on seeing a real TLS
certificate — one signed by a certificate authority, for example Let's Encrypt. That also works in WebTransport, but there's another option, called serverCertificateHashes. Basically, it lets you tell the browser: accept a certificate that has a certain hash. The browser performs the TLS handshake, looks at the certificate, and if the hash of the certificate matches, it will accept it. This is great for libp2p, because we were never able to get TLS certificates for all of our libp2p nodes — but shipping around a hash is totally something we can do.

So I started programming, and I now have a webtransport-go library. It's still a work in progress; it builds on our QUIC stack, quic-go, and it can do basic things, which I'm going to show you now. Let's start up the server. One thing to keep in mind: I will go to example.com, which is mapped to localhost. The reason is that Chrome, for some reason, refuses to make QUIC connections to localhost, so you have to map it internally to some domain name; then you can establish a QUIC connection. We're just going to use the web developer tools. First we'll try to establish — this is very small, let me — yes, okay — a WebTransport connection to example.com/webtransport, which is the server here at localhost. What happens? We get a QUIC protocol error, because the certificate is unknown: we just used the URL and didn't pass in the hash anywhere. So now let's use the hash of the certificate. We're telling the browser this hash is okay: we pass in the serverCertificateHashes option and tell it that the algorithm is sha-256 and the value is what we just entered. By the way, I just wish somebody came up with a solution to have a self-describing hash function — somebody should really do that.
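What the browser checks under serverCertificateHashes is just the SHA-256 digest of the certificate's DER encoding. Here's a minimal Go sketch of producing such a pin; the helper names are mine, not the library's, and the 14-day validity in the comment reflects the WebTransport spec's limit for hash-pinned certificates:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCert generates a throwaway self-signed certificate,
// the kind a libp2p node could create locally instead of asking a CA.
func selfSignedCert() ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example.com"},
		NotBefore:    time.Now(),
		// Hash-pinned WebTransport certs must be short-lived (max 14 days).
		NotAfter: time.Now().Add(14 * 24 * time.Hour),
	}
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}

// certHash is what the browser compares against the pinned value:
// the SHA-256 digest of the certificate's DER bytes.
func certHash(der []byte) [32]byte {
	return sha256.Sum256(der)
}

func main() {
	der, err := selfSignedCert()
	if err != nil {
		panic(err)
	}
	fmt.Printf("pin this hash in the browser: %x\n", certHash(der))
}
```

On the browser side, the pin goes into `new WebTransport(url, { serverCertificateHashes: [{ algorithm: "sha-256", value }] })`, where `value` holds those digest bytes.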
So now we have a WebTransport connection — it works. Let's go one step further and actually use that connection: we can open a stream on it and send something. Let's say we send a "hello world" — and there we go. This works now, and I'm pretty excited about it. If you think this could be useful for your project and you'd want to use it, please get in touch — I'm happy to help.

Okay, I'm going to start by going way back to December, when I talked about optimistic provide, which addresses a problem in IPFS: the DHT provide process is very slow. You can see here the latency we observe when we try to provide something. Simplistically, this means that when someone wants to publish something to the DHT, it takes quite a long time, because we need to find the right peers — the peers that will be most easily findable later on. Through some measurements we also found out that, yes, it takes that long to find the appropriate peers and finalize the process; but if you go back and check, after the 10 or 50 or 100 seconds it took to find the 20 peers (which is the right number) that the provider record should go to, we had actually found those peers within the first second, and the rest of the time the process was just hanging, trying to find even closer peers. This is the figure that shows it: mostly less than 0.5 seconds, though it can go up to a second or two. So we thought: there must be a way to find those peers and finalize the process much faster. That's why some of the proposed solutions are called optimistic provide, and two approaches were
proposed. One is called the estimator-based approach, which I'm not going to go into today; the other is the double-query approach. What we do there: instead of asking around "which is the closest peer you know to the CID of the content I want to publish" and trying to get as close as possible, we ask two peers at the same time (that was actually the third idea). The thinking behind it was this: if we independently ask two unrelated peers "what is the closest peer you know of for this CID", then once the two queries start returning common answers, the nodes common to both are probably the closest ones, because we are converging in the hash space.

So we went ahead and implemented that, and we have some very initial results, which I wanted to share with you today. Unfortunately, the results are not very accurate, so it's not necessarily a positive result I'm presenting — just a progress update and what we plan to do next. If you check what happens when you ask peers one by one — the single-query approach — you get this graph: the x-axis is the normalized XOR distance, which, very simplistically, is the distance between where I want to reach and the peers I actually found to store the record. This is very small, around 0.01, which means the process is very slow but, at the same time, very efficient, because it gets as close as possible. Checking the double-query approach, we see that it is not bringing very accurate results.
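The stopping rule of the double query — finish once the two independent walks start agreeing — can be sketched as follows. This is a simplified, in-memory model with made-up peer IDs; the real implementation walks the DHT asynchronously:

```go
package main

import "fmt"

// commonAnswers consumes two answer streams (peer IDs in the order each
// query discovers them) and returns the shared peers as soon as the two
// queries agree on at least k of them, or nil if they never do.
func commonAnswers(q1, q2 []string, k int) []string {
	seen1 := make(map[string]bool)
	seen2 := make(map[string]bool)
	inCommon := make(map[string]bool)
	var common []string
	// Interleave the two streams, as if both queries run concurrently.
	for i := 0; i < len(q1) || i < len(q2); i++ {
		if i < len(q1) {
			seen1[q1[i]] = true
			if seen2[q1[i]] && !inCommon[q1[i]] {
				inCommon[q1[i]] = true
				common = append(common, q1[i])
			}
		}
		if i < len(q2) {
			seen2[q2[i]] = true
			if seen1[q2[i]] && !inCommon[q2[i]] {
				inCommon[q2[i]] = true
				common = append(common, q2[i])
			}
		}
		if len(common) >= k {
			return common
		}
	}
	return nil
}

func main() {
	// Two hypothetical walks converging on peers C and D.
	q1 := []string{"A", "C", "D", "E"}
	q2 := []string{"B", "D", "C", "F"}
	fmt.Println(commonAnswers(q1, q2, 2))
}
```

A third query would simply add a third `seen` set and require membership in all of them before a peer counts as a common answer.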
The roughly 0.02 we were seeing with the single-query approach can now go up to 10, 20, even 30 percent. It does complete faster, and we can see that more than 60 percent of the provide operations do find those final peers very quickly, but that does not hold across all of the provide operations we tested. What does this lead us to think? Perhaps we need to play around with this: run not two but maybe three queries, look at the intersection of the answers, and see whether we converge faster when three separate queries bring back the same results. It also means we need to investigate other approaches, such as the estimator approach — I didn't talk about it today, but I will in a future session — as well as ideas you might have. If you can think of an estimation approach that brings back results much faster, we're more than happy to receive feedback and more ideas. You can find more here — I'll share these slides: there's the proposal, the discussion, and the project page where we post updates. So yeah, that's me. Thank you very much, and thanks for listening.

"Yiannis, your screen was frozen the entire time, so we didn't see the graphs."
"Really? You should have said!"
"Sorry — people said so in the chat, but I guess nobody wanted to interrupt. Could you send the links to the graphs, and the implementation links, so we can look at the methodology?"
"Yeah, absolutely. What's the best place to send that nowadays?"
"Put it in the chat, I guess — or the IPFS stewards Discord."
"Yeah, exactly. I'll post it there."