Today's demo from my side will be really fast, because I just want to announce that we have a first draft of the hierarchical consensus spec. As you may know from previous demo days, we've finished the first MVP of the protocol: we now know the details and have a basic end-to-end implementation, and we're working on an alternative implementation in Rust that leverages the Forest client. Right now everything, so HC, is implemented in Eudico, a fork of Lotus, and we're exploring an alternative implementation, which is why we started this line of work of writing a first draft spec. If you go here, you'll see a description of the architecture and how all of the sub-protocols work: the checkpointing protocol, how cross-net transactions are propagated, all of the low-level details, as well as some future work, for instance detectable misbehavior and how collateral and slashing work. The idea is to soon have a draft FIP that we can start discussing with the community, but in the meantime, if anyone wants to read the spec, start using the MVP implementation, or give us feedback — if you see something that doesn't make sense at all — feel free to open a PR here in the ConsensusLab repo. I don't know whether to share it here in Zoom or maybe in Slack, but feel free to give us feedback. Right now this has only been through a first internal review by the members of ConsensusLab, but we're starting to spread it so that anyone can give us feedback. And I guess that's all from my side. Thank you very much.

Okay, so this is the IPFS operator. The IPFS operator is a Kubernetes operator that lets you quickly and easily start up an IPFS cluster. There are probably a lot of Kubernetes people out there who would like to use more of the Web3 stack, and up to this point we haven't supported them very well, so hopefully this is a step toward improving that support.

I want to start off by just showing you how it's used. I'll circle back to this in a moment and you'll see what happened, but this is the cluster creation process: kubectl apply -f, and we just give it a file. And boom — has creating an IPFS cluster ever been easier than this? You're basically done. Actually, the operator is still working in the background, but I want to show you what's going on inside here. This is it: we pulled out just the critical decisions you might want to make, namely the storage and how many replicas you want. In this case each node is only going to get 50 gigs, pretty small, and we're going to set up a hundred replicas. We'll check back on this later; I just want to get that started for now. This is the slide that basically shows that, and what it's doing is creating a whole bunch of pods.
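For reference, the manifest behind that kubectl apply -f step is only a handful of lines. Here's a rough sketch of what it could look like — the kind, API group, and field names are assumptions for illustration, not necessarily the operator's exact schema:

```yaml
# Rough sketch of the custom resource applied in the demo.
# API group/version and field names are illustrative assumptions,
# not the operator's exact schema.
apiVersion: cluster.ipfs.io/v1alpha1
kind: Ipfs
metadata:
  name: demo-cluster
spec:
  replicas: 100        # how many IPFS/cluster node pairs to run
  ipfsStorage: 50Gi    # per-node storage, as in the demo
```

Applying it is the whole creation step: kubectl apply -f demo-cluster.yaml.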
Oh yeah — collab clusters. If you're familiar with IPFS Cluster, it has a feature called collaborative clusters (collab.ipfscluster.io) that lets you follow the pins going on in another cluster, and setting one up is also very easy: we can create a collab cluster simply by specifying who we want to follow. It's the same process as creating a regular cluster, just as you saw, but in addition to letting you use the cluster for your own purposes, it will also pin important content from around the world. In this example we'd be pinning the Filecoin proof parameters, the IPFS websites, the Project Gutenberg content, and Pacman packages. Quick and easy.

Now, scaling a cluster. I have two clusters at this point: one that I created earlier and one that I created just a couple of minutes ago. How do we scale a cluster? All Kubernetes objects are typically displayed as YAML, and in this case it looks similar to the setup we just saw. Say I want to change the replicas: I don't want four nodes, that's too small — let's go to six, save, done. No provisioning hardware, no messing around with cloud providers, no cabling, no configuration; just edit the config and you're finished. Just to prove this works: get pods, and there we go. We've got our 100-node cluster still being built, and now we have our six nodes — looks like the last one is just finishing up, but we're already up to six nodes for that collab cluster. Nice and quick.

Okay, so now let's back up a little bit: what is an operator? I wanted to start with the demo so the 100-node cluster has time to build while we go through the rest. So, what is a Kubernetes operator? Kubernetes lets you extend its API by adding custom resources, and basically whenever you do this, you have an operator. In this case we have an object, an IPFS cluster — its kind is IPFS — defined by what's called a CRD, a custom resource definition. You pass it parameters, and custom code runs that acts on those parameters to create things. I put some examples here of great Kubernetes operators that already exist. The Postgres operator: if you change the size of a database, it knows how to handle WAL backups — you don't need to be a DBA to run Postgres if you use the operator. Prometheus, same way: if you deal with Prometheus monitoring, you can edit Prometheus targets on the fly without going in and editing any of the config files. And of course the one we're discussing right now: you probably noticed that I didn't have to deal with any of the minutiae of setting up an IPFS cluster.

A little deeper into the weeds on how this actually works: each node is basically an IPFS node with IPFS Cluster installed next to it — the standard IPFS Cluster that you know and love. Additionally, the way followership works, there's one additional pod for every cluster that you're following; so if you're following 10 different clusters, good work, there will be 10 additional pods, all connecting to this IPFS node and making requests to IPFS to store content. Looking at the whole cluster — and this is actually the reason an operator is necessary — setting up the entire thing is, whoa, very complicated. You can see here we have a number of IPFS nodes sitting behind a load balancer that lets you interact with them. Internally these cluster peers are doing a lot of complicated work: they all have peer IDs, they run a consensus protocol, which might be Raft or CRDT, they're talking to each other, doing membership joins, and all of this. Your application is this yellow box sitting over here, and this is the experience you want: you just say "add this file" and something happens. And over here on the left, that's you: you just say "read this file over IPFS" and it should work like magic.
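Going back to the scaling step from a moment ago: it really is just an edit of that same custom resource, after which the operator reconciles everything. Roughly (the resource name and object name here are placeholders, not necessarily the real ones):

```sh
# Edit the custom resource in place: change spec.replicas from 4 to 6, save, quit.
kubectl edit ipfs my-small-cluster

# Watch the operator bring up the two extra node pods.
kubectl get pods -w
```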
Yeah, so I want to show an example of that add-and-read experience. Let's do it. Okay, I have my handy-dandy notes right here, so let me just copy that: "hello from Kate" — not with a capital letter, but lowercase. Let's go ahead and use ipfs-cluster-ctl add to add that to the cluster. There we go. This is indeed adding it to the cluster that is running in Kubernetes; I set up a port forward. What you can see is that we get a CID back, and we can fetch this over IPFS or even through the gateway. Yeah, there we go — it's visible over IPFS. And you can do this with any of them: echo something random, ipfs-cluster-ctl add again to add more files, and we get a different CID. Let's see what we've got: ipfs dht findprovs, to find the file we just created. This is going to be demo hell — let me use the other one, just in case there's a problem. Okay, so for the file that we added a little bit earlier (we have to give the new one some time to propagate), the providers are the IPFS nodes running in our cluster. And indeed, if we exec into one of these and run ipfs id — what's my ID? — it's this one ending in F3, which looks like that node right there. So we are indeed storing these files on the cluster that we created and fetching them back from anywhere. This is the observation I really want to hit home: you can add files and retrieve them over IPFS, and doing that is easy, while all the complicated parts — generating the peer IDs, generating the cluster secret (I bet you didn't even know you had to do that, just from looking at this; yeah, wow, complicated) — are handled for you. And that is the overall point of putting this out there: we want to use operators like this to lower the barrier to entry. There are probably people out there who use Kubernetes in their day-to-day life for their business, and maybe they read about the Web3 stack, want to try it, and then think, "oh, this isn't for me." If there's an operator out there that is easy to pick up, maybe that puts them in a position where they're actually able to try it.

Yeah, capability levels: this is a sort of matrix that shows where different operators are. I would honestly say we're still in a very early phase, so we're probably somewhere between level one and level two. We can do the basic install and handle some light changes — you saw that I can increase the scale and make some on-the-fly changes — while things like deep insights are still to come. Still a work in progress; I'd put us right around level one or level two, something like that.

And to top this off, where can you find it? This is being developed in conjunction with Red Hat Emerging Technologies, so its current home is right there: redhat-et/ipfs-operator. Once we get this into more of a production state, it will be on OperatorHub; Red Hat is planning on making sure it's available on OpenShift; and of course we will make it as widely available as possible as it grows in maturity. And I think that is it for me, hopefully.
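For reference, the add-and-fetch flow from the demo boils down to roughly these commands — the service name, pod name, and CID are placeholders:

```sh
# Reach the cluster's API from the local machine (9094 is IPFS Cluster's default API port).
kubectl port-forward svc/my-cluster-api 9094:9094 &

# Add content through IPFS Cluster; this prints the CID it was stored under.
echo "hello from kate" > note.txt
ipfs-cluster-ctl add note.txt

# Fetch the same content back over plain IPFS (or through a gateway).
ipfs cat <cid-from-add>

# Find which nodes are providing it, and check a cluster node's identity.
ipfs dht findprovs <cid-from-add>
kubectl exec -it <ipfs-pod> -- ipfs id
```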
Oh, one sec, I nearly forgot. I wanted to see how many nodes we could create; I'm going for 100. Looks like 41 is the highest so far. I did do a bit of timing on this before: it takes right around 20 to 25 minutes to create a full 100-node IPFS cluster. But dare I say that is much faster than creating it manually. So there we go — I think that's it for me, thank you.

Great. So today I want to give an update on the DHT routing table health study that we've been conducting at ProbeLab. I'm going to start by quickly introducing the Kademlia DHT routing table and how it works, because this is a bit technical. Kademlia is a distributed hash table, which is basically a decentralized overlay network in which there is no central peer. Each node has to know at least some of the other peers participating in the network just to stay connected, and this set of peers is called the routing table. In the Kademlia implementation, all of the peers in the routing table are sorted into what are called k-buckets, defined by the XOR distance between one peer ID and another, and each bucket is capped at 20 peers.

I'll give a quick example to illustrate. Take a random peer identified by an 8-bit string, say 01101000. I generated some other random 8-bit strings and filled those peers into the k-buckets of the initial peer. The logic is: if two bit strings share a common prefix of length X, the peer goes into bucket X. So in bucket 0, all of the peers start with a 1, whereas our reference peer starts with a 0; in bucket 2, all of the peers share a common prefix of length two with the reference; and so on. When peer IDs are generated randomly, which is the case for IPFS and libp2p identifiers, we expect a lot of candidate peers for the low-index buckets and exponentially fewer for the high-index buckets.

To measure the health of the actual network, we use the Nebula crawler, which crawls the network and provides a snapshot of all the peers that are online along with the state of their routing tables — that is, which peers sit in each node's routing table. For this specific study, the data was taken from 28 crawls, i.e. snapshots of the network, over one week.

As for the methodology: given the global snapshot, we can reproduce the k-buckets for each peer simply by computing the XOR distance between each peer in its routing table and that reference peer. And from the global view we can see whether some node should be included in a k-bucket but is actually missing from the k-bucket we retrieved in the crawl — so we can compare what the buckets theoretically should be with what they actually are. One difficulty was computing all of the XOR distances: to see whether any peer is missing, we need the X closest peers to a specific peer ID, which is computationally expensive because the XOR distance is not linear, so we implemented a binary trie in Python to speed things up.
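The bucket assignment described above comes down to the length of the common prefix of two IDs, which is what the XOR distance encodes. Here's a minimal sketch of that logic and of rebuilding a peer's theoretical buckets from a global snapshot — purely illustrative, with made-up 8-bit IDs as in the example above, whereas the study itself ran over 256-bit keys with a binary trie:

```python
# Illustrative sketch of Kademlia bucket assignment, not the study's actual code.
# Peer IDs are modeled as fixed-width integers (8 bits here, matching the example;
# the real DHT uses 256-bit keys).

ID_BITS = 8
K = 20  # bucket capacity

def bucket_index(reference: int, peer: int, bits: int = ID_BITS) -> int:
    """Length of the common prefix of the two IDs, i.e. the index of the
    k-bucket this peer falls into from the reference peer's point of view."""
    xor = reference ^ peer
    return bits - xor.bit_length()

def rebuild_buckets(reference: int, online_peers: list[int]) -> dict[int, list[int]]:
    """From a global snapshot, compute which peers are eligible for each
    of the reference peer's k-buckets, capped at K entries per bucket."""
    buckets: dict[int, list[int]] = {}
    for peer in sorted(online_peers, key=lambda p: reference ^ p):
        if peer == reference:
            continue
        idx = bucket_index(reference, peer)
        buckets.setdefault(idx, [])
        if len(buckets[idx]) < K:
            buckets[idx].append(peer)
    return buckets

# The 8-bit example from the talk: reference peer 01101000, a few random peers.
ref = 0b01101000
peers = [0b11010010, 0b00101100, 0b01010001, 0b01100111, 0b01101001]
for p in peers:
    print(f"{p:08b} -> bucket {bucket_index(ref, p)}")
```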
So, the results. The first thing we wanted to measure is the ratio of peers that are in someone's routing table but are unreachable on the network — basically stale entries in the routing table. For buckets 0 to 8, the rate is quite low; those are the buckets that are full, containing 20 peers almost all of the time, and on average only 0.75 out of the 20 peers in a bucket are unreachable. That's very good given the high churn rate we observe in IPFS. For buckets 9 to 21 we observe a higher rate, but it's still very acceptable. I think we get different results for the low-index and high-index buckets because the peer replacement method is implemented differently in the Go implementation.

Now to the next measurement: we want to see whether the distribution of peers across the k-buckets is as expected. Since peer IDs are expected to be generated randomly over the 256-bit key space, we expect the number of candidates eligible for each bucket to halve as the bucket index increases. And indeed, buckets 0 to 8 are capped at the maximum of 20, and after that we see the exponential decline we expected, which is good.

Then we look at the rate of missing peers per bucket. A peer counts as missing if, first, the bucket is not full and, second, there is a peer in the network that would have fit this bucket but is not in the bucket we observed, according to the global snapshot. The missing-peer rate for the full buckets is very low, 0.12 out of the 20 peers, and a bit higher for the high-index buckets, but again still very acceptable.

Now, one of the key properties of the Kademlia DHT is that a node is supposed to have the 20 peers closest to itself in its routing table, for routing purposes. Given the high churn rate we obviously don't expect 100%, because nodes are constantly entering and leaving the network, but we observe that 61% of peers know all of their 20 closest peers — all 20 are in their k-buckets — and 95% of peers know at least 18 out of their 20 closest peers, which is also excellent.

So what we can tell from this study is that the DHT is very healthy, maybe more than we could have expected: a very low rate of stale entries in the k-buckets, a peer distribution as expected, only a few peers missing from the routing tables, and a very high rate of nodes knowing their 20 closest peers.

ProbeLab is doing a lot of RFMs — I think at the moment there are 20 RFMs on the GitHub repo protocol/network-measurements — so I encourage you to go and check them out. We already published one report, for RFM 2, and this one is RFM 19. There is a report with much more detail; it is still pending review, but it's already accessible. I'm going to give some more details on this at the IPFS Thing next week, so if you're around, make sure to attend the Measuring IPFS track. We also measured some things that were odd and may mean the DHT isn't as healthy in other respects as we think, because the diversity in the low-index buckets is decreasing over time, and that might become a problem, because the network may become more centralized.
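For reference, the "knows its 20 closest peers" measurement from above can be sketched from a global snapshot roughly like this — again just an illustration; the study used a binary trie over 256-bit keys rather than a full sort:

```python
# Illustrative sketch: how many of a node's k closest online peers does it
# actually have in its routing table? (61% of peers scored 20/20 in the study.)

def known_closest(node: int, online_peers: set[int], routing_table: set[int], k: int = 20) -> int:
    candidates = sorted((p for p in online_peers if p != node), key=lambda p: node ^ p)
    return sum(1 for p in candidates[:k] if p in routing_table)
```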
And here are some references with some links, which I'll share later in the Google Doc. And yeah, that's it for me.