This is Willis from ConsensusLab. Today I'll be showing you our uptime checker. In short, it is a liveness registry deployed on the FVM. The first question, then, is: what is the uptime checker, and what does it actually do? To answer that, imagine a bunch of nodes running in a decentralized network. Take the case of Saturn, a globally distributed CDN, with these CDN nodes running across the network. When a user wants to fetch some information, they simply ping one of the nodes. But what if that node is down? How does the user know whether a node is up or down? And, for example, which node in the network is fastest and nearest to them? This is exactly what the uptime checker is for. It tells users which nodes are live, what the latency of a particular node is, and whether those metrics are up to date. These are the core questions the uptime checker is trying to answer. In this system we have what we call member nodes: nodes running a certain common application or protocol across the network. Then we have what we call checkers. They periodically go through the list of members to be checked, pinging each one to see whether it is up or down and to measure its network latency. Once a checker gathers the ping information, it records the liveness result, the last-check time (to show recency), and the latency of that node in the network. Finally, the checkers also cross-check each other. That means if one of the checkers is down, that checker can be removed from the list of checkers, so we know all the registered checkers are actually alive. So in this system, checkers check members, and checkers also check each other.
In terms of system architecture, we have the uptime actor. It is implemented in Rust, compiled to WASM, and deployed on the FVM. It handles the registry, that is, the CRUD of checkers and members, and it also tracks which checkers have been reported down. Then we have the member nodes, the nodes participating in a certain protocol, which we probe using libp2p ping. And finally we have the checkers. These are implemented in Go and run off-chain. They expose endpoints that others can call to get the liveness results for the member nodes, and they also cross-check each other. Once a checker is observed to be down, the other checkers start reporting it to the uptime actor, and if a quorum of two-thirds of the checkers has reported a particular checker as down, that checker is automatically removed from the uptime actor's registry. That is the high-level overview of how the system works. For the demo, the architecture is a simplified version with four checkers and two member nodes, all libp2p-based, plus a single miner, which is not drawn, for the sake of simplicity. The two member nodes form a local network, and one uptime actor is deployed within this local network. Now for the demo itself. Let's look at the setup. To save time, I have already set up the nodes and the checkers; to see the whole end-to-end setup, I refer you to the previous video. For this one, nodes zero and one are already running, and we also have four checkers. You can see that node zero is running both the miner and the node itself.
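The two-thirds quorum rule the uptime actor applies before removing a checker can be expressed in a few lines. This is an illustrative sketch, not the actual actor code (which is Rust on the FVM); the function name and the exact rounding at the threshold are assumptions.

```go
package main

import "fmt"

// shouldRemove reports whether a checker has been reported down by at
// least two-thirds of all registered checkers, the quorum the uptime
// actor waits for before removing it from the registry.
func shouldRemove(reports, totalCheckers int) bool {
	// Compare reports/total >= 2/3 using integer arithmetic
	// to avoid floating-point rounding.
	return reports*3 >= totalCheckers*2
}

func main() {
	fmt.Println(shouldRemove(2, 4)) // 2 of 4 is below two-thirds
	fmt.Println(shouldRemove(3, 4)) // 3 of 4 reaches the threshold
}
```

With the four checkers in the demo, this means three of the remaining checkers must report a peer down before it is dropped from the registry.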
We also have node one, which is just running the node and is connected to node zero. Okay. What's really interesting is to look at the checkers. We're currently on checker zero, and here it is constantly logging the list of checkers registered with the actor; the numbers are the actor IDs of those registrations, so 1001 should refer to checker zero, and so on for the others. Later I'll kill off the nodes, and we should see the responses change. First, it's worth showing you the commands used to spin up the checkers. For example, the index zero here just tells us which checker this is; then we have the checker binary, this is the actor address, and this parameter is the checker port, which is the libp2p port. We also have the node-info port: if you query this port, you get the uptime info of the member nodes. We're running everything on localhost, as a local network. You can see here that the key is the actor ID, it refers to this multiaddress, and then there is the status. This is node zero, and is_online is true. Let's just focus on is_online; the rest we are still tuning. So it's saying this node is up, and at the same time the second node is also up. Let's call another checker; it should give us the same result. Yes, node zero and node one are both up on this one too. And if you query checkers three and four, they should give you the same result.
So far this is the happy path, where everything is running. Now let's kill the nodes. I'll just switch over and Ctrl-C them. Okay, this node is killed. Let's see the log. From the log, you can see the checker constantly trying to dial the multiaddress of the registered node, node-1, and getting errors. So if we now query the node-info endpoint, you can see the status is actually false: it's saying the node at this multiaddress is not up. And checking the other node, yes, it's also telling us it's down. Now for something more interesting: let's kill one of the checkers, since they are constantly cross-checking each other. I'll pick one at random. Okay, it's killed, and its actor ID is 1002. Now let's look at another checker. You can see in its log that it is reporting actor 1002 as down, since it cannot connect to either of the killed nodes or to that checker. After a while, once the periodic routine runs, the rest of the checkers, for example checker two, also report the same thing. And after a while, once the report messages are executed, you can see in the response that the list of checkers registered in the actor has been reduced by one, because 1002 is down. That means the system is actually working. With this, I'll conclude my demo. Because of time, I can't show you the whole setup here; if you're interested, feel free to check out our repos and our other demo videos for the full setup. Thank you.