Yeah, from a very high level, NetOps, especially on the backend side, covers three different areas: Bifrost, Fil Infra, and Sentinel. At a high level, the picture is this: I have a Miro doc I can share afterwards, and each team will go into the details of what they are doing, who the team members are, and where we are today. So here I'm just the one starting the conversation; all of the "how" belongs to the teams.

What is Bifrost? What does the Bifrost team do? It does a few things, but mainly it runs IPFS infrastructure at scale. Our biggest project is the IPFS gateway hosted at ipfs.io and dweb.link, which allows browsers and tools that speak HTTP to access content from the IPFS network without having to run their own nodes. It basically provides a canonical way to address IPFS content via HTTP. We also provide the default bootstrap nodes, which are baked into go-ipfs and js-ipfs as a public service so that other nodes can find each other in the network. And we run preload nodes, which augment js-ipfs and expose IPFS endpoints that are not available in the browser. In practice, that means js-ipfs clients can add content locally in the browser, then use a preload node to request that CID, effectively caching the data and allowing the browser tab to be closed and reloaded without losing the data.

Why are we doing it? Our motivation is to provide the best-performing infrastructure for others to use, most importantly the IPFS gateway, which seems to be the most widely used piece, and to provide best practices, standards, and tools for others who want to run IPFS and IPFS Cluster at scale.

How are we doing it? At a high level, the gateways run on bare-metal nodes at Equinix. We run in seven data centers, with between four and 16 nodes. The reason we run on bare metal and not VMs is that disk I/O and access times are very important if we want to provide a very fast service; that's why we haven't moved to VMs yet, where most storage is network-attached. We're working towards that, too. There is a load-balancing layer running NGINX, which does all of the HTTP-layer balancing and filtering and such, and a separate IPFS layer upstream of it, which allows us to scale out just the IPFS boxes without having to touch the load balancing or NGINX. On the load-balancer layer, we use anycast to route traffic to the data center that is the fewest hops away from the request origin: each load-balancer node announces a global BGP route for the same IPv4 and IPv6 addresses.

We use hundreds of metrics, if not thousands by this point, to monitor and alert. As far as uptime goes, we have Pingdom checks, synthetic checks from outside the network. Within our network, we have NGINX metrics: load-balancer error rates, performance, time to first byte, that kind of thing. We also collect go-ipfs metrics from within IPFS, things such as goroutines, peers, want lists, and time to first byte within IPFS, as well as OS-level metrics such as I/O and CPU, generic things. We follow infrastructure-as-code principles, which means everything we run is managed in GitHub, and we deploy through CI via Terraform and Ansible playbooks.

As for progress so far, we've recently hit one billion requests a week on the ipfs.io gateway. It has since gone down a little bit, but it seems to be going back up.
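To make the "canonical way to address IPFS content via HTTP" concrete, here is a minimal Go sketch that fetches a file through the public ipfs.io gateway and measures time to first byte, the metric the team optimizes for. The CID is just a well-known example directory; nothing here reflects Bifrost's internals.

```go
// Fetch IPFS content over plain HTTP via the public gateway, the way any
// HTTP-speaking tool can, and report a rough time-to-first-byte.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Example CID of the classic IPFS getting-started directory; substitute any valid CID.
	cid := "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"
	url := "https://ipfs.io/ipfs/" + cid + "/readme"

	client := &http.Client{Timeout: 30 * time.Second}
	start := time.Now()
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Reading a single byte approximates time-to-first-byte from the client's view.
	buf := make([]byte, 1)
	if _, err := resp.Body.Read(buf); err != nil && err != io.EOF {
		panic(err)
	}
	fmt.Printf("status=%d ttfb=%s\n", resp.StatusCode, time.Since(start))
}
```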
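Since so much of this section is about metrics, here is a hedged sketch of how a time-to-first-byte histogram could be exposed from a Go service using the Prometheus client library (Prometheus comes up explicitly in the Fil Infra section below). The metric name, buckets, and handler are invented for illustration, not the team's actual instrumentation.

```go
// Expose a hypothetical time-to-first-byte histogram for Prometheus to scrape.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var ttfb = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "gateway_time_to_first_byte_seconds", // hypothetical metric name
	Help:    "Time until the first byte of a response is written.",
	Buckets: prometheus.ExponentialBuckets(0.05, 2, 12), // 50ms up to ~200s
})

func handler(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	w.Write([]byte("hello\n")) // first byte goes out here
	ttfb.Observe(time.Since(start).Seconds())
}

func main() {
	http.HandleFunc("/", handler)
	http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
	http.ListenAndServe(":8080", nil)
}
```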
So we're hovering around a billion total requests. And we've hit a time to first byte of around eight seconds for 95% of our users, with 99.9% uptime. What we're going for is five seconds, so we will continue to scale and improve our systems to ensure that, for 95% of our users, it doesn't take longer than five seconds to start receiving content from the IPFS network.

All right, Fil Infra, you're up. So what we do: we operate and monitor core Filecoin network infrastructure, the bootstrap nodes, api.chain.love, the stats dashboard, the dist servers, and we're also a core part of running the devnets, butterflynet, and interopnet. We drive operational improvements in tooling and Lotus, and we support and enable network developers and operators.

Our top goals for 2022: the first one is around api.chain.love, which is a Lotus gateway. This is a service that I'm sure many of you have used; it's the default in Lotus Lite, and Lotus Lite is often the introduction to Filecoin. Many new users interact with the Filecoin chain through chain.love, and it allows you to interact with the chain without running a full Lotus node or syncing the chain yourself. We have some very ambitious goals here: we're trying to push it to handle more than 200 requests per second without chain-sync lag. And we're really looking to push Lotus to its limits and develop tooling and improvements in Lotus to reach our goals, because it will be very difficult, I think, to get there with the existing patterns. We also have a goal to run a Lotus chain backup service that produces backups that are never older than eight hours. Reba has done an amazing job of running this since Filecoin launch, and we're hoping to stand up a parallel service so he can sunset his and move on to other things. And in general, we are trying to reduce the operational overhead for Fil Infra, so we can focus on high-impact projects and not get bogged down in the manual toil that we've had in the past.

So how are we doing it? We are monitoring for high uptime and collecting data for continuous performance tuning. We're automating and improving the deployment of our Lotus core infra, reducing manual toil for our team, and creating and sharing operational tools and resources. We're also supporting a lot of devs: we're providing access to storage-provider hardware and storage for five teams in our data center, and we're leading a managed GitOps platform rollout, which is what Sid had mentioned, which is Weave. Weave will give teams autonomous control over their applications and network deployment capabilities. We're rolling that out within the next month or so, and we will give you more updates on milestones as we start to work with Weave.

This is a basic diagram showing that we run our current core infra in EKS. We run Lotus, the gateway, and the stats board there, and that's where we're focusing some of our work around deployments and automation. We collect data in Prometheus, visualize it in Grafana, and we also monitor that data and page and send alerts.

On our progress so far: we're super pumped about chain.love, because we're seeing a pretty huge increase in usage. The weekly average is currently at 45 requests per second, which is a 57% increase from 40 days ago, and we've had four nines of uptime for that same duration as well.
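As an illustration of what using the chain.love gateway looks like in practice, here is a minimal Go sketch that asks it for the current chain head over JSON-RPC. It assumes the gateway exposes the standard Lotus JSON-RPC endpoint at /rpc/v0 (this is what Lotus Lite talks to); treat the URL and setup as a sketch, not a documented guarantee.

```go
// Query the Lotus gateway for the current chain head via plain JSON-RPC.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"method":  "Filecoin.ChainHead", // standard Lotus full-node API method
		"params":  []interface{}{},
		"id":      1,
	}
	body, _ := json.Marshal(req)

	resp, err := http.Post("https://api.chain.love/rpc/v0",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// ChainHead returns a tipset; we only pull out its height here.
	var out struct {
		Result struct {
			Height int64 `json:"Height"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("current epoch:", out.Result.Height)
}
```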
We have three regions where we actually operate our clusters, with nine bootstrap nodes across those three regions, so it's easy for anyone to join the Filecoin network. And for the stats board, we have 99.8% uptime in 2022. We would like to get that to three nines, but we're super close.

What's next? We have the Filecoin chain snapshot service. It's in its planning phase, and our first milestone is to provide snapshots in S3, which is the current functionality. We're also hoping to push on HA and scalable Lotus, because we need to ensure that we can keep chain.love up and meeting demand, and we would really love to see this service grow into something super useful to the whole network. And we have a recent Lotus build artifact improvement plan; there's a link to it there, and you can read more about it, but the general idea is to increase the build success rate and the usability of the Lotus packages and images that get built as part of the CI pipeline. And that's all, thank you.

Thanks, Karin. Up next is Sentinel. Hey, everyone, I'm Hector. Sentinel is another of these things inside NetOps, and our goal is to guide the success of Protocol Labs technologies through data monitoring. We're especially focused on everything that happens on the Filecoin chain. That involves doing a bit of everything: writing software and running it on our infrastructure, doing a bunch of data warehousing, but also doing monitoring and analysis of the Filecoin chain, dashboards, and other things. Our main objective is that this chain data is complete and reliable, meaning it corresponds to what is actually on chain; that we're able to query that data really fast, as soon as it's produced by the chain; and that we can extend those queries over the whole length of the chain, which is where it starts becoming a large amount of data. Of course, it's not only for us internally; it's also for the community to build upon. That means we need to make not only the software but also the data that we extract available for reuse, so that the community can run their own analyses. And we have to keep all of this running while Filecoin keeps taking great steps and making progress at a great pace.

This is a very simplified diagram of the Filecoin data-extraction pipeline. We have Lili, which I will speak about a little more in a moment, which is the application that extracts the data from the chain. We push that into a database, in this case TimescaleDB, and we have an additional data pipeline, which is essentially storing the whole archive of data on S3 buckets as well and making it available through Athena, and so on. It looks very simple here, but there's a bit more complexity when you look inside those boxes.

One of the main applications that we write and maintain is Lili. Lili is a wrapped Lotus node that watches the chain, and on every epoch, every 30 seconds when the new tipset arrives, it extracts everything that happens in it: messages, chain economics, sectors that have been committed, deals, etc. Everything is extracted as structured data into a database. The idea is that we have this running and following the chain perpetually, but also that we're able to use it to reprocess the chain or extract data from previous moments in the chain. That happens when you were not running during a certain time, or when you introduced a bug, or when someone wants to do something else with the data, or when someone wants to process a chain that is not mainnet, and so on. This is the diagram of the architecture.
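To sketch the watch-and-extract model described here, this is a rough Go outline: a loop that takes each new tipset and must finish extraction and storage within the 30-second epoch budget. All of the types and the extract/store functions are hypothetical stand-ins, not Lili's actual API.

```go
// Skeleton of a chain watcher: one extraction pass per epoch, on a deadline.
package main

import (
	"context"
	"log"
	"time"
)

type TipSet struct{ Height int64 } // stand-in for the real chain type
type Record struct{ Kind string }  // messages, sector events, deals, ...

// extract walks the tipset's messages and state; parallelizable per task type.
func extract(ctx context.Context, ts TipSet) []Record { return nil }

// store writes the extracted rows, e.g. INSERTs into TimescaleDB.
func store(ctx context.Context, recs []Record) error { return nil }

func watch(ctx context.Context, heads <-chan TipSet) {
	for ts := range heads {
		// Each epoch gives roughly a 30-second budget; if extraction
		// overruns it, the indexer starts falling behind the chain.
		epochCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
		start := time.Now()

		recs := extract(epochCtx, ts)
		if err := store(epochCtx, recs); err != nil {
			log.Printf("epoch %d: store failed: %v", ts.Height, err)
		}
		log.Printf("epoch %d: %d records in %s", ts.Height, len(recs), time.Since(start))
		cancel()
	}
}

func main() {
	heads := make(chan TipSet)
	go func() {
		// In the real system this channel would be fed by a Lotus
		// ChainNotify subscription; here we fake three epochs.
		for h := int64(0); h < 3; h++ {
			heads <- TipSet{Height: 2_000_000 + h}
			time.Sleep(time.Second)
		}
		close(heads)
	}()
	watch(context.Background(), heads)
}
```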
I think that's the architecture we're aiming to move to when we want to scale horizontally and so on. Today it is similar, but things are more contained in a single application, a single daemon. The main thing that worries us for Lili is that with the FVM, and with the growth of the Filecoin chain, there will be way more happening on chain, which will mean way more data to extract. And that extraction always needs to happen within the 30 seconds that every epoch takes, because otherwise you're going to fall behind. Therefore, we need to find ways to parallelize extraction as much as possible, so that we can scale indefinitely and always have this fresh data available for analysis.

We've made lots of progress. We're at a very stable moment now: we're able to process the chain in time, all the data goes into the dashboards and into the database, all the data goes into the archive, and our database and our archives are available to the partners that we support, who build their own applications. For example, Starboard has made really nice public panels with graphs about Filecoin for the community. They make Twitter threads about Filecoin data and so on, and this is all powered by the work that we do in Sentinel. And Steph is going to talk about this one.

Hi, I'm Steph. I'm part of the Sentinel team, and I'm mostly focused on the data-ingestion and data-analysis side. We want to provide data to everybody in the PL network so that you can make more informed and smarter decisions in your everyday work. Our goal is to build a data platform that enables anyone in the PL network to perform their own analysis, so that you wouldn't have to, let's say, come to me and ask for a query; ideally, you would be able to do that yourself. That's what we're working towards.

How are we currently doing it? We're creating a source of truth with pipelines that gather data from varied sources. So it's not only chain data, but also data from our Airtable CRM, Twitter mentions, and data from Elasticsearch for metrics and events. We also maintain data warehouses for interested groups, as well as BI systems, so that people can use their self-service tool of choice, whether that's Periscope or Looker or Observable. Currently there are already a lot of dashboards available in Periscope, and the data warehouses are already queryable given the correct credentials. If you want access, you can just DM me and we can set something up for you.

What's next? We need to automate a lot of existing manual processes. For example, our archiving operation is currently triggered by a person: somebody SSHes into an EC2 instance, runs a bunch of commands, and then the archiving process is performed. Ideally this should be a pipeline triggered by a cron job instead of a person having to do it (there's a rough sketch of that idea at the end of this transcript). We would also like to deliver well-defined self-service data analysis, so that hopefully by this year people can ask questions of our data warehouse and get answers directly.

Awesome. Thanks a lot, NetOps, for the deep dive. Thanks to everyone who presented and to all of you for tuning in this week. We'll be back again in four weeks. Hope everyone has a wonderful rest of the day.
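One last sketch, referenced from Steph's automation point above: the manual archiving run could be replaced by a small scheduled job. This is a minimal outline, assuming a hypothetical run-archive.sh that wraps the real commands; the interval is a placeholder, not the team's actual cadence.

```go
// Run the archiving job on a fixed schedule instead of by hand.
package main

import (
	"log"
	"os/exec"
	"time"
)

// archiveRun stands in for the commands a person currently runs by hand on
// the EC2 instance; run-archive.sh is a hypothetical wrapper script.
func archiveRun() error {
	out, err := exec.Command("/usr/local/bin/run-archive.sh").CombinedOutput()
	log.Printf("archive output: %s", out)
	return err
}

func main() {
	// Placeholder interval; the real cadence would match the freshness target.
	ticker := time.NewTicker(24 * time.Hour)
	defer ticker.Stop()
	for {
		if err := archiveRun(); err != nil {
			log.Printf("archive run failed: %v", err) // would alert on-call in production
		}
		<-ticker.C
	}
}
```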