Welcome everyone to our August PL EngRes All Hands. We're going to spend our first main chunk of time on working group updates, and then we have a deep dive into Thunderdome with Tommy and Ian from the NetOps team. So excited to learn more about that. As a reminder, the EngRes working group is part of the Protocol Labs Network, where we drive breakthroughs in computing technology to push humanity forward. We're constantly building lots of amazing projects and contributing to these open source ecosystems, especially around IPFS, libp2p, and Filecoin, but there are also a ton of great updates from some of the other projects we contribute to in this month's All Hands as well. Our mission is to scale and unlock new opportunities for IPFS, Filecoin, and libp2p. We do that in a number of ways: especially by participating closely with a whole network of developers and contributors across the world, driving breakthroughs in protocol utility and capability, and also just doing our work in the open and accessibly, so many others can learn from the work we're doing and can collaborate with us effectively.

These are the different teams that make up the PL EngRes working group, and all of them are growing and hiring. So if you want to come join us, we are hiring for a number of open roles, especially engineering managers, TPMs, product managers, and infra engineers, and tons of others as well. Please come join us if the projects we're working on sound cool and interesting and you'd like to help make them a reality.

Our strategy for this year stays the same: focus first and foremost on helping ramp amazing new developers and contributors into these ecosystems and doing our work in highly collaborative ways; focus on robust storage and retrieval across IPFS and Filecoin, helping people use these decentralized storage networks and access their data in highly retrievable, useful ways. And then we have a number of really exciting breakthroughs in the works across a number of different groups. We have some exciting testnets coming up in Q4 and a lot of other projects to be shared; we'll talk more about that in a second. And of course, what underpins all the work we do is actually helping these open source distributed networks operate and run smoothly: making sure that new releases are being put out for the libraries we maintain, and, as we improve these tools, reducing operational debt and tech debt and making them better for everyone to use.

Here are our objectives for Q3. We're doing a really great job on helping more people contribute to this space, which is great. We've grown ourselves as a working group, and we're also helping many other teams grow, collaborate, and harness each other's output. On our goal around delivering robust, accessible storage and retrieval, we're kind of orange. We are not currently on track to hit our goal of five petabytes of Fil+ data being onboarded onto the network per day; I think we're at something like 1.2 petabytes per day. And while the retrieval markets work is doing a ton of retrievals, they're mostly mirrored traffic. So we really need to get that out into the open and have it be real traffic that is hitting storage providers, actually making sure people are using all of the data that's stored in Filecoin. Our goal around breakthroughs has gone from orange to yellow, which is really good.
We now have a live FVM testnet that people are running nodes on. I think there are plans to have the EVM actor deployed to it shortly, if it isn't already, which is super exciting. So go check out Wallaby if you're interested in the Filecoin Virtual Machine. And then there's an exciting launch that will be spotlighted shortly around timelock encryption, which is one of the exciting breakthroughs we've been pushing on. There's a ton more that could be pushed on and into real-world adoption this quarter or next quarter, so there's additional work to do here to get this fully green, but it's on a good trajectory, which is really nice. And then of course, on operations, we're doing a really great job: the NetOps team has made some really good improvements to gateway retrieval speeds, which is fantastic, plus good uptimes and good proactive issue identification and mitigation going into keeping these networks running super smoothly, and some really awesome new protocol implementations, especially in IPFS land, coming out of IPFS Thing. So all of this is going pretty well, some areas to continue working on, but making progress. And I'll hand it off to Adin for IPFS.

Yeah, so IPFS: we're trying to make the web work peer-to-peer, doing it with content addressing. Okay, so on the indicators we're tracking in terms of performance and number of nodes, things are sort of continuing as they were: nodes are growing, speeds are staying similar. We've been tracking the ongoing amount of PRs as well, and we've added a new metric for tracking the spec PRs separately from the code ones. Code ones have grown a little bit; a large chunk of the extra PRs are a function of Unified CI, which you'll be hearing a little bit more about later this call, and also just catching up on some triage that stacked up post the IPFS Thing event and some of the things coming from that.

So what's been going on? There is a Kubo 0.15 release candidate. The big items in there are support for Blake3, which has been pushed by Claudia, a former intern, so this has been really great to see. There have been some plugins added by Gus for configuring Kubo sort of however you need to, so you can really get into the guts if you want to without having to fork the code base. And we have some good stuff to separate out the Bitswap client and server pieces, to make it easier to iterate on them going forward as we see more use and more implementations. There's been a lot of work coming out of the IPFS Thing event on projects like IPVM and the WNFS working group and things like that that we've been engaging in, and we're excited to see how the community is propelling this forward and how we can be of use. We have some work going on with Reframe to try and make it more usable and accessible to folks, as well as to integrate it with the IPFS.io gateway, so we don't have to go through the Hydras in order to hit the indexers. Related to the Kubo renaming stuff, we are separating out the Kubo RPC client written in JavaScript from the rest of the JavaScript stack. And Steve will be talking about some events planned for Q4. Over to the libp2p team.

Hey folks, I'm Marco from the libp2p team. libp2p is the networking library for peer-to-peer applications and Web3.
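To make that concrete: spinning up a node takes only a few lines of Go. Here's a minimal sketch using the post-monorepo import paths; the explicit Ed25519 key generation is only there to show where identity plugs in, since (as noted below) it's now the default anyway.

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/crypto"
)

func main() {
	// Generate an Ed25519 identity key explicitly, just to show where
	// it plugs in (go-libp2p now defaults to Ed25519 anyway).
	priv, _, err := crypto.GenerateKeyPair(crypto.Ed25519, -1)
	if err != nil {
		panic(err)
	}

	// libp2p.New fills in sane defaults: transports, stream muxers, security.
	h, err := libp2p.New(libp2p.Identity(priv))
	if err != nil {
		panic(err)
	}
	defer h.Close()

	fmt.Println("peer ID:     ", h.ID())
	fmt.Println("listening on:", h.Addrs())
}
```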
Some highlights from last time: we had our community call this week, with over 11 folks joining. We're planning a libp2p day during our PLN lab week, across implementations. We have a doc about DoS mitigation on the libp2p docs page, and work on WebRTC continues. And now for each implementation specifically: we released a new version of go-libp2p, which makes it a monorepo — so we've merged go-libp2p-core into the go-libp2p repo. Default keys are now Ed25519, and go-libp2p does cross-version testing now via Testground. We're also releasing a new version of rust-libp2p, which simplifies NetworkBehaviour events. And we have progress with nim-libp2p on the AutoNAT server protocol, and continuing work on the discovery interface and rendezvous. That's it for libp2p.

Awesome. I think we have a video update from Rod for IPLD, which is the content addressing data model of Web3.

IPLD highlights — we have a few for you today. We've got a go-ipld-prime release, 0.18.0. This is mainly bindnode focused, with some custom type converters and a registry to cut down on boilerplate. If you're using bindnode at all, grab this; great updates. In JavaScript, there's an IPLD schema package that is a combination of some existing packages that have been updated and supercharged for IPLD schema parsing and printing. It'll make object validators and converters for your schemas, so you can take a schema and an object, validate and convert according to that, and then do the reverse as well. Heads up: there was a go-car security advisory last month. If you're consuming CARs from untrusted sources, then you should update your dependency. There are some changes happening in the dag-pb spec to note some obscure edge cases with regard to unsorted links; head there if you're interested in the weeds. Coming soon: there are discussions underway around a possible CID v2. There is a developing spec for IPLD and gateways happening over in the specs repo of IPFS. IPLD and Wasm: we have a tracking issue noting all the activity happening in Wasm land as it relates to IPLD; we'll try and collect it there. There will be a new release of js-multiformats soon that introduces a CID interface for TypeScript, rather than just the concrete type. And there's some ongoing work with regard to selectors and traversals, specifically for go-car, but likely also with regard to just the traversals for common cases. And a reminder: there is a sync every two weeks for anyone interested in IPLD — join via Zoom or live on YouTube, link in GitHub. And that's it.

Awesome, thank you so much, Rod. Over to Peter for IPDX.

Cool, yeah, I'll try to go only once. So for the developer experience team, we just released Unified CI this week. The big thing coming out there is the Go 1.19 upgrade; more on that a bit later in the spotlight section. In Testground, there's quite a lot happening: Rust cross-version testing is ready and implemented, and we also designed the feature of remote runners — there's a link to the design there if you're interested. And on the GitHub management side, the videos from IPFS Thing are out, so there is a great introductory video to GitHub management on YouTube. What's next? User-defined config variants are coming to GitHub management later this month. And in Testground land, we are preparing to release Rust and Go cross-language testing, and we are going to be prioritizing stability work around Testground going forward. As always, we are hosting office hours every Monday, so feel free to drop in.
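About that Go 1.19 upgrade: the practical win is that dropping Go 1.17 from the support matrix means participating repos can finally use generics (which landed in Go 1.18). A tiny illustration of the kind of helper that unblocks:

```go
package main

import "fmt"

// Map is the sort of small generic helper that couldn't land while
// Go 1.17 was still in the support matrix (generics need Go 1.18+).
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	fmt.Println(Map([]int{1, 2, 3}, func(i int) string {
		return fmt.Sprintf("#%d", i)
	}))
}
```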
Awesome. And over to Jennifer for Filecoin.

Filecoin — we are a crypto-powered distributed storage network for humanity's most important information. Here are some of our metrics. On network total capacity, the raw byte power is still around 16.89 EiB, and the network growth is slowing down a little bit, but I wouldn't worry about that too much, because we have plenty of space for a lot of good information to be stored. On that note, the data onboarding is not slowing down at all. As you can see, the whole trend is going up. We are now at 154 PiB of useful data on the Filecoin network. The dataset is still on the screen as you can see, but I do want to call out the daily max in the past month: there was one day we onboarded 1.52 PiB of data, which is quite amazing in my opinion.

For the Filecoin highlights, we're going to go a little different style this time. So here is the high-level core protocol improvement roadmap. We are around Q3. After shipping FVM M1, we're working on the next network upgrade. It's about programmability, with some refactoring of the built-in storage actors. Then the FVM team is heads-down working on the FEVM, and a testnet is coming up — I will share more details later. There's a lot of effort on data onboarding, preservation, and all those things, as you can see. In November, we're having the nv17 upgrade, and there are two of the most hotly debated FIPs being discussed right now, so go take a look. The first one, FIP-0036, is about adding a new multiplier based on sector duration, so that we can have longer storage commitments to the network. We also have FIP-0045, which decouples Filecoin Plus from the built-in market actor, so that more user-programmable markets can be built with the FEVM using verified DataCap. We are also introducing our very first token contract — we're making DataCap a token on the Filecoin network, which is super exciting.

On the FEVM: if you haven't seen the updated timeline, we're targeting a network upgrade for Filecoin around February 2023. Right now, we have an amazing product and developer experience working group being formed for the FEVM; they're currently doing an onsite in Denmark. We're trying to enable a lot of developers to build on top of Filecoin. We do have a testnet called Wallaby live already, so you can deploy Wasm and Solidity contracts on top of that network today. Join the public Slack channels on Filecoin if you are interested in that. We are also targeting launching an incentivized testnet to bring more developers into the network in Q4, so that as soon as the FEVM goes live, a lot of use cases can be unlocked on Filecoin already. We will be hearing more from the Bedrock team, but there are folks working on faster onboarding, on retrieval, and on maybe bringing some IPFS interop to Filecoin by being able to serve Bitswap, which is super exciting. And the Lotus team, led by Magik, is collaborating with the Sealing-as-a-Service working group: we are trying to work with Sealing-as-a-Service providers to lock down the design of how to enable lotus-miner to work with Sealing-as-a-Service, so that more people can join the Filecoin network and provide storage. If you have any advice, go join the discussion.

That's pretty much it. Awesome, lots of great things upcoming. Now for a couple of selected team updates, starting with NetOps. Jesse.

Good day from NetOps. Our first big delivery is on gateway speed: going from 3 seconds to 1.3 seconds is a huge improvement — you can see it in the chart.
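For context on that number: time to first byte is typically measured from request start to the first response byte. A minimal Go sketch using net/http/httptrace — the gateway URL and CID here are just placeholders for illustration, not NetOps's actual measurement tooling:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	// Placeholder URL: the identity CID resolves without fetching any blocks.
	req, err := http.NewRequest("GET", "https://ipfs.io/ipfs/bafkqaaa", nil)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	trace := &httptrace.ClientTrace{
		// Fires when the first byte of the response arrives.
		GotFirstResponseByte: func() {
			fmt.Println("TTFB:", time.Since(start))
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```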
We keep improving our infrastructure to make sure people are getting their data as fast as possible — a great improvement and achievement there. Our IPFS Cluster is still getting a lot of pins, a lot of uploads. That is a good thing, but we are also looking for anyone in the community willing to run IPFS Cluster with us. We will have very good guidance and an end-to-end, step-by-step document for you, so we can run the whole network better. On weekly IPFS gateway requests, you can see the number dipped a little bit last week. We're still trying to figure out what's going on there, because the number is back up this week. We suspect it's because Infura, a company who also ran a gateway, decommissioned it, hoping the community will start running gateways to make gateways more decentralized. I think this is a good move, but the short-term effect is that we may be getting more requests centralized onto our gateway, or some of the traffic redirected from that Infura endpoint, so we're digging into it a little bit. But we see the data already coming back up again. Also, I think this is a very good opportunity to make sure the community is aware: what we're trying to do here is not to get all the requests to go to our gateway. It is about helping the community, and if anyone wants to run a gateway, please reach out to us. We will help you run a very good gateway and provide the best service for the whole network. It's a similar situation for unique gateway users per week: we still have a pretty good number. It dipped a little bit last week because of the same traffic situation, but it should come back next week. And that's pretty much it from the KPI point of view for NetOps.

Awesome. And Corey for Fil Infra updates.

Hello. We've been spending a lot of effort getting our own infrastructure onto GitOps. GitOps is a force multiplier for us that helps us do things a lot faster. The update here is that bootstrappers and all the critical infrastructure are running using this methodology. So if you have bootstrapped a Lotus node lately, you have used our new installations. We are starting to work on a more generic cluster that other teams at Protocol Labs and in the PLN can utilize. Included in that are applications such as the IPFS operator. Those are available, and you can make use of them using our GitOps infrastructure. The chain snapshot service has had a soft launch. If you're interested in that, please go have a look at the landing page, api.chain.love. This is a service that we run that has a lightweight Lotus API. We had no downtime since the last update and about 1,700 unique users. This is up slightly, but it's roughly the same as the last update, so going steady there. If you are a Lotus user, you might have noticed that there have been some improvements in the Homebrew installation. This is all thanks to Ian Davis, who's with Mycelio; he's done some great work to make sure that the macOS universal binaries work. Opportunities — what's coming next? We're continuing to onboard more things onto our GitOps infrastructure; there's a list there of the projects being focused on currently. We are experimenting with a new front door concept: in the past, if you had any questions about infrastructure and didn't know where to go, now you can talk to us in Slack. We have a more thorough, more well-thought-out process for handling requests. I'll end with that. If you have a project and you need some place to run it, please let us know. And if you would like to help us test the snapshots, we are very interested in making that happen, so reach out to us. Thanks.
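To give a flavor of that lightweight Lotus API: Lotus speaks JSON-RPC over HTTP, so querying the chain head is a single POST. A sketch — Filecoin.ChainHead is a standard Lotus method, but the /rpc/v0 path on api.chain.love is an assumption based on Lotus conventions, so check the landing page for the actual endpoint:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Filecoin.ChainHead is a standard Lotus JSON-RPC method.
	reqBody := `{"jsonrpc":"2.0","method":"Filecoin.ChainHead","params":[],"id":1}`
	resp, err := http.Post("https://api.chain.love/rpc/v0", "application/json", strings.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // current head tipset as JSON
}
```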
Awesome, Corey. Is there a specific Slack channel that people can go to for that front door? Right now, just go to #netops-issues in Filecoin Slack. Cool, #netops-issues, you heard it here. On to the Bedrock update, Jacob.

Hello. Yeah, so Bedrock is heavily focused on improving storage onboarding support, and really focusing over the next month on reliable retrievals. On the storage onboarding side of things, one of the things we shipped recently in the 1.4 RC of Boost is support for calculating CommP for deals remotely, on the sealing workers. This is a big improvement in terms of the onboarding rate, removing a bottleneck on the Boost node in the markets process, and it enables storage providers to scale their data onboarding pipeline. We've already had a couple of SPs who benefited from this through early adoption, so nice wins there. We're also working on rolling out HTTP full-piece retrieval to storage providers in the Evergreen program, to make it much easier for them to download full pieces for replication of all this Slingshot data that's been on the network since launch — trying to make that a lot faster. On the retrieval side of things, we've had some issues on cid.contact with index ingestion, because we've been onboarding so much data to the network, so we're working on scaling that up. We've rolled out indexstar, a network splitter which is enabling us to spread traffic out across multiple index providers. We have six now — six people running indexers on the network, including us — which is really, really great for scaling that up. And then, as Jennifer mentioned earlier, Bitswap in Boost is one of the big things that we're looking at landing by FIL Lisbon. What this is going to enable us to do is have IPFS gateways retrieving directly from storage providers, so this will be a really big win for interoperability on the network. We're targeting having a good chunk of that work done in the next month, with a lot of testing after that, so hopefully we'll have it all in place by the end of October.

Awesome, love the graphs. Jennifer for the new docs working group.

For the longest time, we have had a docs team serving all of the EngRes teams, but nobody really knew who we were. So we are revamping: the new docs team is kind of reborn. Our goal is to make it easy for all users and developers to learn, build, and use Filecoin, IPFS, and libp2p, and also any other stack PL is building. And we obviously want to become very active participants within all the communities we're building for. We have more team members joining: Johnny Matthews, who you all know, and we have Danny focused on libp2p and on IPFS. John M has been helping keep the docs running efficiently. And we are collaborating very closely with the Lotus TSC on documentation needs. We are making a couple of changes within our team, dividing our focus into two tracks. On the EngRes side of the docs team, we are going to focus on the engineering product.
So basically, we want to create the source of truth for the tech we're building by creating technical documentation: user guides, developer guides, tutorials, et cetera. However, we also want to focus on the users — what does a user want to know about our tech so that they can use and build on top of it? We will be working very closely with Outercore and developer relations, also the Filecoin TSCs and the FVM DX team, to bring that effort up. We are still working on our roadmap; our team is still quite new. But the first priority is documenting everything — that's how we do docs. So basically, if you go to our upcoming Notion page and you're curious about how to spin up your docs, you can find some resources over there. We want to be user-first, so we're adding a lot of analytics to our docs sites. All these pages are already public; feel free to go explore. And we're considering adding a comment section to our documentation, so we know whether the content we're creating is what our users are looking for, and we can improve our documentation in the future. We're also working on FVM developer docs with the FVM team; we're keeping track of the new Kubo release to see if any user documentation is needed; we're doing a libp2p docs implementation audit and concepts audit; and hopefully we'll revamp the whole docs site later this year. There's more coming. Where to find us: we do have a PL docs channel in Slack if you want to join and hang out with us — just let me know. Feedback or issues are always welcome on the docs site GitHub repositories, and we have public channels for the docs effort in Filecoin Slack for all the stacks. And I will be sharing the Notion page, maybe in our next All Hands.

Awesome, thank you for the update. Now over to a couple of spotlights on exciting work shipped and upcoming, starting with timelock encryption. Patrick.

Hello, everybody. I'm Patrick from the drand team. For those of you who don't know, drand is a network for generating publicly verifiable, unpredictable, unbiased random numbers. We recently built a practical implementation of timelock encryption that does not use proof of work. Timelock encryption is the ability to encrypt something now such that it cannot be decrypted until some point in the future. It's got some cool use cases, like a dead man's switch, MEV prevention in blockchain systems, anonymous auctions, and many, many more. Concrete things we shipped recently are a timelock library in Go and a timelock library in TypeScript. We also developed a demo website for encrypting vulnerability reports, called Time Vault. All of these were released and discussed at DEF CON in Las Vegas two weeks ago. Additionally, we've written a Notion page about our scheme for people who want to get into the weeds, as Rod said earlier. We're also releasing a blog post in the next two weeks that's going to detail how it all works and hopefully get the community involved a bit more. Coming up, we're going to make a Rust-compatible library for timelock as well, so watch out for that. Feel free to join us on our Slack or on our Notion page and give us feedback. Thanks.

Super cool. Go use it if you need it — if this scratches an itch for a problem you're trying to solve, come get involved in the Slack channel and start actually using this in prime time.
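For the curious, the core trick is mapping a wall-clock unlock time onto a future drand round and encrypting to that round; decryption only becomes possible once the network publishes that round's signature. The sketch below is purely illustrative — the types and helpers are hypothetical, not the real tlock API (see the drand team's Go and TypeScript libraries for that):

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for illustration only.
type Network struct {
	Period      time.Duration // how often a new randomness round is published
	LatestRound uint64        // the most recently published round
}

// roundAt maps a wall-clock unlock time to the first round that will be
// published at or after it; the ciphertext is locked to that round.
func roundAt(n Network, now, unlock time.Time) uint64 {
	if !unlock.After(now) {
		return n.LatestRound
	}
	return n.LatestRound + uint64(unlock.Sub(now)/n.Period)
}

func main() {
	n := Network{Period: 30 * time.Second, LatestRound: 1_000_000}
	round := roundAt(n, time.Now(), time.Now().Add(24*time.Hour))
	fmt.Printf("encrypt to round %d; decryptable only once that round's signature is published\n", round)
	// encryptToRound(plaintext, round) would then use identity-based
	// encryption with the round number as the identity (hypothetical helper).
}
```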
Over to Peter for Unified CI.

Oh yeah, me again. Hi. So yeah, the main thing we did this week on the developer experience side was to release a new version of Unified CI, mainly targeted at Go users here. But maybe first, let's start with what Unified CI is. It's a set of GitHub Actions workflows that define how to test and release our code — currently supported for Go and JS — that we distribute to participating repositories. If you want to participate, it's as easy as creating a single PR to the protocol/.github repo that adds the name of the repo you're interested in. Back to the release: the main thing that's new is Go 1.19 support, and we dropped Go 1.17 support from our Go workflows, which means that all participating repositories are free to use generics now, which is the most exciting thing. The other cool thing that's new in Unified CI land is that from now on, whenever we detect a new repository that uses Go in our organizations, we're going to automatically suggest adding Unified CI there. One more thing: if you're updating your repository with Unified CI and it doesn't get merged automatically, it might be for various reasons — most of the time, it comes down to upgrading some dependencies. But if you're stuck and want some help getting the upgrade through, please reach out to us at #ipdx in Filecoin Slack. That's all I had. Thank you.

Awesome. Helping everyone merge and check their PRs quickly — super useful, and makes for a great developer experience. Over to Steve for IPFS gatherings.

Yeah. So after a hiatus, it's time to re-energize the IPFS community with IPFS Camp. It is happening in Portugal in late October, around the whole lab week time. This is building on the momentum of a series of great IPFS-related milestones and announcements in 2022: the shift in focus from our reference implementations to having many implementations, the relaunch of the specs process, more developer and user tooling, the partnership to get IPFS into space, major commitments to IPFS funding and IPFS hiring, and of course the successful event back in July with IPFS Thing. So there's been a lot there, and we want IPFS Camp 2022 to carry that on and really set us up for more growth and adoption in 2023. There is a standing website right now; more details will be posted there soon. We look forward to having many of you present at it. And if you're gung-ho for camp and ready to prepare, all the videos from IPFS Thing are now live. You can find them on the YouTube channel, or if you want links to all the playlists, go to the recap blog post on the IPFS blog and you'll find them there. So anyway, looking forward to seeing many of you at this event soon.

Good times. Woot-woot — definitely watch the videos, they're fantastic. And now on to our deep dive on Thunderdome. Take it away, Tommy.

Thanks very much. So yeah, we called the project Thunderdome after the 1985 film. I won't talk too much about the film; I just have to say that the shoulder pads are probably an example of why you shouldn't project current growth rates too far into the future. We're currently targeting the IPFS gateway use case. We do this by spinning up so-called targets and firing traffic at them. We make sure it's exactly the same traffic, and we gather as much information as we can when we do that. So we get metrics via Prometheus; with the newly enabled tracing in Kubo, we scrape all that up if it's there; we grab logs; and we're pushing everything to hosted infrastructure, so we don't have to do any work to keep up with the number of experiments.
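Since both the targets and the load generator expose Prometheus metrics: for readers unfamiliar with the pattern, instrumenting a Go service for scraping looks roughly like this (a generic example, not Thunderdome's actual code):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A counter incremented on every request served; promauto registers it
// with the default registry so it shows up on /metrics automatically.
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "demo_requests_total",
	Help: "Total requests served by this demo target.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("ok"))
	})
	// Prometheus scrapes this endpoint in the standard text format.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```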
Like I said, there are no limits on how many experiments we have — a couple of lines of config define them. Next slide, please. I should say it's me and Ian in the production engineering team who have been working on this over the last couple of weeks.

So yeah, the first thing we did was make a tool called Dealgood, named after a character in the film who organizes fights. I tried to resist talking about the film, but yeah, they make people fight in a hemisphere in the middle of the desert in a post-apocalyptic future. Make of that what you will. So yeah: multiple targets, same load. Like I said, we run it headless or with a terminal UI, so it's useful for local dev, actually. It's also got tracing enabled, and we do trace propagation, so you can take a request and trace it all the way through from the point of view of the client, as instrumented as the Go HTTP lib is. We trace all that, and then we can correlate it with Kubo's tracing. It also exports Prometheus metrics, and we can play back production load from a log stream that we take from the production gateways — which is minimally service-impacting, because it's just another nginx log file being written — or randomly from a canned list of URIs. And there's the terminal UI there; it looks delightful as it moves. Ian's got an asciinema demo that he published a week or two ago.

Yeah, so that's what it looks like to define an experiment at the moment. We're limited to — or not limited, rather: give us N Docker images and whatever environment variables you want to set with them, and we'll run the experiment for you. On the left there is all of the tracing stuff. That's not any work we've done; that's just what Grafana Tempo looks like, and that's all the default tracing in go-ipfs that's there now. I think there's a big seam of work in Kubo to instrument more and more things and become more and more useful. One of our first experiments is actually measuring how much enabling tracing — whatever fraction of tracing you enable — impacts performance. On the right there is a still from the demo video I'm about to play, and you can see the dashboard is automatically generated. So: dashboards for free. And then you've got a one-minute-45-second video to play.

Yes, that should start building things. It makes an ECS service for each of the targets, plus Dealgood. So we should start seeing stuff appear here. And yeah, there we do: peering-demo-dealgood, peering-demo-without, peering-demo-with. So we should be able to go to our automated dashboard — we go to the peering demo. It might be a little while; it takes a minute or two for the containers to start. We're using Fargate at the moment, so it's got to assign a network interface in the VPC and that kind of stuff. But very soon we should start seeing some things. Yeah, so there's a little bit coming through there — that's the first data reported, and it should refresh every five seconds. I'll full-screen this now, because there's nothing else to see really. Starting to get some data through already. Time to first byte is kind of the most critical metric in terms of user perception of the service, so we've centered it in this default dashboard. And we're seeing that "with" — with peering — is about twice as fast in the initial startup here; that's zero to experiment end, in time units.
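That with/without comparison is the whole pattern in miniature: fire the identical request at each target and compare time to first byte. A toy sketch of the idea — the target hostnames are hypothetical, and the real Dealgood replays full nginx log streams concurrently rather than single requests:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fireAt sends one request to a target and reports an approximate time
// to first byte (headers received implies the first bytes have arrived).
func fireAt(base, path string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(base + path)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return time.Since(start), nil
}

func main() {
	// Hypothetical target names for a with/without peering experiment.
	targets := []string{"http://peering-demo-with:8080", "http://peering-demo-without:8080"}
	for _, t := range targets {
		ttfb, err := fireAt(t, "/ipfs/bafkqaaa") // identity CID: a cheap health-check fetch
		if err != nil {
			fmt.Println(t, "error:", err)
			continue
		}
		fmt.Println(t, "TTFB:", ttfb)
	}
}
```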
We'll develop this dashboard further with a whole bunch of default metrics and stuff. My wife was saying that she wishes she could fast-forward me and mute me in real life. So yeah, we've already got a bunch of experiments on the backlog, and we want your experiments. What's interesting to you? What battles would you like to create? So yeah, send us your suggestions — we want to get as many experiments going as we can. The thing's only as useful as the experiments we run. Mario has asked the question: where does it get the traffic from? It's replaying a trace log — you approved the PR a little while ago, mate. So, at various levels of soon: coming soon, more production-like targets. At the moment, we're saying a Docker image plus environment variables constitutes a target, but of course other things affect how well the infrastructure runs: what kind of disks you're giving them, what file system you're choosing, whether you're RAIDing it. We want to finesse the UX so it's an absolute delight. We want to automatically, if you've got a performance-enhancing branch, just track it — keep deploying the branch, test for regressions, et cetera. RCs should automatically be tested. I shouldn't read the comments; Mario is making me laugh. A continuous Kubo / js-ipfs / Iroh shootout. Bring your own hardware: if you don't like our hardware options, run it yourself, point it at something you're interested in, run our sidecars, get graphs automatically. And then one thing I'm mega, mega excited about is the idea of infrastructure experiments. Which load balancing strategies should we adopt? What kind of machine sizes might we use? Can we compare this infrastructure provider with that other one? What if we used a shared block store? What if all the nodes in a region had a peer store in Redis? Things above the level of individual instances of our software that do impact its performance. Because Kubo — and anything implementing the IPFS gateway spec, actually — is deliberately designed to interact well with load balancers, caches, that kind of thing. So we want to be able to test those as well, because the aggregate of all those things is our performance. So that's it. That's all from me. Thanks for your indulgence.

That was a pleasure. Awesome — then we'll end our meeting there, and everyone have a wonderful rest of your Thursday.