On the screen, the title of the talk is "Trusted CDNs Without Gatekeepers". So I'll be talking about Web3, blockchain. No, I'm not. There's not going to be any mention of blockchain.

So, "Trusted CDNs Without Gatekeepers" — who am I? My name is Michał Rysiek Woźniak. Over the years, I've been on all the different sides of tech and the internet. I've been an activist. I've been on the policy side somewhat. I've handled tech support and information security for journalists and other at-risk individuals, and I've managed infrastructure. And now I'm dabbling a little bit in tech journalism myself. But for this specific talk, what is probably most important is that I've been chief information security officer and head of infrastructure at OCCRP. Anybody remember OCCRP? Does "Panama Papers" ring a bell? Yeah, there we go. So, not surprisingly, websites hosted by OCCRP tended to have very little traffic and then suddenly very, very much traffic. That was an interesting thing that got me thinking: how can we make websites resilient? How can we make websites stay up and running without relying on things like Cloudflare? Because, fuck Cloudflare.

I gave a talk two years ago, at HOPE 2020, called "Censorship Is No Longer Interpreted as Damage". Are you looking at the screens? Is it visible? OK — there's going to be information on the slides, but I will also be trying to say it all out loud. In that talk, I covered the first part, let's say, of making websites resilient — or my idea of making websites resilient. It focused on a sane website setup: don't have a WordPress with 50 plugins, because that will make you cry, it will make your server cry, and it will make your visitors cry. Caching and microcaching, static site generators, and coping with downtime when it happens. I would implore you to watch that talk at some point, but it's not necessary to have watched it to follow this one. Some of the things from that talk I will repeat a little bit here, because they are relevant.

So: anybody here run a website that every now and then gets a bit of traffic that makes the server go "oof"? My takeaway from having dealt with that kind of downtime is: whatever it is, it is almost certainly not a DDoS. You are almost certainly not dealing with a DDoS. Your website is down not because of a DDoS; it is almost certainly down because organic traffic is hitting your dynamic CMS. Is the term "dynamic CMS" understandable? I assume it is. It's things like WordPress: the data is in a database somewhere, there's a bunch of code, and for every request, roughly, the data gets pulled from the database, HTML gets rendered and pushed to the user. That's a lot of work. It means you would be amazed how few requests per second a standard WordPress site can handle, unless you're doing some fancy stuff like microcaching.

But then, of course, there are the plugins for CMSes. I'm focusing on WordPress because it's the most popular one, and also because it gave me the most grief. Headlines like "over 90 WordPress themes and plugins backdoored in supply chain attack". Yes, that's what happens: you have a plugin that somebody wrote five years ago and then stopped updating, and there's a bug. Or somebody sold the plugin to some other company that said, hey, we're going to buy and support your plugin, here's some money, thank you.
And now they have a supply chain attack against any website using that plugin. Great. But thankfully — you would not believe — there are plugins for fixing compromised WordPress websites. Don't. This is not the solution. Just make regular, tested backups.

Another way of dealing with website downtime, mainly when it's related to censorship — when some bad people on the internet somewhere decide: you know what, we don't like your website, and nobody should see your website — is Tor, I2P, these kinds of tools that give people like us access to websites that other people don't want us to have access to. These are amazing tools. I love them dearly, and I use them all the time. The problem with them, from the perspective of a person running a popular-ish news website, is that it is unreasonable to expect the population of a whole country to switch to Tor Browser just to keep seeing your website. That's not going to happen. There will be a bunch of people who do, and I strongly, strongly implore you to have an onion hidden service for your website. That's always a good idea. Just do it; it's really not that much work. But making sure your website is available in places where it might be censored is not a Tor-shaped problem. A Tor-shaped problem is "I want access to a website that is censored". "I want my website to be accessible to everyone for whom it is censored" is the other side of that coin. So, your website visitors will not switch to those tools en masse. And the same goes for Brave, changing DNS settings, all of this. We've seen situations where changing DNS settings did happen en masse, but that is not something you can expect and rely on if you're running a website that you want to stay up in places like, I don't know, Azerbaijan.

So then the standard response is: well, you can always use Cloudflare, Cloudflare is great. Apparently, about 19% of all websites think this is a good solution — and just by virtue of that number, it tells me this is not a good solution. Centralizing things, letting almost 20% of all websites be controlled and man-in-the-middled by a single company — I don't think that's a great solution. I'm sure Cloudflare's service is great. But it strikes me as a little bit odd, especially since those companies will drop your site like a hot potato if it's inconvenient for them to continue hosting it. As the CEO of Cloudflare said right after they took down the Daily Stormer: "a small number of companies will largely determine what can and cannot be online". And as much as I am on the polar opposite end of the political spectrum from the Daily Stormer, I also find this disturbing, right? This is too much power for a private company: we have almost 20% of the websites of the whole internet, and we can just decide which ones stay up and which go down. That's great, right?

So we will have — hands up, who is doing it with me? Boom, these guys — a workshop in two hours, at 4 PM, in Envelope. Not a talk, sorry, a workshop: on making sense of social media, freedom of speech, and fascists. I do have opinions on the Daily Stormer situation, but this talk is not about that; the workshop will be more about that. So if you want to have this conversation, let's have it there. Today, 4 PM, Envelope.
But the problem with centralizing, putting all of our eggs in those few, few baskets — Amazon AWS, Cloudflare, Fastly, Akamai, these guys — is not just the decisions they might make. It's not just "oh, we don't like this website, we're going to take you down", because they're too big an operation to really care about most of those websites. The bigger problem is that all of those companies tend to go down on a regular basis. Have you noticed? Fastly: outage, June 8th, 2021. Akamai: major outage due to a DNS bug — it's never DNS; it cannot be DNS; it was DNS — July 22nd last year. Then Amazon Web Services: third outage in a month, December 22nd last year. And Cloudflare: outage, June 21st this year.

If you happen to use all four of them — and I have seen websites that use all four of them, for some inexplicable reason, because it's just easier to hotlink a piece of JavaScript from Fastly, and you have some of your content on Akamai, and your website is behind Cloudflare, and then there's something on CloudFront because somebody put it there, and what are you going to do — now all four of those companies' infrastructure has to stay up for your website to stay up. And over the last year, they went down more than four times. So you've quadrupled the chance that your website will be down, and there's nothing you can really do once an outage like this hits, because you don't know how long it will last or how much energy you can invest in it. Maybe it will be back up in five minutes, or in three hours, or it's a Facebook-shaped screw-up — and how long was that, eight hours of global outage? That was fascinating.

Anyway, the point is: people make mistakes — yes, people make mistakes, there should be a "k" there — software has bugs, hardware fails, and monocultures are not resilient. If you have plenty of small companies each handling a small part of the internet's infrastructure, then any single one of them going down is not that big a deal. A person making a mistake at a small provider somewhere — yes, some users will lose access, some websites will go down, but it's not 20% of the internet going down because somebody made a mistake or some software had a bug. When you put all your eggs in those few huge, gigantic, enormous, galaxy-sized baskets, then one proper bug, one correctly placed mistake, takes 20% of the internet down. And that's not really okay.

And then — who here uses Cloudflare? Yes. So if something goes pear-shaped, really pear-shaped, getting your website off of Cloudflare can take up to 48 hours, because you have to move the NS records — the name servers — somewhere else, and the TTL on those is usually 24 hours, and then there's caching. So now you have to make a decision: do you wait for Cloudflare to come back up, or do you wait 48 hours for your website to start working again? I find that problematic, right?
On the other hand, of course, it would be quite unreasonable to expect every news website or investigative journalism outlet or human rights organization to roll out self-hosted infrastructure the size of Cloudflare, so that it could handle huge spikes of traffic and gigantic DDoSes and all of that. That's not going to happen. Those organizations are sadly extremely underfunded; they're running on shoestring budgets. If there are any funders here, please, by all means, fund those organizations more, especially their tech departments — they really, really need it. Funding tech infrastructure and information security in those organizations is somehow not considered sexy, whatever that means. Please reconsider, if you have the power to fund those organizations a little bit better.

But yes, infrastructure is expensive, and no single organization of this kind will be able to deploy infrastructure large enough to withstand huge spikes of traffic on their own. This is not going to happen — hence we have the Cloudflares and Akamais of this world. But maybe those costs could be shared. Maybe we could pool those resources. Maybe we could have multiple organizations saying: look, we each have infrastructure, and usually our traffic is, say, 10% of our capacity, so we have a 90% margin in case traffic goes up. Say we have 10 of those organizations pooling resources — suddenly each of us has access to the bandwidth of all of them. If shit hits the fan on our website, we can rely on that bandwidth.

So that's the idea I have: to create a piece of software that would allow us to build a community CDN, where multiple organizations can come together and say, we're using this software on our websites, and if our infrastructure goes down, the website stays up thanks to the other organizations in our little cluster, our little community.

The way it would have to work is by pooling the infrastructure and resources of multiple organizations, and crucially, no new infrastructure investment should be necessary — because that's not going to happen. You're not going to convince 10 NGOs: hey, how about you invest a lot into this shiny new thing and build a whole new infrastructure. It has to work on the infrastructure they already have, if they're self-hosting. There has to be minimal organizational overhead: no new entity or organization that needs to be set up, no "okay, we're going to create a new organization, put all of our infrastructure budgets into it, and it will handle the infrastructure needs of everyone in the cluster". That's not going to happen either; it's too much organizational overhead. Those organizations will not agree to it, because it also gives away a little bit of power, a little bit of control, and that's, hmm, difficult. And the thing I'm thinking about, which I will start describing in a moment, would kick in only when the site is down.
The reason I think that last part is a good thing — and not everyone might agree — is that you can tell those, say, 10 organizations: look, we're building this cluster, this magical community CDN thing, but as long as the websites are running fine, you will not have any new bandwidth costs. No random requests will start hitting your infrastructure immediately. Only when something is actually happening to the website of one of those organizations does the bandwidth of the other organizations start being used.

No TLS private keys are shared, and there's no other kind of benevolent monster-in-the-middle situation. If you're using Cloudflare, Cloudflare terminates your TLS, so it can technically see your traffic and all of that. This is not something I would be okay with, and many of the organizations I'm thinking of suggesting this approach to would not be okay with it either — which is one of the reasons some of them are not using Cloudflare in the first place. So I also want to avoid that monster-in-the-middle situation.

And it absolutely has to be transparent for visitors, in the sense that no special software or special setup is needed on their part. For a person who has been visiting your website, the website should just work — even when it doesn't. I know it sounds weird, but trust me, we're going to get there. The reason this is important is what I said before: millions of people in a country will not download Tor Browser. Millions of people in a country will not download your special snowflake solution to whatever problem you're trying to solve. My aim is to figure out a way where visitors don't have to do anything. Website admins deploy this, and visitors go: okay, yeah, it works. Oh, it works a little bit slower — I wonder what's happening. What's happening is that your website is down and content is being pulled from somewhere else. But the visitors just see that it works. Fine.

There are quite a few assumptions here, so this will not work in certain situations. One assumption is that visitors must be using modern mainstream browsers, because I'm relying heavily on modern web APIs and such — I will explain in a moment. And JavaScript has to be enabled, so again, not everyone will be happy. That's why I said: run an onion hidden service for everyone who is not happy with this setup, because they probably have Tor Browser already and will be very happy to use it — as I do.

The solution I'll be talking about is also not going to work for massively dynamic web applications — think Facebook or Twitter-shaped things, where content flows constantly. I'm focusing on somewhat static sites; think news sites. There's a bit of content published every now and then — and "every now and then" can of course be quite frequent, many times a day or many times an hour — but the shape of the content can be approximated by a static site. It can still be a dynamic site doing dynamic things, but you should be able to cache a piece of content as, say, an HTML file, or statify the site somehow.
So, yeah, obviously this has to work with any content type, but currently huge files are tricky, for reasons that might become obvious in a moment. And video streaming is really not a thing with what I'm building, because I have to limit the scope somehow; trying to do everything at the same time is going to be difficult. And finally — I promise this is the last slide about assumptions — it is not meant for sensitive content. The point of this tool is keeping public content online, not distributing sensitive content. If you need that kind of thing, again: onion hidden services, authorized onion hidden services; there are plenty of tools for that. And it's not meant for any after-login area of the website. It is meant to keep your public site running, not your admin interface. The solutions for keeping the admin interface running can be different — you can have a special VPN, all of these things — and I'm happy to talk about that after the talk if you would like to hear my thoughts. But this focuses specifically on the public content on your website. Splitting it this way also has security benefits, I feel, because your public site is public but has no way of interfacing with the admin interface. That means a script kiddie will not use a WordPress bug to take over your site. But that's somewhat beside the point.

Okay. I mentioned a sane setup, and I will go through this quickly. What this means to me, as I mentioned, is limiting plugins and complexity. Static site generators, if they can work for you — I would strongly recommend them if you want your website to be resilient to reasonably high traffic. Caching and microcaching — I'm happy to talk about this with whoever wants to after the talk, because that's a whole separate ball of hairy things.

And here's something I see on a lot of websites: they configure their WordPress or whatever CMS they're running and hard-code the domain. Which means that if you're a small news organization somewhere in the Caucasus or the Balkans and you get blocked for whatever reason, usually the easiest thing that comes to mind is: oh, we can just get a new domain, put our website on that domain, and send it to our users. Oh, shit, we cannot, because the CMS enforces the old domain. And now you have a bigger problem. So thinking about this when you set up the website is really important — and it's something I haven't noticed many people think about. Just say: you know what, I don't need the domain hard-coded; relative links are fine. That also means that if you move to a different domain, your website will still work. Yay.

Okay, so let's get to the interesting things. The site goes down. You have a news website, and the website goes down — let's say it's blocked, because something. When this happens, you have a chicken-and-egg problem. Say you set up a separate domain; your website setup was sane, so it works on the separate domain. How do you get this information out to the users? The best way to inform your visitors that there's a new domain would be to use your website. But your website is down. Kind of difficult, right?
So you can use social media, you can use all those things, but for a lot of users, the default place they visit to get information from your website — or about your website — is your website. So, eh, chicken and egg. But we can try to fix this with service workers. Anyone familiar with service workers?

Okay, so service workers are a web API that is relatively new — I would say five-years-ish in production state, maybe. The idea is: the website sends a piece of code and tells the browser, hey, when the visitor visits this website, from this point on, this bit of code handles all of the requests done in the context of this website. All of them: all the requests to the domain, and all the requests that originate from the loaded page to any domain. This piece of code gets cached, and the browser remembers it, such that even if you close all the tabs, close the browser, all of this — three weeks later you visit the site, and this code is brought up from the cache and handles your requests. What this means is that we can do reasonably interesting things, because, as I said, once registered, the service worker kicks in the moment a request is fired — including before the website is loaded, as long as the service worker had been loaded before. So for anyone who has visited your site once, this code is there, already cached; it's going to run, it's going to be there for a month or longer, and it will handle all the requests.

This allows us to, obviously, cache the content in the browser — just say: if you pull this content, cache it, and next time the user loads this page, if the page is not working, show the cached content. But that's boring; plenty of people are doing that already. What you can also do is pull the content from anywhere else. This is your code; you can do fetch requests anywhere you like. You can have alternative endpoints. Meaning: your website is down, a returning visitor comes to it, the service worker gets spun up and says, hold on, the website is not responding, but I know I can get this content from these secondary domains. So it pulls the content from the secondary domains, and boom — from the perspective of the user, the website just works. Maybe a little bit slower, because it first has to figure out that the fetch to the original website doesn't work, but, okay, half a second longer. Oh no.

So this is what I've been working on, as a project called LibResilient — maybe you can even see the URL there, resilient.is — and it implements all of the above: the service worker, the plugins for caching and for pulling content from various places, and a configurable order of operations. Which means we can do something like this. We can say: hey, when you get a request for content related to this website, coming from the domain of this website, first fetch from the original domain. If that works, great: cache the content, provide it to the user, we're done. If the fetch doesn't work — say the original domain is down — okay, do we have it in cache? Maybe we do, great. Maybe we don't. Then we go to the alternative endpoints. We say: the fetch didn't work, we didn't have it in cache, but I know this content should be available on these five different endpoints, so I'm going to pull it from there. Whichever succeeds first — the first successful remote fetch — gets cached, and boom, the user now has access to this content, and again, it just works. The visitor has no clue what was happening behind the scenes.
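To make that concrete, here is a minimal sketch of the fetch-with-fallback strategy — not LibResilient's actual code; the endpoint URLs and cache name are made-up placeholders:

```js
// sw.js — a minimal sketch of the strategy described above, NOT
// LibResilient's actual code. The endpoint URLs and cache name are
// placeholders. The page registers this worker once with:
//   navigator.serviceWorker.register('/sw.js');
const ALTERNATIVE_ENDPOINTS = [
  'https://mirror-one.example.org',
  'https://mirror-two.example.net',
];

self.addEventListener('fetch', (event) => {
  // Only GET requests are safe to cache and mirror.
  if (event.request.method !== 'GET') return;
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // 1. Try the original domain first.
  try {
    const response = await fetch(request);
    if (response.ok) {
      // Keep a copy in the cache for later, then serve it.
      const cache = await caches.open('libresilient-sketch');
      await cache.put(request, response.clone());
      return response;
    }
  } catch (err) {
    // Network error: the original domain is down or blocked.
  }

  // 2. Do we have it in cache? Serve that.
  const cached = await caches.match(request);
  if (cached) return cached;

  // 3. Last resort: pull the same path from a randomly selected
  //    alternative endpoint (the mirror needs permissive CORS headers).
  const path = new URL(request.url).pathname;
  const pick = ALTERNATIVE_ENDPOINTS[
    Math.floor(Math.random() * ALTERNATIVE_ENDPOINTS.length)
  ];
  return fetch(pick + path);
}
```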
And for example, just to throw it out there, possible alternative endpoints: the Wayback Machine. If your website is on the Wayback Machine, boom, you have an alternative endpoint for free — you can literally fetch your content from the Wayback Machine. You can have public folders on major cloud services — they will hate you for this; one little trick to make cloud services hate you — but it would work: you fetch from the public folder, and the user doesn't even have to think about it. This is not my favorite way of doing it, but it is possible. And of course, any HTTPS host you can push content to, really. For example, you could push content to IPFS and fetch it from IPFS gateways — boom, you can rely on IPFS's infrastructure to fetch your content.

Multiple alternative endpoints can be configured at the same time. So you can have a configuration that says: look, if the original domain is down, you can fetch from here, here, here, here, or here. On every request, a random alternative endpoint is selected to pull content from, and several can be used simultaneously. So you can have an N-out-of-M situation: I have five configured; try two of them, randomly, just in case one of them is down. You're making more requests, but you're also maximizing the chance that even if one of the alternative endpoints is down, the user will still get the content.

Now, this is probably the question in your heads right now: okay, but those endpoints have access to your content. They can modify it. They can be malicious. Oh no, what do we do about this? Subresource integrity to the rescue. Anyone here familiar with subresource integrity? What subresource integrity does is: you have some kind of resource, say a script you're pulling from, say, Fastly — because that's what it was made for, those large CDNs. You use the link, but you also add the integrity hash. In HTML it's just: script, src, blah blah blah, and then integrity equals some hash. And the browser pulls the content and verifies the hash before giving the content back to the client side. Which means that if you have the hashes of all the content, you don't have to care: if somebody maliciously modifies the stuff on the alternative endpoints, your browser will just say, no, sorry, the hash doesn't match, bye-bye.

But in HTML, we can only set integrity attributes on script and link elements. Thankfully, in JavaScript, we can set integrity hashes on any request. And because we're doing this in JavaScript — because it's a service worker, blah blah blah — we can set integrity hashes on every request. I tested this with video, with images, with all sorts of content, and the browser happily says: I'm pulling some content, hash matches, great, here you are; hash doesn't match, well, screw off.
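Just to illustrate — the hash below is a placeholder, not a real digest — attaching an integrity value to an arbitrary fetch from inside the service worker looks like this:

```js
// A sketch of integrity-checked fetching from an untrusted mirror.
// A real hash is the base64-encoded SHA-384 digest of the exact
// bytes you expect, in standard SRI notation.
async function fetchVerified(url, sriHash) {
  // If the response body does not match the hash, the browser
  // rejects the promise and we never see the tampered content.
  return fetch(url, { integrity: sriHash, mode: 'cors' });
}

// Hypothetical usage:
// const resp = await fetchVerified(
//   'https://mirror-one.example.org/favicon.ico',
//   'sha384-AAAAAAAAplaceholderplaceholderplaceholderplaceholderAAAAAAAAAAAA'
// );
```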
And yes, there's already a plugin for this in LibResilient, so that's already implemented. But it's not that easy, right? How can you distribute the integrity hashes safely, without having to trust the alternative endpoint operators? Because the service worker has to get those hashes somehow. It can either have them pre-configured, which is boring, because now you cannot push new content; or it can pull them, for example, from those alternative endpoints — but we don't trust those alternative endpoints; that was the whole point of distributing the hashes. What do we do? What do we do? Asymmetric crypto. Another plugin, already implemented: verifying fetched content using signed integrity data.

So what happens is: during the initial setup of your website, you generate a private and public key pair on your server side, you configure the public key in LibResilient's config, and you use the signed-integrity plugin. Then, during publishing, you generate the integrity hash of every piece of content — image, whatever you're publishing — you put it in a JSON Web Token (magic, magic, magic) signed with the private key generated before, and you put the JWT in a file available under whatever the URI of the original content is, plus ".integrity". And this is what the plugin expects when it's working in the service worker on the client side. When it's fetching content — say, favicon.ico — it will first fetch favicon.ico.integrity, verify the signature, get the hash out of it, then pull favicon.ico and verify the hash. Boom: it works, or it doesn't work.
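As a sketch of what that publish-time step might look like — assuming Node.js and the jsonwebtoken npm package, with a claim name that is my placeholder rather than LibResilient's actual on-disk format:

```js
// publish-integrity.js — a sketch of the publishing step described
// above. The JWT claim name and algorithm here are illustrative
// placeholders; LibResilient's actual format may differ. Assumes an
// ECDSA private key in PEM format.
const crypto = require('crypto');
const fs = require('fs');
const jwt = require('jsonwebtoken');

const privateKey = fs.readFileSync('libresilient-private.pem');

function writeIntegrityFile(path) {
  // SHA-384 digest, base64-encoded, in standard SRI notation.
  const body = fs.readFileSync(path);
  const digest = crypto.createHash('sha384').update(body).digest('base64');
  const token = jwt.sign(
    { integrity: `sha384-${digest}` }, // hypothetical claim name
    privateKey,
    { algorithm: 'ES256' }
  );
  // favicon.ico -> favicon.ico.integrity, served next to the content.
  fs.writeFileSync(`${path}.integrity`, token);
}

writeIntegrityFile('public/favicon.ico');
```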
So what this means is: we can have content delivery even if the original website is down, thanks to service workers. We can verify that the fetched content has not been tampered with, thanks to subresource integrity and the signed-integrity plugin. And it all works in regular browsers, no special settings necessary — if you are using any modern browser, all the plumbing is already there. So we can have exactly this: imagine 10 organizations that say, okay, we're going to deploy LibResilient on all our websites, and we're going to serve as alternative endpoints for each other. You have a community CDN where no new infrastructure needs to be deployed. The only cost is storage space, because each of those websites has to be mirrored on each of those organizations' infrastructures. That's the only thing. The bandwidth costs kick in only when one of the websites is down, because the config says: does the website work? It works; here's your content. It doesn't work? Well, alternative endpoints. Other orgs' infrastructures, pooled. And you don't have to trust them that much, because you push the content along with the integrity files, and the service worker verifies everything. So you don't have to trust those organizations not to modify your content; you only have to trust them not to maliciously screw you over by deleting content or anything like that. And if it's a group of organizations doing similar things, perhaps that's not an entirely unreasonable assumption. And it is completely transparent for visitors. So: everything I promised. There you go, on this black screen that has nothing on it.

So, yeah: a group of organizations running their own hosting infrastructure, serving as alternative endpoints for one another. Oof, I think I have about five minutes right now, and I have, I think, three slides. This is doable.

One thing not mentioned here: configuration changes during downtime. Let's say your website is down, and you decide: ah, I have to change the alternative endpoints — some of those organizations also went down, or don't want to deal with me anymore, and I've set up new alternative endpoints. What do I do? How do I tell the users who have already loaded the service worker that there are new alternative endpoints? Well, if at least one already-configured alternative endpoint still works, you just push a config update. Boom. It goes through the same plumbing as any other request, because it's just a config.json file — config.json, config.json.integrity, boom. Any user who has visited your website before has the service worker loaded, and as long as at least one of the alternative endpoints works, it pulls in the update and boom — has access to the new alternative endpoints, the new config.
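What might such a config look like? Something along these lines — a hypothetical illustration only; the plugin names and schema here are made up, so check the documentation at resilient.is for the real thing:

```json
{
  "plugins": [
    { "name": "fetch" },
    { "name": "cache" },
    {
      "name": "alt-fetch",
      "endpoints": [
        "https://mirror-one.example.org",
        "https://mirror-two.example.net"
      ]
    }
  ]
}
```

Push a new config.json plus its config.json.integrity to any endpoint that still works, and returning visitors pick it up through the same verified channel.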
I have not talked about the deployment and publishing pipeline, because that depends very much on the setup of each website. If you're using a static site, there is going to be a bash script, a Makefile, a GitLab CI/CD pipeline, or something like that that pushes content to your site. Well, we can just add some steps that push this content to IPFS, or to your friendly organizations, or to your whatever Google Drive, at the same time, and calculate the integrity files along the way. There is a FAQ — can you read the link there? There's going to be a more visible link at the end.

Current status of LibResilient: the code exists, works, and has decent test coverage — all of the plugins I was talking about today have 90% or more test coverage, so yay. Documentation largely exists but definitely needs improvement; if somebody wants to read through it and tell me "no, I don't understand how this works, this is weird", I'm very happy to hear that and improve it, because when I write this documentation I'm all in my head — I understand all of it, and I don't know which things might be completely weird or surprising to others. It has not yet been deployed in production on any large site, but I know of two reasonably large projects that are testing it and might deploy it at some point. Testing, improvements, ideas, criticism — all of this is very welcome. Please come at me. And the project got a small grant from NLnet, which I highly recommend; those guys are also at MCH, you should find them somewhere here, and they have more money for interesting projects, so you should also apply with your project if you have one.

Possible next steps: a WordPress plugin — because why shouldn't WordPress be able to push stuff, along with the integrity files, to places where that makes sense. Reverse proxies could be alternative endpoints. I haven't used them; I prefer the static approach, the "here are content files" approach, because it decouples the alternative endpoints from your back-end server. Meaning: if you're using reverse proxies and your back-end server is down, your reverse proxies are probably also down, unless they do some interesting caching. But it's totally doable. And totally crazy ideas: why not use IPNS and IPFS as content transport directly in the browser? Well, there is a bug with IPNS, and I could rant about it over beer, because it hasn't been fixed for three years. But once they fix this bug, we might be able to use IPNS and IPFS directly, without hitting IPFS gateways as — what — alternative endpoints. I want to test WebTorrent as content transport: again, not as alternative endpoints, but as a completely alternative transport plugin that just pulls stuff from the torrent swarm. Merge requests are welcome. And other non-standard, decentralized, weird, fascinating transport plugins — I'm all ears if anybody wants to talk to me about this and figure out things to do together. And the website is resilient.is. There's a blog, there's documentation, there's code. It's a very simple website. Thank you. Questions?

Oof, I made it. Wow. "You still had one minute." Awesome. Oh, I'm sorry, I could have spoken a tiny bit slower then. "Fast, but understandable. All right, we have questions. Microphone in the front, please."

"Any ideas about first-time visitors — how to solve this problem for first-time visitors?" No. No — but again, from my experience of running these kinds of websites, most visitors are returning visitors, usually. So I found a way to solve that problem; I haven't found a way of solving the first-time-visitor problem, because you have to load this code somehow. I would love to have one. Let's say you have a static website with relative links. That means you can just zip it — you can literally distribute zips on thumb drives, on a sneakernet. I would love to have a way of distributing this code with the zip and saying: hey, if you received this zip file, to experience the website, click this — and now you can go online, and it will just pull stuff. But that's not how this works. Unfortunately, service workers can only be installed either from localhost or from HTTPS endpoints. So if I find a way around this, such that I can install a service worker in a user's browser from a zip file, then awesome. But yeah, that's a very, very valid question.

"So this will also break if we finally teach people to clear the browser cache and browser profiles." I think we don't have to worry about that; I don't think they will ever learn. But yes — this also used to not work in Firefox private mode, or incognito mode, or whatever they call it this week, though I think they enabled it. So yes, there will be browsers and there will be configurations where this will not work for users. I usually use most of my browsers with JavaScript completely disabled, so it would not work for me. But people who have JavaScript mostly disabled probably also have Tor. So yes: onion sites, onion hidden services, please. Yes?

"First, thank you for the talk. One thing I didn't get: the idea is that a group of organizations come together and install and configure LibResilient, so as to have kind of many islands, where many different groups of organizations can back each other up?" So yes: let's say you have 10 websites — each website has LibResilient configured, with "okay, this is my main site", plus the infrastructure of all the other organizations. "The question is, do the organizations have to find each other somehow? I mean, if I want to install LibResilient, how can I find some other organization that wants to work with me?"
So LibResilient itself — you just install it on your website, and you have to figure out where your alternative endpoints are and where you're pushing your code and all of this. It is up to the website admin to figure out what they can rely on as far as alternative endpoints are concerned. But what I realized after I wrote most of this code is: if we have a group of organizations, they can rely on each other. They can pool their resources together using LibResilient in a way that I don't think was possible before — or at least I have not seen this kind of resource pooling that requires neither a separate entity nor some kind of TLS-private-key-sharing situation. So that's, I think, the innovative part of the talk. But yes, for LibResilient itself, you have to figure out your alternative endpoints yourself.

"I'm going to be a bit anti-HTTP, because I think it tends to reinvent wheels all the time. Most other protocols use SRV records for this purpose. And I know the HTTP working group at the IETF is working on a different record specifically for HTTP, which also allows you to prioritize where you come from, and even lets segments of your website be separately redirected. So there will be a general solution that might even end up in browsers. Insha'Allah — but it's not there yet. There are things like Alt-Svc. And you could define SRV records."

Yeah, yeah — but the browsers would have to read them and apply them, right? And it would still have to be specified to work kind of like HSTS: these are the alternative endpoints, cache this information for the next whatever. And with those headers, you would not be able to update the configuration in the visitor's browser unless your website is up, because you're not getting those headers, as far as I understand.

"Your DNS infrastructure needs to be open." Sorry? "Your DNS infrastructure needs to be open." Yeah — sorry, yes, you're right. But what I've seen is something called Alt-Svc — I can't remember what the acronym stands for — an HTTP header that specifies alternative endpoints, kind of sort of. That could potentially be a solution like this. But again, you would not be able to update the visitor's config when your website is down. With DNS, I agree — but also, with DNS, if your website is blocked, then DNS is probably blocked too. And this doesn't rely on DNS, because you've loaded the service worker, and the service worker already has all the information it needs.

OK, any more questions? Wow, clearly a boring talk.

"So there's another problem: if one of the organizations in the circle gets compromised, suddenly you have one organization that can push malicious content that gets hosted by everyone else." Well, it can push this content, yes. But I would imagine that if you're running an organization that is part of this pool, you would have one domain for your own stuff — websites, et cetera — and a separate domain for the alternative-endpoint stuff in this pool. So of course, if I were running the infrastructure of one of those organizations, I would separate those things very well, because I don't know what's going to get pushed into it. But from the perspective of a website operator relying on those alternative endpoints, subresource integrity and signed integrity just solve this problem.
Because if somebody pushes modified, malicious, whatever code to the alternative endpoint that you're going to use, they will not be able to sign the integrity package. So the service worker will just say: oh, that was some kind of a bug — it doesn't matter — and move on.

"Yeah, but there are some types of content that can get a website taken down — like a police raid — just by their mere presence." Oh, yeah. But that's why I use the word community: you have to have a community of organizations, and you have to trust each other a little bit. It's not completely trustless.

"And I will save you now. Yay! So please, as we are at the end of the time for this talk, please give our speaker another very cool round of applause." Thank you. Thank you. I have stickers — if anybody wants stickers, I have stickers.