I grew up skateboarding, so I keep looking at the stage thinking skate park, but it would probably break if I tried to skate it, so I'm not going to do that. My name is Dietrich. I'm from Mozilla, and I've been at Mozilla for almost 13 years. There's interesting technology, and I love the web, open source, and open systems, but the question I asked yesterday, why, is a really, really important one, and it's one of the reasons I've stayed at Mozilla for so long: it's an organization with a commitment to keeping the internet open and free, a place that's accessible to everyone, that promotes civil discourse and elevates critical thinking, amongst other things. That set of clearly articulated values, those reasons why, is ultimately the most important thing for me. So today I'm going to talk a little bit about how we apply those values, how we make a decentralized or distributed web that reflects them, the threats against the web today, some of the challenges in changing the web we have today, and one of the projects we're building in this space. This is Mark Surman, the executive director of the Mozilla Foundation. Around 2010 he gave a talk, and I actually tried to go back and watch it, it is on YouTube, but apparently in 2010 they didn't have HD cameras and the quality was too blurry, so I'll paraphrase. The bit that really stuck with me from that talk is: what does it take to be a 100-year organization? How do you develop technologies, platforms, and systems that last, that have that kind of longevity?
There are not a lot of non-governmental organizations that have been around that long, and the web is what, 20, 25 years old now, and it's facing serious problems. The internet can be an inhospitable place at times right now, with surveillance capitalism and hate speech, and the internet being used as a disinformation platform by governments. So what do we need to do to make the web a more hospitable place? By any measure the web really is a success. There are an estimated 4.5 billion web pages, and an estimated 1.5 billion websites. That's an unbelievably large amount of human activity that we've put online. In contrast, the combined app stores of Apple and Google hold an estimated just over 5 million apps. This is actually not to scale; a fair rendering would make those Apple and Google icons invisible. With that amount of human content, of human creation online that browser vendors are responsible for, it really is a responsibility, and we take it very seriously. We need to make sure that as many of those almost 5 billion web pages as possible still work, that they can be read by everyone, that they still function. Along with that success come some challenges as well. People who make the web are really comfortable with how the web works. We're really familiar with how browsers work. There are well-understood security models, a well-understood set of capabilities and constraints. What do we need to change, and how are we going to be able to change it? There's some resistance to change, especially when talking about how you implement distributed and decentralized technologies on the web. Some of our biggest fans are some of the most challenging when it comes to talking about things like Ethereum, blockchains, and distributed applications. But as long as the web stays centralized, we need to have these conversations. We need to figure out how to really move forward.
Some of the bigger complaints: how do you imagine a world without URLs? URLs are a core part of what made the web powerful, accessible, universally available. But ultimately, a lot of people don't actually understand how they work. As technologists, we know how they work, and we understand their importance. But there's this video, where Google went out on the street at one point and asked people what a browser was. It's kind of a famous video, because it was an inflection point in understanding whether people understand how the technology they use actually works. People don't understand what a browser is, what a search engine is, what an operating system is. It all gets mashed together. It's the stuff we use. It's how I do my thing, right? I don't know, I just post stuff, somebody puts it somewhere. People don't actually understand how this works. Another thing that's really hard to imagine living without is domain names. How do you live without them? Where does your company exist? How can I find you? And yet DNS is one of the easiest choke points for centralization. Indonesia turned off Reddit with DNS; Turkey turned off Wikipedia with DNS. It's a very easy centralization and censorship choke point. Another one of the cultural challenges is talking about protocols. Protocol bros: if you're talking about a protocol, it must mean you are not addressing real human problems, because real human problems start where human needs actually exist, at the front end. That comes with a fundamental misunderstanding of the fact that we are standing on the shoulders of giants whenever we use the internet at all, on the set of protocols that enables all of this to begin with. Another one is blockchain bros: if you bring up the word blockchain in certain circles, it must mean you are over-engineering and there's absolutely no need for it at all. I recently found myself actually starting a sentence with, well, not all blockchains.
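The DNS choke point mentioned above can be sketched in a few lines: whoever operates the resolver everyone depends on can make a name simply stop existing. This is a toy illustration, not real DNS; the names, addresses (TEST-NET placeholders), and blocklist are all made up.

```javascript
// Toy model of a centralized resolver: one party holds the name-to-address
// mapping, so that same party can censor by refusing to answer.
const records = new Map([
  ["wikipedia.org", "198.51.100.10"], // placeholder address
  ["reddit.com", "203.0.113.7"],      // placeholder address
]);

// A blocklist applied at the resolver, e.g. by government order.
const blocked = new Set(["wikipedia.org"]);

function resolve(name) {
  if (blocked.has(name)) return null; // behaves like NXDOMAIN: the site "vanishes"
  return records.get(name) ?? null;
}

console.log(resolve("reddit.com"));    // still reachable
console.log(resolve("wikipedia.org")); // null — censored at a single choke point
```

For every client using that resolver, the blocked site is gone, even though the servers hosting it are untouched.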
It's not a good place to be in. And arguably there's some justification for that skepticism about applying the same tool to a number of different problems. Is that going to get anywhere? But despite all this, we do need to reevaluate the network architecture of how browsers work. So what does that look like? What does re-decentralizing the web actually look like? In the web architecture we know today, a browser loads a URL on your behalf, makes a request to the server, and the server returns a response. And the browser passively accepts that response. That's pretty much how things go on the web we have today. That is the centralized, single, two-point architecture of almost every single browsing session. Even if you're loading up IPFS in there, it's going to be centralized somehow; you're not actually bootstrapping a node from the client. Every proxy to a decentralized or distributed service is still just a centralized proxy. There are a number of problems with this, some of which we've already talked about. It's easy to censor, and there are all the problems that are just daily annoyances. Sites go down, sites get DDoSed, a business goes out of business and the site goes down, or it gets bought by a bigger fish. That service you loved so much suddenly disappears, poof, never to be seen again after being bought by one of the big tech companies. Ultimately, though, it comes down to the fact that all the power resides on one side of this relationship. And the browser itself doesn't really have any say in what comes back.
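That one-sided relationship can be reduced to a sketch: the client asks, and renders whatever comes back, with no second source and no leverage. Everything here is hypothetical pseudo-browser code for illustration; `fetchFromServer` stands in for the network.

```javascript
// Sketch of today's client-server web: the server alone decides what the
// response is, and the client passively accepts it.
function fetchFromServer(url) {
  // The server may return the page, a tracker-laden version of it, an error,
  // or nothing at all. If it's down, bought, or censored, there is no fallback.
  return { status: 200, body: `<h1>Whatever the server chose for ${url}</h1>` };
}

function browse(url) {
  const response = fetchFromServer(url); // one request, one authority
  return response.body;                  // accepted as-is: no verification,
                                         // no alternative peers to ask
}

console.log(browse("https://example.com/"));
```

A distributed architecture changes the shape of `browse`: content could be fetched from any peer and verified by the client itself, e.g. against a content hash.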
The server ends up being kind of like that friend you go to meet who always seems to be the one who decides what you're going to eat and where you're going to meet, and you start talking to other friends and realize they're also being treated this way by this friend, and then you hear that this friend has actually been taking notes about what you've been talking about over dinner, and has been selling those notes to people who really aren't your friends, and then you end up one morning in a refugee camp on the border of Myanmar, or a victim of genocide, because that friend was Facebook. These are real problems that we need to solve, and they are going to be incredibly difficult to solve with this architecture, as long as we have that centralization choke point. We won't be able to get past this point of massive structures with massive amounts of control over human activity. So let's fix it. Let's make a distributed web browser. There are a lot of projects in this space, a lot of people trying to figure out what the right pattern is, what the use cases are, how we solve this problem, how we create a web that gives more control back to the user. For a user agent to truly live up to its name, we have to change these network architectures, and these are some of the projects experimenting in this space. But like I said before, we need to think on the 100-year scale. As a browser vendor, we're thinking: any changes we make, we have to own that code forever. We can't break the web. We can't change things too fast. We can't just say, oh yeah, web v5, everybody upgrade, right? We have to support those five billion web pages. We have to make sure that output of human activity and creativity stays usable and still works. So we can't just pick a winner.
We can't just say, all right, IPFS sounds good, let's run with it. Picking a winner, deciding that one solution is the best, also really narrows the set of use cases we can address. And answering that question about how you build a system, an architecture, for century-long longevity means you have to understand the broader set of use cases that all these projects are trying to address. I think success looks not like a winner, but like many islands in the sea. A healthy ecosystem is one that props up, supports, and enables experimentation and change. It enables the ability to adapt as human needs change, which is going to happen over 100 years, right? So we started looking at what the building blocks are in a lot of these distributed systems. There are swarms, there are CRDTs, there are web servers. There are a lot of key management issues. Do we just put a blockchain in the browser and see what happens? Do we turn the browser into a server, let it go, and see what people do with it? These are maybe the high-level building blocks. So what are they built on? A set of very familiar, well-understood, but pretty boring technologies, things we've been using for 20 years or more, right? But on top of these you can build those other, higher-level application primitives. Every single blockchain is built on top of these lower-level primitives, network architecture primitives, things like access to the file system. These are all things the web doesn't really have today. The web browser has been a client, and for the most part nothing but a client, for 20 years, right? A few different experiments that didn't really last implemented things like discovery. But there's no real file system access that works across all web browsers, for example.
So if these are the lower-level primitives, how do we get them into a place where people can start experimenting and building those higher-level primitives on the web? This is the world I live in as somebody who works on building browsers. There are three basic execution scopes. We could build distributed application capabilities into Firefox itself, make the browser part of the application, and implement these types of features directly. But that would be slow going. We won't be able to meet all the needs that the projects I've been talking about have today, or the needs that your projects have today. So that's not ideal, and we are resource-constrained; we are definitely the smallest of the vendors building major, widely used web browsers today. Do we want to put UDP APIs inside a web page, in JavaScript? That also sounds like maybe not the right level of abstraction. And then there are browser extensions, which you're probably familiar with from installing MetaMask. That might be the right middle ground, the place where we can expose some of these capabilities without over-committing, letting people and projects like yours experiment, but without deciding that, okay, the web is going to have this API and we live with it forever, and without trying to own or control what that distributed web looks like by building it directly into Firefox. This gives us some extensibility, and it gives you some capabilities, in a way that's a little more malleable, that we can experiment with. So I'd like to introduce libdweb. This is a project we've been working on for the last couple of months that adds some of these basic primitives as WebExtension APIs.
So far it adds protocol registration, so you can register custom protocols, plus TCP and UDP APIs, mDNS, and file system APIs. Talking with the projects in this space, key management seems to be really important as well. There are a number of changes happening in how Firefox works and stores keys at the OS level that maybe we'll be able to take advantage of. As soon as we implemented this, the IPFS project took it, migrated their IPFS Companion add-on onto these experimental APIs, and actually got a full IPFS node running inside Firefox. And one of the first things they did was start serving Turkish Wikipedia directly between two browsers, which is amazing. It means we're onto something: maybe these capabilities at the lowest level will at least give all these projects, and your projects, room to play, room to figure out what that next-generation distributed web browser looks like. So where we're at now: we've prototyped a number of these APIs and proposed them to the WebExtensions APIs team, and it sounds all right. But of course, this is just one web browser, right? The web works for everyone because it works across multiple browsers. We need to build some momentum around what we want to do here. And we've seen that doing even a little bit publicly is actually pretty powerful. Earlier this year, we whitelisted a few protocols, IPFS and Dat and a few others. And people were really excited about it; we got a really big response. You couldn't actually do anything with it, though. We just whitelisted the protocols; there was no way to actually handle them on the other end. It was more symbolic. But now, several months later, Chrome has also whitelisted those protocols. So you start pushing at the edges, little by little, looking at what features people need, looking at where the right places to push are, and hopefully making progress.
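To make protocol registration concrete, here is a sketch in the style of libdweb's `protocol.registerProtocol`. The real API only exists inside Firefox as an experimental WebExtension API, and the exact handler and response field names here are assumptions, so `browser` is stubbed locally just so the handler logic can run standalone.

```javascript
// Stand-in for the WebExtension `browser` global (assumption: the real
// libdweb API has a similar registerProtocol(scheme, handler) shape).
const registry = new Map();
const browser = {
  protocol: {
    registerProtocol(scheme, handler) {
      registry.set(scheme, handler);
    },
  },
};

// Register a toy "dweb:" protocol. The key idea: the extension, not a
// remote server, decides what a dweb:// URL resolves to — the content
// could come from peers, a DHT, a local store, etc.
browser.protocol.registerProtocol("dweb", (request) => ({
  contentType: "text/html",
  content: (async function* () {
    yield `<h1>Served by the extension for ${request.url}</h1>`;
  })(),
}));

// Simulate the browser dispatching a navigation to the registered handler.
async function load(url) {
  const scheme = url.split(":")[0];
  const handler = registry.get(scheme);
  const response = handler({ url });
  let body = "";
  for await (const chunk of response.content) body += chunk;
  return body;
}

load("dweb://example/hello").then((body) => console.log(body));
```

Streaming the body as an async iterator matters for this use case: a distributed protocol can start rendering as chunks arrive from different peers.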
And if any of you have worked in web standards or any standardization process, you'll already know that this is kind of how things work anyway. It's not like a group of people get together, write down a spec, and then all the browser vendors go off and implement it. Most often these things get proved out over time, by domain experts giving their input and by browsers experimenting in various ways. So this is actually looking pretty typical for how browsers develop. As for next steps, we're getting privacy and security review right now for the protocol registration API, which is one of the trickier ones. Things like TCP sockets are pretty well understood; they're not in web browsers generally, right, but we understand what they do and what effect they're going to have, especially in an extension. But when you register a custom protocol, all the rest of the browser infrastructure gets pulled in, and all of the browser security model. We make decisions about what permissions a web page has, to your data and to different features, based on the origin: the URL we loaded, the domain name it came from, and the security context it has based on the SSL certificate it came with, right? What's actually encoded in those certificates is that domain name. So what does that look like? What is the security model, what is the origin, when you load an IPFS hash? It's really different. So we're tackling this API first, because we think it's going to be one of the more challenging APIs in terms of figuring out what all the downstream effects on the browser are going to be. For the last couple of weeks we've been threat modeling. If you're not familiar with threat modeling, and I know there are a couple of security workshops at the conference, threat modeling starts from the recognition that it's a fantasy to say that any software will ever be secure. There's no such thing as blanket security.
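You can see the origin problem described above directly in the WHATWG URL parser that browsers (and Node) implement: for a non-special scheme like `ipfs:`, the origin comes back opaque, serialized as the string "null", so the same-origin machinery has nothing to hold on to. The hash below is a made-up placeholder.

```javascript
// Origins are derived from scheme + host + port — but only for schemes the
// URL Standard knows about (http, https, ws, wss, ftp, file, blob).
const httpsUrl = new URL("https://en.wikipedia.org/wiki/Main_Page");
const ipfsUrl = new URL("ipfs://QmExampleHashNotReal/wiki/Main_Page"); // hypothetical hash

console.log(httpsUrl.origin); // "https://en.wikipedia.org" — a real origin
console.log(ipfsUrl.origin);  // "null" — an opaque origin, no security boundary
```

This is why registering a custom protocol pulls in the whole security model: something has to decide what "same origin" means when the identifier is a content hash rather than a certificate-backed domain name.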
The important part is actually understanding what the threats are. What are the attacks on your software? You identify and articulate those attacks, and then design mitigations and protections for them. So we're going through this threat modeling process for each one of these APIs, and it's very exciting. I've been applying it to my personal life too. I have an early flight tomorrow. Threat: I might miss it. Mitigation: don't drink so much beer tonight, go to sleep early. Threat: I'm going to die of heart disease. Mitigation: don't eat all the cake. Again. So it's a really useful process; I highly recommend you apply it to your software projects and your personal life. One of the other things we're doing is talking to a lot of these projects. What do the projects building distributed and decentralized web browsers, or browser-like things, actually need? One of the ways to do this is a W3C community group. Community groups are an interesting W3C construct because you don't need to be a member of the W3C to join. Anyone can join and anyone can participate in the discussion. So it creates a place where vendors, software developers, interested parties, and domain experts can share their projects and experiments, but without any explicit commitments to implement anything. And this, I think, is the right level of abstraction for the communication aspects of what we're doing. This is the Are We Distributed Yet project, which I think was initially started by IPFS folks. As you can see, issue number 27 is where we're having this conversation. So if you're interested in participating in proving out and forging what a distributed web browser might look like, this is the place, please come and join. Hang out. This coming weekend I'll be in Berlin for IndieWebCamp. The IndieWeb community is a fascinating, awesome group of people.
They're actually some of the people who say that if you're talking about protocols, it must mean you're ignoring real human problems. So we always have interesting conversations when we get together. But their set of principles for building software is pretty fantastic, and parts of it are really useful for looking honestly at the things we're building. One of the most important, I think, is: use what you make. If you're not using your software, why would anyone else want to use it? Especially for the kinds of things we're building, that's a really good lens for evaluating how effective your software is. Another is: build for the long web. Like we were talking about, how do you build at 100-year scale? How do you design systems that are robust? The web was designed with legacy built in, and I feel like that's been a pretty effective way of future-proofing it, because everybody wants access to those nearly five billion web pages. What are the characteristics of systems that can last that long? There's also the idea of plurality: are there multiple implementations of your software? Even if you don't agree, or they're competitors, the fact that multiple people can implement an interoperable version of the software is a testament to its strength. It's also a great way to identify the things you've been designing in a bubble that might not be right, a good way of surfacing flaws and problems you might not see with a single implementation. Some of these lessons, I think, are really important in getting the distributed web to the place where we have billions of applications. So this is exciting: there are real users using dApps today for real things. This, I think, is probably the most important signal of whether or not what you're building is actually working: whether real people are actually using it to solve human problems.
Maybe those human problems are managing and feeding digital animals. That's fine. If people are doing it all the time, then it's part of the course of human life and activity, and it means you're meeting a need. And we need to get that world to the same scale of billions, and to at least 25 years. I think that focus on human activity, on human needs and problems, is really important, and it's what's going to get us back to an internet that is hospitable again, an internet that meets human needs, a place where we can have civil discourse, where we can have trust again. Thanks. Happy to answer any questions if we have time. Looks like we have some time. All right, thank you very much. Oh, sorry, we've got one. What would you say are important metrics to measure dApp activity by? Because there are quite a few dApps that claim they have users, but it's basically just bots. Crypto Shrimp was a really good example of that: they had a lot of users, but it was just a small number of people writing bots. Yeah, so I can't answer that question, but it seems like this community of people should be able to answer it, right? I think that's going to be critical to having trust in things like dApp marketplaces, trust in systems, especially if you're investing real money, things that have value to you, whether that's currency or not, into those systems. Having at least an agreed-upon standard for how to do metrics for distributed applications would be really good. Things like IPFS and even the Dat protocol are pretty wide open: you can see who's asking for what, for good or ill. That's really strong in some ways and can also be a real problem in other areas. But building into your protocol some way of being accountable in a meaningful way is probably going to be important. How that works, though, like you said, is really challenging. It's really different from doing metrics on the centralized web.
Maybe you could argue that not having trustable numbers actually reduces the threat surface, because metrics on the centralized web are kind of how we ended up in this surveillance capitalism place to begin with. Any more questions? Thank you. Hi, my question is: how much of what you're doing now, what you described with IPFS and things related to the distributed web, are you doing in collaboration with other browser companies? Right now this is just us, working with a few of the different projects that I listed, mostly entirely in open repos. #dweb on irc.mozilla.org is where a bunch of people from different projects are hanging out. I don't know whether other browser vendors are working directly on these types of APIs or not. I know that Chrome did whitelist those couple of protocols. I'm not clear on what that actually means, whether it unlocks any other capabilities in their extension environment or not. My understanding was that it just did what ours did initially: it meant you could load a web page that was hosted from within the extension itself, not actually from outside, but I'm not totally sure. Yeah. Thank you very much, to Mozilla, for taking a lead in having the browser support some of this decentralized web; we're depending on it. What do you need from us to get more support internally at Mozilla? We need you to succeed at this. I think that success story I showed, where somebody from IPFS, in just a couple of days, took these experimental APIs and showed really immediately and directly how they could apply that capability to a real human problem, the censorship of Wikipedia in Turkey, I think those stories are really important. So, seeing more of those successes. The second thing is actually helping us figure out what these applications need.
Initially we thought, okay, we need some network architecture changes, some more network capabilities, but it quickly turned out that things like key management are still a huge problem. And I think that problem in particular will show up strongly in the mass adoption, the adoption at scale, of the kinds of applications that are here at DevCon. So input and help in figuring out those problems, and best practices for where key management might have a place in the architecture of the web, would also be helpful. Yeah, build stuff, share your stories. Any other questions? All right, thank you very much. Thank you.