My name is Corey Petty, and I'm going to talk about how to ethically build public good infrastructure, focusing specifically on the ethics. There won't be as many details; if you'd like more detail on our history and the current work we do at Status and its sister organizations, I'd recommend watching Oscar Thorén's talk from earlier today. If you missed it, check it on the stream. But for those of you who are leaving, if you want the TL;DR: assholes exist. Web3's main primitive is to minimize their influence on everyone else. That's the whole goal, I think: what we're trying to do in building decentralized technology is mitigating those with power and their influence on those without it. And the ethics of building this infrastructure has a massive impact on our success in doing that. The process of how we think about building these things, and how we make decisions while building them, tremendously impacts our ability to mitigate assholes. A little bit about me: my name is Corey Petty, as I said at the beginning. I started out podcasting in early 2015, and doing research a couple of years before that. I joined Status four to five years ago, it's a blur at this point, doing security research and security work across the organization. Since this year, I've moved into coordinating all of the infrastructure projects, which we've recently expanded; you'll see the start of that work in the later half of my talk. You can reach me in all these places: corepetty.eth on Status (just search for Petty) and corepetty on Twitter. Status is an organization founded on a series of principles. We wholly bought into what we believe to be the ideal Ethereum principles for building public good infrastructure. I'm not going to list them all; you can see them. And in some cases these things are a little in conflict with each other.
But as an organization founded on these things, they serve really, really well as the way we argue about trade-offs when implementing or developing things, or when trying to understand how we'd like to do them. Now I'm going to do the normal thing where I give you the Wikipedia definition of something, and that's a public good: a good "which all enjoy in common, in the sense that each individual's consumption of such a good leads to no subtraction from any other individual's consumption of that good." That's Paul Samuelson. So we're trying to optimize infrastructure where it's not a zero-sum game: someone can't take advantage of others, and one person's success doesn't detract from the use or experience of someone else who has nothing to do with that work. But, as I said earlier, assholes exist. There are always people who seek to profit for themselves at the expense of others, because they don't care or they don't know. And our goal, like I said in the beginning, is to mitigate that influence as much as possible. I've come up with somewhat of a law; I'm sure something more general exists informally somewhere else, but: every community has assholes. There's always someone in a community who cares more about themselves than the others and is willing to do things for themselves to the detriment of others. And as communities grow, the likelihood of this grows too. More often than not, these people are very loud. Now, "community" here is as generic as you can make it: a blockchain ecosystem, a government, your local meetup, your friend group, whatever. We all know who this person is, and I imagine there are probably a lot of assholes in this building as well. So we all get the idea. Here's an example of the influence power dynamics can have on a system that tries to mitigate them but doesn't quite succeed.
And I want to make the point that we're not seeking to remove power dynamics. Those are a natural thing across society: we're humans, the world is unfair, we each start off with a different set of circumstances and we develop differently. But we are trying to flatten their effects. We're seeking to remove the ability of someone with asymmetric power to impact those without it. And this can be summarized by a change of phrasing. We all heard Google's phrase in the beginning: "don't be evil." And we loved it. That's one of the reasons I enjoyed Google early on: they had this ethos and model of seeking not to take advantage of others in the process of building infrastructure for people. And I have to give credit to Dr. Muneeb Ali for introducing this exact phrasing to me, back at one of the early Consensus conferences in New York: what we're trying to do is change "don't be evil" to "can't be evil." Because we've seen what happens when the model is "don't be evil": you have the option to be evil, and you continue to have that option. You get to selectively choose when it's economically feasible for you to not be evil. Or maybe there's a change of guard, and everything that was set up with that ethos changes when the new guard doesn't quite care the way the original one did. So we're seeking to change "don't be evil" into "can't be evil": don't give people those options. And this is an example of when that fails. This is mevwatch.info, I believe. Yeah, I think so. And this is what's currently happening as the result of the sanctions placed on Tornado Cash. Now, I'm not going to opine on the bigger picture of someone being imprisoned for a long period of time. But what we can see is that Tornado Cash was sanctioned, the code was removed, and subsequently put back up.
But we're seeing OFAC sanctions having an impact on what we thought was censorship-resistant technology. This is the list of all of the blocks produced since the Merge. The gray here is all the blocks that are not using a validator service called MEV-Boost, and for those that are, you can see the portion of those blocks that are OFAC compliant. They're censoring transactions on the public Ethereum blockchain in compliance with the Tornado Cash sanctions, versus the ones that aren't. And if you look at the totals, it ends up being quite a bit: around 32% of all blocks put on the network are OFAC compliant, and if you look at just the ones going through MEV-Boost, the massive majority of those are. So we're seeing censorship at the base layer. And this can only be done because you can see what those transactions are; you're able to make a distinction and do transaction ordering because you're able to see those details. That means you're given the option to look back here and choose "don't be evil" rather than "can't be evil." So it's showing that we need more privacy and censorship resistance. Well, I'll get to that in a moment; I don't know why that slide moved. We're seeing that we need more privacy and censorship resistance at lower and lower layers. We can't just do everything at the blockchain layer and ignore the layers below it. And if you watched the talk before this one, you saw that this is a really hard problem to solve: there are a lot of trade-offs and difficulties in trying to figure out what good behavior is, and how we make the appropriate trade-offs to maximize it, right?
So I'm going to take a slight turn here and talk about how you might structure this argument, and where it actually comes into play when you're making decisions about how to build infrastructure and where to focus your principles. I keep coming back to this phrase: the medium is the message. It's the general idea that the technology you use, the medium, has a drastic impact on your ability to convey a given message. When we try to put some idea out into the world, we think about it in our heads, we form it in some way, and we send it out via some technology or formalism. It's then up to the receiver to decode it appropriately, and while in transit, it's potentially being manipulated. I'll make a case for that in a moment. But this wonderful picture by Monica here is an example of what it looks like when you don't take this into account, and you funnel all human relationships into a single medium, which takes away all your optionality in how you can build things. All the complexity of human relationships and different things goes through what I would call a meat grinder, and everything comes out uniform and the same. Currently, this is what I would consider centralized infrastructure: we try to take everything we humans do to communicate and make digital relationships of it, and this is what it turns out to be when we mimic that in a digital environment. To dive in a little deeper, there are three parts, three layers, to any message. I want to explain what these are, and then point out where the subtlety and the evil come in. First, we have the frame message, which is the information that a message exists at all; something that says, "hey, I'm a message, decode me if you can."
It's the ability to identify that a message even exists for you to think about, and it's conveyed implicitly in the structure of the message. "This is a book," so you know that the book contains a message: that's a decent example. To understand the frame message is to recognize the need for some type of decoding mechanism: I've identified that there is a message, and now I need to find a way to decode it to understand what that message actually is. Next is the outer message, which is the medium used to convey the message; this is how the message was sent out into the world. To understand the outer message is to know how to build the correct decoding mechanism for the inner message. So: I've identified that there's a message I'd like to read, and now I need to understand how I can take this message and extract the intent of it. And finally, the inner message, which everyone intuitively understands: the original intent being conveyed in the first place. To understand it is to have extracted the meaning the sender intended. Now, this is an overly simplistic model of a message, but it gives you a framework for seeing where manipulation can be introduced and how to start mitigating it. So if we think about blockchain networks as coordination mechanisms, we're using them as social layers to, as I said earlier, mitigate assholes as best we can, and to trust a system rather than humans to carry those interactions. And the interesting thing that makes them different from the internet is that they carry real-world value, which attracts a lot of people to do very greedy things.
So we have these beautiful cross-border, cross-jurisdiction, attempting-to-be-censorship-resistant coordination mechanisms that allow us to do things we couldn't do before: digital relationships in a digital environment with real-world value. And if we apply that earlier framework for thinking about a message, it's really hard, because it's layered in a bunch of different ways. Look at a very, very simplified stack of what a blockchain network is. We have networking at the bottom, which is how messages get passed around so that everyone contributing can come to agreement and has all the right data to figure out what's going on. Participants then go through the process of validating all these messages, checking whether they're well-formed, whether someone's doing a double spend, whether they get dropped, and then constructing them into blocks. Then we go through a consensus mechanism where we, as a distributed system, agree that this is the right block to move forward from, right? And after that, we have to find some way of extracting data from this massive blockchain we keep building. That's a lot of messages and a lot of different mechanisms, each of which has what would be considered an outer message that needs to be interpreted appropriately. And depending on any participant's ability to understand that outer message, and the powers they have at various places across the stack, they may have the ability to change that message or censor it. Most of the time, when we're thinking about adding security or privacy, we're looking at the top two: in validation and consensus, we're basically just making sure things are correct. There's also a middle layer with smart contracts and so on. But we've spent all our time on privacy up there and in retrieval.
We're seeing that we spent all our time up at those layers, and when things happen and the powers that be want to change stuff, they go to a layer below, because we didn't necessarily think about it back then, right? So that's an overview of how to think about public good infrastructure and its complexity, and a small example of, I don't want to say not thinking properly or building improperly, but of how, as this thing has grown and ossified, we failed to incorporate what I would consider the strongest principles we set out with: mitigating assholes properly at the lower layers. We looked to scale things too quickly, and instead of providing what I think is a more fundamental requirement for self-sovereignty, privacy, and censorship resistance, we're failing to do so in the system that's actually being adopted by many people. So: a quick history of Whisper, and how we've moved to Waku and the RLN Relay, through Status's, and now Vac's, attempts at taking infrastructure and growing it in the direction it needs to grow while keeping all these principles aligned. Once again, there's a lot more detail to all of this; I'd recommend reading the ridiculous amount of material we have online, and watching Oscar's earlier talk. He gives another one tomorrow with an awesome demo of what I'll get to eventually; please show up for that. This is what the original Ethereum vision announced to the world. This is what the decentralized stack looked like: you had smart contracts on Ethereum, which we all know, and then Swarm and Whisper, the other two pillars of what a decentralized application was supposed to be. Swarm for file storage, and Whisper for dynamic, ephemeral communications, with a special emphasis on obfuscating the route: who sent a message and who's receiving it. We at Status wholly bought into this, and we wanted to build an application.
We wanted to be the thing that consumed these protocols so that people had access to them in resource-constrained environments, where they only had access to a small number of resources, like a mobile phone, right? We wanted to increase inclusion as much as possible while maintaining a high level of decentralization. And we had this concept of socioeconomic networks really, really early. You're seeing a lot of social token stuff happen now, and the idea that Ethereum is a coordination layer; we're moving past the financial-applications-only situation. And so we incorporated Whisper. We used what was put out by the Ethereum ecosystem, and we did it naively, I'd say. We used proof of work, because that's part of it, as the anti-spam mechanism for messages being passed on the network, and we killed batteries and made phones run super hot. We used gossip and bloom filters for message dissemination, and we destroyed people's data plans, right? We tried to use this technology as best we could, but it was a new technology, and we tried to apply it in a very extreme environment. And then there was discovery, finding peers, amid the high churn of people's mobile devices popping in and out of range or being turned off and so on; it's really hard to get good message reliability. So what did we do? We took ownership of it, because Whisper wasn't being developed or given resources. The people building Ethereum, reasonably speaking, had a tremendous amount of work to do just fixing and scaling the blockchain itself, and that's where the available resources went. Oscar wrote a blog post about this a while back. We decided as an organization to take ownership and introduce Waku, a fork of Whisper. We tried to make it a little more scalable, a little more usable, so that users of the Status application could have a reasonable user experience.
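To make the battery problem concrete, here's a minimal sketch of per-message proof of work in the Whisper style. This is an illustration under my own assumptions, not Whisper's actual algorithm: real Whisper hashes the whole envelope with Keccak and scales the required work with message size and TTL, and the names here (`seal`, `verify`, the SHA-256 choice, the difficulty constant) are all hypothetical.

```python
import hashlib
import struct

DIFFICULTY_BITS = 16  # illustrative threshold; real Whisper derives work from size and TTL

def seal(payload: bytes, difficulty_bits: int = DIFFICULTY_BITS) -> int:
    """Grind nonces until sha256(nonce || payload) has the required leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(struct.pack(">Q", nonce) + payload).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, difficulty_bits: int = DIFFICULTY_BITS) -> bool:
    """Relays check the attached work and drop anything below their threshold."""
    digest = hashlib.sha256(struct.pack(">Q", nonce) + payload).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

msg = b"hello from a phone"
nonce = seal(msg)  # ~2**16 hash attempts on average: trivial on a laptop, brutal on a battery
assert verify(msg, nonce)
```

The point of the sketch is the asymmetry of incentives: every sender pays the grinding cost up front, which is exactly the cost model that drains a phone's battery while barely inconveniencing a spammer with a desktop CPU.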
So we attempted to patch Whisper for our extreme environment; we took responsibility and applied attention to the infrastructure we required. And we did it in an open way. We created Vac, a separate organization whose focus is research: studying how to grow these things in a manner consistent with the principles I talked about at the beginning. All of this you can see out in the open. We made open specifications that anyone can use. Whisper, or Waku, is for everyone; we're taking opinionated versions of it and applying them in contexts we think are appropriate, but it's for everyone. A public good is not to be owned by a given organization and used at its discretion. So we publish all these things at rfc.vac.dev: join, opine, contribute. But we had issues. We had spam. Those of you who have used Status in the past couple of years and gone to, say, the status channel will have seen a lot of it. Why? Because we had no incentives. We took away our ability to mitigate some of it when we removed proof of work, because there are a lot of other issues with using proof of work for spam mitigation. So we had a problem. What could we have done to fix it? We could have reached for a lot of centralized solutions: looking at IPs and banning them, or running things through our servers and censoring as needed. But we tried to stick to our principles of what a real decentralized stack should look like, and it was painful. We had a scaling problem too; we had multiple problems simultaneously. So what did we do? We drew up a plan for Waku v2, because the way Whisper was built was fragile and couldn't scale, and we realized that when we tried to fix it. So we rewrote it completely from scratch on top of libp2p and called it Waku v2; we call it the spiritual successor of Whisper. A complete retooling of a private, decentralized messaging stack on top of libp2p.
It's modular, so that people trying to use it in a specific context can make the decisions appropriate for what they're doing. Whenever you try to make a completely generalized, rigid framework that isn't flexible, you end up optimizing for no one. If instead you can provide a suite of protocols that work well together, and a way for them to interact, then you can hopefully build multiple solutions that optimize for multiple applications. And it's open, once again. It's built for generalized messaging; it's not just for Status, that's just one of the applications we use it for. You can use it for new things based on the choices you make within the suite of protocols. Still spam, though, right? We still had problems with spam; we had just dealt with scaling and getting people to use it, understanding that there are a lot of different ways you can do generalized chat and we're not going to tell you how to do it. So, in keeping with our principles, we decided to build what's called RLN Relay. That is privacy-preserving spam protection that leverages zero-knowledge proofs, Shamir secret sharing, and economic disincentives, built on top of Waku v2 Relay. I'm not going to go through the overview; I don't have much time. But it's a really interesting combination of novel cryptography and very well-known cryptography that allows people to contribute to a network, and be removed if they misbehave, without having to reveal a lot of personally identifiable information about themselves. So we stuck to our principles. It was painful along the way, but we arrived at a unique solution that I don't think we ever would have reached, or only much later, had we compromised those principles and moved on some other way. All the specifications, once again, are at rfc.vac.dev. You can see the papers there; I'm sure there are links on vac.dev, but you can find them here.
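The rate-limiting idea at the heart of that spam protection can be illustrated without the zero-knowledge machinery. What follows is a toy sketch under my own assumptions (the field modulus, the hash, and every name here are mine, not the Waku specification): each member holds a secret, derives a line over a prime field per epoch, and every message reveals one point on that line. One message per epoch leaks nothing usable; a second message gives a second point, the two points reconstruct the secret, and anyone can use it to slash the spammer's stake.

```python
import hashlib

P = 2**255 - 19  # toy prime field modulus (an assumption, not RLN's actual field)

def share(a0: int, a1: int, message: bytes) -> tuple[int, int]:
    """Evaluate the member's epoch line A(x) = a0 + a1*x at x = hash(message)."""
    x = int.from_bytes(hashlib.sha256(message).digest(), "big") % P
    return x, (a0 + a1 * x) % P

def recover_secret(p1: tuple[int, int], p2: tuple[int, int]) -> int:
    """Lagrange interpolation at x = 0 from two distinct points on the line."""
    (x1, y1), (x2, y2) = p1, p2
    inv = pow(x2 - x1, -1, P)  # modular inverse; distinct messages make x1 != x2
    return (y1 * x2 % P - y2 * x1 % P) * inv % P

a0, a1 = 1234567, 7654321  # member's secret key and per-epoch coefficient
s1 = share(a0, a1, b"first message this epoch")
s2 = share(a0, a1, b"second message this epoch")  # breaking the one-per-epoch limit
assert recover_secret(s1, s2) == a0  # secret exposed, so the member is slashable
```

The real protocol wraps this in zero-knowledge proofs so that each share is also proven to come from a registered, staked member without revealing which one; the sketch only shows why sending twice in an epoch is self-incriminating.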
It's easier if you just go to the website and click through than to try to write it all down now. Go play with it; tell us about it. We have a lot of other research along the way. Once again, go see Oscar tomorrow; you can see a video of a live demo of this type of thing. So, wrapping up: our principles are a priority, right? To build what I would consider public good infrastructure, you need to publish openly. You can't do things in the dark, especially when it's a digital permissionless system; you can't have trust in things if you don't know how they work, and these things should be community-based. So publish openly, implement, iterate. Using old tools will lead to old solutions. If we had compromised on our principles and done things to make Status scale and grow quickly, we would have ended up with the same stuff, the same shit we have today, and that's not why we're here in the first place. Assholes are everywhere. Think about that. Think about how they can manipulate the messages you're trying to send out into the world. Where can they understand the outer message of what you're trying to do, manipulate it, and change the message such that the receiver either can't get it or gets something wrong? And in general, I think it's important to finish on this: we should be conforming technology to the relationships we're trying to have, with the people we're trying to have them with, and not the other way around. And we're building a lot of this. We've since expanded the organization substantially to start growing infrastructure, and we want to do that with as much of the principles and ethos we talked about at the beginning as possible. For me, that starts specifically with network-level privacy: you can't have censorship resistance if people can see what you're doing.
So: as much privacy as possible, with selective disclosure in the right places. A heterogeneous multi-chain network: you need the right to exit. You shouldn't be beholden to a single place into which you shove everything and then rely on it to work appropriately. Native private and public smart contracts: once again, privacy at all levels of the stack. And, as much as we possibly can, optimizing for resource-restricted devices, because you can't be inclusive if people can't get access to the devices required to use your software or hardware or whatever. This is what we're doing. There's a lot of work to be done, and we're hiring. This is a QR code that goes to a lot of the jobs we've posted currently; keep track of it, there are going to be a lot more. Thank you. And since this is the last talk, we have plenty of time for questions. I'm happy to take them. Yeah, take as much time as you want, Corey. Are there any? I'm looking... There's a question over there. Hey, great talk. Allow me to stand. So I was curious: you talk about your principles, and it seems like privacy is first and foremost at the top of that stack. There was an implication about the morality of the OFAC sanctions in the validators, which seems to imply that privacy is paramount above the other principles you present as what is ethical for Status. That's not a judgment; I'm just trying to state it. The question is: how do you prioritize the principles against each other? You talk about inclusivity with resource-limited devices; how do you ask your community to input their opinions about the priorities of those principles? In designing the future of Status, what sort of processes do you use to get feedback on those principles? Great question, actually. It's really, really, really hard. First off, I think it's that...
In Oscar's talk, he mentioned something that I like repeating a lot, because it gives a general idea of how to frame these things. At the end of the day, there are a lot of principles to uphold, right? A lot of ideas we want to pursue simultaneously, and in a lot of situations they come into conflict with each other. But you can't build a decentralized framework on top of a centralized foundation, while you can do the opposite really easily; Coinbase is a wonderful example of that. So when you're having one of these arguments and trying to come to a priority, which is going to be context-specific, you need to look at what's most important and what removes your ability to have a decentralized foundation, one that can't be compromised at a higher layer later. Because we don't want to make decisions for other people. How can we build in the most generalized way, so that people can make decisions for themselves and build appropriately for their context? That means privacy ranks very high, because with privacy you're not revealing information until you need to, whereas if you're not private, you're automatically giving all that information away. And it's so much harder to add privacy later. The same goes the other way: you can't build private solutions on top of public solutions. There's always going to be a way, at a layer below, to strip that privacy away and go after people, right? One of the examples I've heard recently, from our founder, is: say you wanted to build a limited liability DAO. Can we do that today? I don't think so. Because if you vote on a specific proposal and someone doesn't like it, they can still see who voted, identify the whales, and go after them. So there's no removal of risk in the process of contributing to something like that.
People will go after you, and you can see that the lower you go in the stack, the more often that happens. So, a long-winded answer to your question: you have to have arguments that move toward what constraints we are applying now that will have implications in the layers above, and whether that is in line with people's ability to make decisions for themselves. You mentioned you can't build private solutions on public infrastructure; I'm wondering, what's your opinion on rollups that apply ZK? They are beholden to whatever constraints Ethereum gives them. They don't have a lot of power in themselves, and, although zero knowledge is wonderful, it's currently being used for compression reasons; there's no privacy there. Once again, same thing: you're just even higher up the stack. Since you're publishing all that data to the blockchain in a non-privacy-preserving way, it's not doing anything for privacy; it's just a scaling solution. So it's adding additional constraints that will eventually be subject to the same lack of censorship resistance we're seeing today. That being said, it's awesome. There's a bunch of really cool technology being deployed, and very, very novel advancements in cryptographic primitives that will be useful for scaling and for adding privacy-preserving solutions in different ways in the future. So I'm happy we're able to start serving a number of people that begins to compare with what we set out for in the first place: blockchains serving the world. But if we keep moving in this direction, it may become so ossified that we can't build solutions that keep it censorship resistant, which, as I said at the start, is the whole point of all of this. If we build blockchains that are not censorship resistant, then we didn't do anything we set out to do, other than making, I don't know, digital funny money.
I'd like to give the biggest applause we have of the day because it's the last speaker. Corey, thank you so much.