So today, I'm going to talk about building privacy-protecting infrastructure: what it is, why we need it, and how we can build it. We'll look at Waku, which is a communication layer for Web3, and also look at how we're using ZK technology to incentivize and protect the Waku network. We'll also look at Zerokit, which is a library we've been building to make it easier to use ZK proofs in different environments. By the end of this talk, I hope you'll come away with a better understanding of the importance of privacy-protecting infrastructure and how we can build it. So first, briefly, about Vac and me. I'm the director of research at Vac, and we build public-good protocols for the decentralized web with a focus on privacy and communication. We do applied research, based on which we build protocols, libraries, and publications. We also act as custodians of protocols that reflect a set of principles. Vac has its origins in the Status app, basically trying to improve on the underlying protocols and infrastructure. We build Waku, among other things. So, privacy is the power to selectively reveal yourself. It's a requirement for freedom and self-determination. Just like you need decentralization in order to get censorship resistance, you need privacy to enable freedom of expression. To build applications that are decentralized and privacy-protecting, you need the base layer, the infrastructure itself, to have these properties. We see this a lot: it's easier to make trade-offs at the application layer than at the base layer. You can have custodial solutions on top of a decentralized, non-custodial network where participants own their own keys, but you can't really do the opposite. And if you think about it, even something as basic as buildings can be seen as a form of privacy-protecting infrastructure that is completely normal and obvious in many ways.
But when it comes to the digital realm, our mental models and ways of speaking about it haven't quite caught up yet for most people. I'm not going to go into too much detail about the need for privacy or what happens if we don't have it, but suffice to say it's an important property for an open society. When we have peer-to-peer, offline conversations, we can talk privately, and if we use cash to buy things, we can do commerce privately. On the internet, great as it is, there are a lot of forces that make surveillance, not privacy, the natural state of things. Big tech has turned users into a commodity, a product, and monetized users' attention for advertising. To optimize for your attention, they need to sell your habits and activities and hence breach your privacy, as opposed to more traditional business models where you're buying a product and incentives are perhaps better aligned. And to build censorship-resistant, privacy-protecting apps that enable freedom of expression, we need infrastructure that credibly has these properties. So, infrastructure is what lies underneath. There are many different ways of looking at it, but I'll keep it simple, as per the original Web3 vision: you had Ethereum for computing and consensus, Swarm for storage, and Whisper for messaging. Waku has taken over the mantle from Whisper and is a lot more usable today than Whisper ever was, for many reasons. On the privacy front, we see how Ethereum is struggling. It's a big UX problem, especially when you try to add privacy back on top: it takes a lot of effort, and it's a lot easier to censor. We see this with recent actions around Tornado Cash. Compare this with something like Zcash or Monero, where privacy is there by default. There are also problems on the peer-to-peer network side of things, for example with Ethereum validator privacy and hostile actors and jurisdictions.
If someone can easily find out where a certain validator is physically located, that's a problem in many parts of the world. Being able to have stronger privacy-protection guarantees will be very useful for high-value targets. And this doesn't even begin to touch on so-called dApps that make a lot of sacrifices in how they function, from the way the domain works, to how websites are hosted, to the reliance on centralized services for communication. We see this time and time again: centralized single points of failure work for a while, but eventually fail. In many cases an individual user might not care enough, and for platforms the lure to take shortcuts is strong. This is why it's important to be principled, but also pragmatic in terms of the trade-offs that you allow on top. We'll touch more on this when it comes to design goals and the modularity that Waku has. So, ZK proofs are a wonderful new tool, and just like smart contracts enable programmable money, ZK proofs allow us to express fundamentally new things. In line with the great tradition of trust minimization, you can prove statements while revealing the absolute minimum information necessary. This fits the definition of privacy, the power to selectively reveal yourself, perfectly. I'm sure I don't need to tell anyone in this room, but this is truly revolutionary. The technology is advancing extremely fast, and it's often our imagination that is the limit. So what is Waku? It's a set of modular protocols for peer-to-peer communication, with a focus on privacy and security, and on being able to run anywhere. It's a spiritual successor to Whisper. By modular, we mean that you can pick and choose protocols and how you use them, depending on constraints and trade-offs, for example bandwidth usage versus privacy. It's designed to work in resource-restricted environments, such as mobile phones and web browsers.
And it's important that infrastructure meets users where they are and supports their real-world use cases. Just like you don't need your own army and a castle to have a private bathroom, you shouldn't need a powerful, always-on node to get reasonable privacy and censorship resistance. We might call this self-sovereignty. One way of looking at Waku is as an open service network. There are nodes with varying degrees of capabilities and requirements, for example when it comes to bandwidth usage, storage, uptime, privacy requirements, latency requirements, and connectivity restrictions. We have this concept of adaptive nodes that can run a variety of protocols, and a node operator can choose which protocols they want to run. Naturally, some nodes will do more consumption and others more provisioning, and this gives rise to the idea of a service network where services are provided and consumed. So there are many different protocols that interact. The Waku Relay protocol is based on libp2p GossipSub for pubsub messaging. And we have things like Filter, for bandwidth-restricted nodes to receive a subset of messages; Light Push, for nodes with short connection windows to push messages into the network; and Store, for nodes that want to retrieve historical messages. Then on the payload layer we also have things like support for Noise handshakes and key exchange. This means that as a developer you get end-to-end encryption and the expected guarantees out of the box. We have support for setting up secure channels from scratch, and all of this paves the way for providing things like Signal's double ratchet at the protocol level much more easily. We also have experimental support for multi-device usage. It's worth noting that similar features have existed in, for example, the Status app for a while, but this makes it easier for any platform using Waku to use them.
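As a toy illustration of this pick-and-choose modularity, here's a sketch of how a node might map its constraints onto the protocols just described. The protocol names follow the Waku family (relay, filter, light push, store), but the `NodeConstraints` type and the selection logic are hypothetical, not part of any Waku client:

```python
# Illustrative sketch only: an "adaptive node" choosing which Waku-style
# protocols to run based on its constraints. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class NodeConstraints:
    bandwidth_restricted: bool     # e.g. a phone on mobile data
    short_connection_window: bool  # e.g. a browser tab
    wants_history: bool            # needs messages sent while offline
    can_serve: bool                # always-on node with spare resources

def choose_protocols(c: NodeConstraints) -> set:
    protocols = set()
    if c.bandwidth_restricted:
        protocols.add("filter")         # receive only a subset of messages
    else:
        protocols.add("relay")          # participate in full gossip
    if c.short_connection_window:
        protocols.add("lightpush")      # push via a peer, then disconnect
    if c.wants_history:
        protocols.add("store-client")   # query historical messages
    if c.can_serve:
        protocols.add("store-service")  # serve historical messages
    return protocols

# A browser or phone ends up a light node; a home server provisions.
phone = NodeConstraints(True, True, True, False)
server = NodeConstraints(False, False, False, True)
```

The point of the sketch is only that consumption-heavy and provisioning-heavy nodes fall out naturally from the same protocol menu.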
There are a lot of other protocols as well, related to things like peer discovery, topic usage, and so on; you can check out our specs for more. When it comes to the Waku network, there are a few immediate problems, for example network spam and incentivizing service nodes. We want to address these while keeping the privacy guarantees of the base layer, and I'm going to go into both. The spam problem arises on the gossip layer, where anyone can overwhelm the network with messages. Incentivizing services is a problem when nodes don't directly benefit from providing a certain service. This can happen if they are, for example, not using the protocol directly themselves as part of normal operation, or if they aren't socially inclined to provide a certain service; this depends a lot on how an individual platform chooses to use the network. Since the peer-to-peer relay network is open to anyone, there's a problem with spam. If we look at existing solutions for dealing with spam in traditional messaging systems, a lot of entities like Google, Facebook, Twitter, Telegram, and Discord use phone number verification. While this is largely Sybil-resistant, it's centralized and not private at all. Historically, Whisper used proof of work, which isn't good for heterogeneous networks, and things like peer scoring are open to Sybil attacks and don't directly address spam protection in an anonymous peer-to-peer network. The key idea here is to use RLN for private economic spam protection using zkSNARKs. I'm not going to go into too much detail on RLN here. We have some write-ups on vac.dev by SNAS, who's been pushing a lot of this from our side. And I believe there's also another talk by Taylor at PC tomorrow afternoon that will go into RLN in more detail. But I'll briefly go over what it is, the interface and circuits, and talk about how it's used in Waku.
So RLN stands for rate-limiting nullifier. It's an anonymous rate-limiting mechanism based on zkSNARKs. By rate-limiting, we mean that you can only send N messages in a given period; by anonymity, we mean that you can't link messages to a publisher. We can think of it kind of like a voting booth, where you're only allowed to vote once per election. It can be used for spam protection in peer-to-peer messaging systems, and also for rate-limiting in general, such as a decentralized CAPTCHA. There are basically three parts to it: you register somewhere, then you can signal, and finally there's a verification and slashing phase. You put some capital at risk, which can be either economic or social, and if you double-signal, you get slashed. Here's what the private and public inputs to the circuit look like. The identity secret is generated locally, and we create an identity commitment that is inserted into a Merkle tree; we use Merkle proofs to prove membership. Registered members can only signal once for a given epoch, or external nullifier, for example every 10 seconds in Unix time. The RLN identifier is for a specific RLN app. We also see what the circuit output looks like; this is calculated locally. y here is a share of a secret equation, and the internal nullifier acts as a kind of unique fingerprint for a given app-user-epoch combination. How do we calculate y and the internal nullifier? This is done using Shamir's secret sharing, which is based on the idea of splitting a secret into shares, and this is how we enable slashing of funds. In this case, we have two shares. If a given identity a0 signals twice in the same epoch (external nullifier), a1 stays the same, and for a given RLN app the internal nullifier also stays the same. x is the signal hash, which is going to be different, and y is a public output, so we can reconstruct the identity secret.
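Since the slashing trick above is just two-point interpolation of a line, it can be sketched concretely. This is a toy model only: real RLN uses a SNARK-friendly hash (Poseidon) over a large prime field inside a zkSNARK, whereas sha256 and the small Mersenne prime here are stand-ins for illustration:

```python
# Toy model of RLN's 2-share Shamir scheme: signaling twice in one
# epoch reveals two points on the line A(x) = a0 + a1*x, which lets
# anyone reconstruct the identity secret a0.
import hashlib

P = 2**61 - 1  # small Mersenne prime, standing in for the SNARK scalar field

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def make_share(identity_secret, external_nullifier, message):
    a0 = identity_secret
    a1 = H(a0, external_nullifier)  # re-derived each epoch
    x = H(message)                  # signal hash, different per message
    y = (a0 + a1 * x) % P           # published share of the line
    internal_nullifier = H(a1)      # fingerprint for (app, user, epoch)
    return x, y, internal_nullifier

def recover_secret(share1, share2):
    # Two distinct shares from the same epoch determine the line,
    # so the identity secret a0 = A(0) can be reconstructed.
    (x1, y1), (x2, y2) = share1, share2
    a1 = (y1 - y2) * pow(x1 - x2, -1, P) % P
    return (y1 - a1 * x1) % P

secret = H("my identity key")
epoch = 1_700_000_000 // 10  # e.g. Unix time in 10-second epochs
x1, y1, n1 = make_share(secret, epoch, "first message")
x2, y2, n2 = make_share(secret, epoch, "second message")
assert n1 == n2  # same internal nullifier: double signal is detectable
assert recover_secret((x1, y1), (x2, y2)) == secret  # secret leaks
```

One message per epoch reveals a single point, which leaks nothing about the line; the second message is what makes reconstruction, and hence slashing, possible.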
With the identity secret revealed, this gives access to, for example, the financial stake. So this is how RLN is used with the Relay gossip protocol. A node registers and locks up funds, and after that it can send messages. It publishes a message containing the zero-knowledge proof and some other details. Each relay node listens to the membership contract for new members, and also keeps track of relevant metadata and the Merkle tree. The metadata is needed to be able to detect double signaling and perform slashing. Before forwarding a message, a node does some verification checks to ensure that there are no duplicate messages, that the zero-knowledge proof is valid, and that no double signaling has occurred. It's worth noting that this can be combined with traditional peer scoring, for example for things like duplicate messages or invalid ZK proofs. In line with Waku's goal of modularity, RLN Relay is applied on a specific subset of pubsub and content topics, so you can think of it as kind of an extra-secure channel. So where are we at with RLN Relay deployment? We've recently launched our second testnet, using RLN Relay with a smart contract on Goerli. It's integrated with our example peer-to-peer chat app, and it does so for three different clients: nwaku, go-waku, and js-waku for browsers. This is our first peer-to-peer, cross-client testnet for RLN Relay. Here's a screenshot of a short demo. In the interest of time, I'm not going to do the demo here; I'll actually do it tomorrow afternoon. But basically, it shows a user registering in a browser and then signaling through js-waku, and the message then gets relayed to a Waku node that verifies the proof. When more than one message is sent in a given epoch, it detects the spam and discards it. Slashing hasn't been fully implemented in the clients yet and is work in progress. If you're curious and want to participate, you can join the effort on our Vac Discord.
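The per-message verification checks just described can be sketched as a small state machine. This is a hypothetical sketch: the `RlnMessage` fields and validator logic are illustrative, not the actual nwaku/go-waku/js-waku API, and the real proof check is a zkSNARK verification, stubbed out here as a boolean:

```python
# Hypothetical sketch of the checks an RLN relay node runs before
# forwarding a message: dedup, proof validity, double-signal detection.
from dataclasses import dataclass

@dataclass(frozen=True)
class RlnMessage:
    msg_hash: str            # hash of the full message, for dedup
    proof_valid: bool        # stand-in for verifying the ZK proof
    epoch: int               # external nullifier (e.g. Unix time / 10 s)
    internal_nullifier: int  # per (app, user, epoch) fingerprint
    x: int                   # signal hash
    y: int                   # published Shamir share

class RlnRelayValidator:
    def __init__(self):
        self.seen = set()    # duplicate suppression
        self.shares = {}     # (epoch, internal_nullifier) -> (x, y)

    def validate(self, msg: RlnMessage) -> str:
        if msg.msg_hash in self.seen:
            return "duplicate"       # drop; may feed peer scoring
        self.seen.add(msg.msg_hash)
        if not msg.proof_valid:
            return "invalid-proof"   # drop and penalize the peer
        key = (msg.epoch, msg.internal_nullifier)
        prev = self.shares.get(key)
        if prev is not None and prev[0] != msg.x:
            # Two distinct shares in one epoch: the sender's secret can
            # be reconstructed from the two shares and its stake slashed.
            return "double-signal"
        self.shares[key] = (msg.x, msg.y)
        return "forward"

v = RlnRelayValidator()
m1 = RlnMessage("h1", True, 7, 99, x=1, y=10)
m2 = RlnMessage("h2", True, 7, 99, x=2, y=20)  # same epoch + nullifier
assert v.validate(m1) == "forward"
assert v.validate(m1) == "duplicate"
assert v.validate(m2) == "double-signal"
```

Note that everything here happens locally on each relaying peer; only the slashing step, once a double signal is caught, needs to touch the chain.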
We also have tutorials set up for all the clients that you can play around with; that's the QR code right there. As part of this, and in order to make this work in multiple different environments, we've also been developing a new library called Zerokit. I'll talk about this a bit later. So, going back to the idea of the service network, let's talk about service credentials. The idea behind service credentials and private settlement is to enable two actors to pay for and provide services without compromising their privacy. We do not want the payment to create a direct public link between the service provider and the requester. Recall the Waku service network illustration with adaptive nodes, where nodes choose which protocols to run. A lot of these protocols aren't very heavy, and nodes enable them by default, for example the Relay protocol. Other protocols are much heavier to provide, such as storing historical messages. It's desirable to have additional incentives for this, especially for platforms that aren't community-based, where some level of altruism can be assumed. For example, Status Communities or WalletConnect cloud infrastructure. So you have a node, Alice, that's often offline and wants to consume historical messages on some specific content topic. And you have another node, Bob, who runs a server at home where he stores historical messages for the last several weeks. Bob is happy to provide this service for free because he's excited about running privacy-protecting infrastructure and is using it himself, but his node is getting overwhelmed by freeloaders, and he feels users should pay something for him to continue to provide the service. So Alice deposits some funds in a smart contract, which registers her in a tree, similar to certain other privacy-preserving mechanisms. A fee is taken or burned.
In exchange, she gets a set of service credentials, and when she wants to make a query with some criteria, she sends it to Bob. Bob responds with the size of the response, the cost, and a receive address, and Alice then sends a proof of delegation of a service credential as payment. Bob verifies the proof and serves the query. The end result is that Alice has consumed a service from Bob, and Bob has received payment for it. There's no direct transaction link between Alice and Bob, and gas fees can be minimized by extending the period before settling on chain. This can be complemented with altruistic service provisioning, for example by splitting the peer pool into two slots, or by providing a few cheap queries for free. The mechanism is general and can be used for any kind of request-response provisioning that we want to keep private. This isn't a perfect solution by any means, but it's an incremental improvement on the status quo, and it can be augmented with more advanced techniques such as better node reputation, proofs of correct service provisioning, et cetera. There are a lot more details to this; we're currently at the raw-spec, proof-of-concept stage, and we expect to launch a testnet for this later this year or early next. So, Zerokit is a set of zero-knowledge modules written in Rust and designed to be used in many different environments. The initial goal is to get the best of both worlds: Circom, Solidity, and JavaScript on one hand, and the Rust ecosystem on the other. This enables people to leverage Circom-based constructs from non-JavaScript environments. The RLN module uses Circom circuits via ark-circom, with Rust for scaffolding. It exposes a C FFI API that can be used from other systems-programming environments like Nim and Go. It also exposes an experimental WASM API that can be used in web browsers.
We have infrastructure running in many different environments, such as Nim, JS, Go, and Rust, and on mobile phones and in web browsers, so this is a requirement for us. The strength of Circom and JS is access to things like dApp developer tooling and generating verification code and circuits; the strength of Rust is that it's systems-based and easy to interface with from other language runtimes like Nim, Go, and C. It also gives access to the wider Rust ecosystem, such as Arkworks, which opens the door to using other constructs like Halo 2. This becomes especially relevant for constructs where you might not want to do a trusted setup, or where circuits are more complex or custom and performance requirements are higher, and so on. In general with Zerokit, we want to make it easy to build and use ZK proofs in a multitude of environments, such as mobile phones and web browsers. Currently, it's too complex to write privacy-protecting infrastructure with ZK proofs, considering all the languages and tools you have to learn, all the way from JavaScript, Solidity, and Circom and how to write the circuits, to Rust, WASM, and FFI, and that's not even touching on things like secure key storage or mobile development. Luckily, more and more projects are working on this, including various DSLs. It would also be exciting if we could make a useful "JS-less" tool stack for ZK devs to reduce cognitive overhead, similar to what we have with something like Foundry. I also want to mention a few other things we're doing. One is protocol specifications. We think this is very important for peer-to-peer infra, and we see a lot of other projects that claim to be doing peer-to-peer infrastructure but aren't very clear about guarantees, or how stable something is, or its actual semantics. This makes it hard to have multiple implementations, to collaborate across projects, and to analyze things objectively. Related to this is publishing papers.
We've put out three so far, related to Waku and RLN Relay. This makes it easier to interface with academia; there are a lot of good researchers out there, and we want to build a better bridge between academia and industry. Another thing is network privacy. Waku is modular with respect to privacy guarantees, and there are lots of knobs to turn here depending on specific deployments and requirements. For example, if you're running the full Relay protocol, you currently have much stronger receiver anonymity than if you're running the Filter protocol from a bandwidth- or connectivity-restricted node. We aim to make this pluggable depending on specific user needs. For example, mixnets such as Nym come with some trade-offs but are definitely a very useful tool in this arsenal. A good mental model to keep in mind here is the anonymity trilemma, where you can only pick two out of three when it comes to low latency, low bandwidth usage, and strong anonymity. We're currently exploring things like Dandelion-like additions to the Relay gossip protocol, which would provide stronger sender anonymity, especially in a multi-node botnet attacker model. As part of this, we're looking into things like different parameter choices and, more generally, possibilities for low-latency usage. This could make it more amenable to latency-sensitive environments such as validator privacy, especially under very specific threat models. In general, the theme is that we want to be rigorous about the guarantees we provide, under what conditions, and for what precise threat models, and we have a lot of specs around this, adversary models, and so on. Another thing I mentioned earlier is Noise payload encryption, and specifically things like allowing for pairing different devices with QR codes. That's very useful for developers, because we live in a multi-device world and want to use things from different devices.
As a summary, we've gone over what privacy-protecting infrastructure is, why we want it, and how we can build it. We've seen how ZK is a fundamental building block for this. We looked at Waku, the communication layer for Web3, and how it uses ZK proofs to stay private and function better. We also looked at Zerokit and how we can make it easier to use ZK proofs in different environments. Finally, we looked at some other research we've been doing. Everything mentioned in this talk, and a lot more, is available as write-ups, specifications, or discussions on our forum or GitHub; here are some links. If you find any of this exciting to work on, feel free to come up and talk to me. We're also hiring and have started expanding into other privacy infrastructure technology, such as private and provable computation with ZK WASM. The QR code is a link to our Discord. Thanks, any questions?

Hello. One of the main challenges I perceive in crypto messaging is that producers are incentivized to pay to have a message broadcast, but there isn't much incentive then for relayers to distribute it to consumers. This usually makes a centralized solution the outcome, as you want to have something like, I don't know, MetaMask be able to relay things to users with infra and everything, which in return makes it censorship-sensitive, where a nation state can pressure these centralized actors into not distributing the message. And if you're relying only on peer-to-peer, you might still need some very old-fashioned indexer layer, which can also be centralized, to say, hey, this message is here. So how do you specifically, in your protocols, make sure that there is an incentive for the distributor to deliver a message?

So, let me repeat the question.
If you're in a decentralized messaging protocol, you have consumers that want to receive a message, and as you put it yourself, sometimes they don't really have a lot of resources and they don't necessarily want to verify their identity and everything, but there is no direct incentive for broadcasters to deliver the messages these consumers want. They're not getting paid for it.

Yes, so I think we're splitting this problem up a bit. It's definitely a general problem if you have some multi-hop model: if you want to keep that private in a decentralized setting, in general that leads to very heavy models. The way we're simplifying the problem and cutting it up is by looking empirically at the problems we see today. A lot of those problems, as you mentioned with MetaMask and things like that, come exactly when you are in a restricted environment and have to use some kind of service, some request-response kind of thing, and this is exactly what we're trying to address with things like the service credentials. There's also a lot going on when it comes to discovering, finding, and ranking these types of nodes. But when we look at the relay network, just like with GossipSub, the participants in the network are usually incentivized to relay messages. Furthermore, for anonymity reasons, we encourage the use of a single pubsub topic, which means that you're relaying messages for the network in general, because you don't necessarily know their contents. So it's kind of a by-product of normal operation. It depends a bit on what layer you look at, whether it's normal operation or the edges, and then reputation is another aspect of it.

So if there's no direct incentive in the network, it's just a side effect of producers wanting more users? Kind of. There are these levels to it, right?
There are different types of incentives, and the relay network itself is empirically not the big problem right now. It is definitely a research area and something we're looking into, and we have some ideas around it, but it's not actually a high priority if you look empirically at the problems we're having. The problems we're having are more around the edges: service provisioning, as well as how you discover things, web browsers, reputable nodes, these types of things. So that's higher priority, empirically, but things might evolve.

Do we have any more questions? I have two questions. First, is RLN based on Semaphore? Second, is there any gas payment involved in the zero-knowledge verification, and if there is, how do you prevent exposing your on-chain identity through the gas payment?

Yes, that's a good question. So RLN, you can look at it as kind of Semaphore, but with more things on top to actually prevent double signaling: Semaphore plus some other stuff to detect that and then do the slashing and so on. As for the ZK verification, this is actually something that's a bit different with this peer-to-peer infrastructure, because it's not happening on the blockchain as it normally would. You're verifying it on each peer-to-peer node that's running this thing, so it doesn't require gas usage. When it comes to slashing, if you've detected double signaling, then you obviously have to go on chain and interact with it, and that will also verify the proof, but you can use relayers and these types of things. As part of normal operation, though, you're not actually interacting with the blockchain; it's all happening peer-to-peer.

Unfortunately, time is up. But thank you, Oscar. Again, I'm going to repeat that you can go talk to Oscar outside of the room, so please follow up on your conversations there. So again, thank you.