I'm Piper, as she said, and I'm here to talk to you about the Portal Network. So let's get right into it. I work for the Ethereum Foundation, as I was introduced. This is about actually, finally, bringing lightweight, decentralized access to the protocol. And yes, this has been a long-term project to get us to where we're at. So let's dive in at a high level on what the Portal Network is. For me, it's a giant white whale that I've been hunting for a long time, and hopefully it doesn't eat me in the end. The Portal Network is five new decentralized, special-purpose storage networks that serve all of the data necessary for interacting with the Ethereum protocol. It has been a long road to get here. We spent time way back trying to build lightweight clients on the existing networks, and where we ended up was that the existing networks don't give us what we need to actually deliver lightweight protocol access. Thus, we have these five special-purpose storage networks that we are building out to serve this data and to essentially realize this dream of lightweight access to the protocol. The project has some high-level design goals that informed what we needed to build and how it needed to get built. One of the main things is that the Portal Network is really user focused. All of the clients that you hear about today, Geth, Besu, Lighthouse, all of that stuff, are infrastructure for the protocol, and those clients are built with the protocol in mind. The user-facing stuff, the JSON-RPC API, is not their top priority. At the end of the day, those clients serve the protocol and not the users. One way to look at this is that if you want to interact with the Ethereum protocol today, you have two choices: the upper-right or lower-left corner of this graph. You can run a full node. They're very heavy, and they're also awesomely decentralized.
There's a number of choices for what to run, but in general, when we're talking execution layer, you're talking about very heavy pieces of software. We'll get into the details of this in a minute. On the other end of the spectrum, you've got lightweight but centralized options like Alchemy or Infura. Somebody at the last conference told me that I was a boomer for calling it Infura. I thought that was awesome. So anyway, you've got these lightweight options, but they're also centralized, and they can do things like correlating your IP address with the transactions you send, or selling your data. Those are two options at the far opposite ends of the spectrum, and we want to build the thing in the upper-left corner, this adorable pink smart car that is both lightweight and decentralized, which supposedly we care about. This brings us to the lightweight concept. Like I said, Ethereum clients are heavy, and we need a network that allows lightweight devices to participate. Ethereum clients are heavy today because they have to do a lot of things. EVM execution is CPU-intensive. Running the transaction pool is CPU-intensive. And there are gigabytes upon gigabytes of history and state data that they need to store. This means that running a traditional Ethereum client is an inherently heavy thing, and you generally can't do it on things like Raspberry Pis or phones. Over here on the left, we've got this nice little strong guy who can hold it all up. That's your traditional execution-layer client. Our goal is building out a network that takes the load for all of this data and distributes it around all of the participants of the network in a nice, even way. The other thing that we focus on is removing some of these height restrictions, and by height restrictions, I mean essentially hardware restrictions that keep you from joining the network.
This is one of the things that blocked us from building a lightweight client years ago: you've got these "you must be this tall to ride" requirements. You are not allowed to be a participant in the devp2p network, the network that supports execution-layer clients, unless you have all of the state, all of the history, enough processing power to process every block, and enough processing power to run the transaction mempool. If you aren't tall enough, you're not allowed into that network. We focused on a different model. We needed a network that allows people, clients, computers, whatever, to show up with whatever they have and contribute it to the network if they're so willing. The idea is that all of these networks operate on the exact same principle: as a client on the network, you can tune some parameters that dictate how much storage space the network is going to ask of you and how much processing power the network is going to ask from you. One of the other major things that we focused on is the UX, which means, to me, elimination of sync times. Traditional clients have bad UX in terms of the user-facing stuff, because when you start one up, you've got hours or days to wait for it to sync, and if you go offline for some period of time, you often have additional time to catch back up to the tip of the chain. These sync times make the UX for end-user interactions basically unbearable. This is why almost all of the userland traffic goes through services like Alchemy or Infura.
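To give a rough sense of what "tunable parameters" can mean here: networks of this family use a Kademlia-style distance metric, and a node can advertise a storage radius so that it only keeps content whose ID falls close to its own node ID. Shrinking the radius shrinks what the network asks of you. This is a sketch of the idea only; the names and constants below are illustrative, not the actual wire specification.

```python
# Sketch of radius-based storage in a Kademlia-style network.
# All names (radius, node_id, content_id) are illustrative.

NODE_ID_BITS = 256
MAX_DISTANCE = 2**NODE_ID_BITS - 1

def distance(node_id: int, content_id: int) -> int:
    """XOR distance metric, as used in Kademlia-style DHTs."""
    return node_id ^ content_id

def should_store(node_id: int, content_id: int, radius: int) -> bool:
    """A node stores content only if it falls within its radius.
    A smaller radius means the network asks less storage of you."""
    return distance(node_id, content_id) <= radius

# A node willing to cover roughly a quarter of the keyspace:
node_id = 0xAB
radius = MAX_DISTANCE // 4
print(should_store(node_id, node_id ^ 0x01, radius))  # nearby content -> True
```

A phone might advertise a tiny radius and a home server a large one; either way, both are full participants rather than being turned away at the door.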
We've designed a system where all that you need is a view of what you trust as the front of the chain, and from there, all of the data is accessible to you. That's the difference between hours or days of sync time before you can actually interact with the network, and being able to interact with it in a matter of seconds, or under the worst network conditions a minute or two, as your client peers with other nodes in the network. So we're really focusing on user-facing things like this. The other piece that we needed was something scalable, and we're not talking about that kind of scaling, not transactions-per-second scaling. This is scaling the number of network participants: having potentially millions of nodes be part of this network. Some of the past work towards Ethereum light clients is LES, the Light Ethereum Subprotocol. It has been around for five years, maybe, and it has never really delivered on this goal. The main reason is that it exists in a client-server architecture. LES nodes on the network are dependent on full nodes serving them data, and what happens over time is that a full node serving this data ends up getting assaulted by all of these LES nodes constantly asking it for information, and these are expensive requests that the nodes are making. It hasn't turned out super well. LES has not delivered reliable light protocol access, and the main reason is this client-server imbalance. There are no incentives to run LES servers. Running an LES server just costs you something, so the ones that are out there are being run out of the goodness of hobbyists' hearts, or by people who misconfigured their client by accident; either way, it costs you something. And the clients in this network are not able to contribute back.
As an LES client, you are purely taking from the network; that is just the way the network is designed. Infura and Alchemy are the centralized model: their servers go down, everything stops working. LES is a decentralized model, which we like, but because there aren't incentives for people to run LES servers, it hasn't worked out super well. We've moved all the way to a distributed model, where we have a homogeneous network in which everybody is a client and everybody is a server. One way to think about this is that it's very akin to BitTorrent. In LES, you have this degenerative thing where the nodes you add to the system consume a limited amount of capacity, and once you exceed that capacity, it degrades service for everybody. In the Portal Network context, we have built these networks around the idea that the more nodes you throw at it, the more powerful it gets. That's the core of this. All right, let's look at a practical example of how you would serve, essentially, a balance inquiry from the Portal Network. I'll remind you, we've got a number of different networks here, and the idea is that they're all special purpose, partitioned off from each other. Clients can be part of any number of them that they want. This example is going to touch three of our networks. What we're going to do here is a very simple example: we're looking up your ether balance. So this is our traditional client. You've got databases over here on the right where it's storing information, and it's running a JSON-RPC server. A request comes in to query my balance. The JSON-RPC server is going to do a couple of things. It's going to reach into an index to figure out what the client thinks the head of the chain is. It's going to look up the header for that block from whatever database it stores headers in.
Once it gets that back, it can look at a field inside of that header to see the state root, and then it reaches into the state database to actually read your account balance. This all happens very quickly under the hood, and the reason a traditional Ethereum client can do this is that it is maintaining these databases: it is constantly online and constantly keeping them populated. The Portal Network concept is very similar. In the Portal Network context, when this eth_getBalance request comes in, instead of reading from local databases, a client is going to actually reach out into the networks that it's part of to get the data. We have a network for essentially tracking the head of the chain; it provides the beacon light client protocol data. Your client is going to reach in there to learn what the head of the chain is. It's going to use that to pull the header from the history network, which stores all of the historical block bodies, headers, receipts, things like that. And once it has that, it can see which state root it's supposed to be looking things up under, and it can reach into the state network to grab that state, look up your balance, and return it to you. This is a very simplistic example, but it's very representative of what the majority of requests are going to look like: a little bit of sampling of data from different networks in order to get the information that you need before it's returned to the user. All right, where are we at? Like I said, this has been a long road to get here. We had to build some of the wrong things to figure out what the right thing to build was. At this stage, we are past the research stage; we are purely in the get-it-built-and-out-the-door stage. We have three different client implementations. This is fantastic.
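The balance-lookup flow described above can be sketched in a few lines. Here the three dicts stand in for lookups against the beacon light client, history, and state networks; a real Portal client would do distributed network retrievals at each step instead, and every key and value here is made up purely for illustration.

```python
# Toy stand-ins for the three networks the example touches.
beacon_network = {"head": "0xheadhash"}                    # head-of-chain tracking
history_network = {"0xheadhash": {"state_root": "0xroot"}}  # headers, bodies, receipts
state_network = {("0xroot", "0xalice"): 10**18}             # accounts: 1 ether in wei

def eth_get_balance(address: str) -> int:
    head_hash = beacon_network["head"]           # 1. learn the head of the chain
    header = history_network[head_hash]          # 2. pull that block's header
    state_root = header["state_root"]            # 3. read the state root out of it
    return state_network[(state_root, address)]  # 4. look the account up in state

print(eth_get_balance("0xalice"))  # -> 1000000000000000000
```

Each step is one or more round trips to a different special-purpose network, rather than a read from a locally maintained database.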
I'm so happy about this because we wanted to build a protocol, not a singular client. We wanted something with many clients instead of just one reference implementation. We've got Trin, written in Rust by my team at the Ethereum Foundation. We've got Ultralight, written in JavaScript by the JavaScript team at the Ethereum Foundation. And we've got Fluffy, written by the Nimbus team at Status. So here's our rough timeline right now; software estimates are garbage. Imminently, we are right at the edge of getting our first network really fully up and running. That's the history network. In parallel to this, the merge has kicked off the beacon light client network, and that's our next major priority. After that, over the course of 2023, we are going to be getting the remaining networks online. The zero-to-one is a lot harder than building the subsequent networks that come after; we've spent a lot of R&D getting to this stage where we almost have the history network up and running. That is what I've got for you today. If you want to get involved, we are findable on the internet. Just like Danny has often said, the doors are all wide open and unlocked. If this is a project you'd like to get involved in, please feel free to reach out to me. And I've got some time for questions. I think we've got a guy with a mic walking around.

I have a question related to the API endpoints which the Portal Network can serve. Will it be able to serve debug endpoints that need full archival storage on nodes?

If I understand correctly, you're asking about the debug-namespaced JSON-RPC endpoints? Yeah. Not initially. All of the data to do things with those will in theory be present in the network, but we are really focused on human-driven wallet interactions; that's the primary use case we're focused on delivering, and the debug endpoints just don't play a role in that.
And in general they're going to involve a much heavier level of requests and data access than is involved in standard wallet interaction. So no, the debug-namespace endpoints are not officially supported. Thanks.

From the perspective of an application-layer client, is the thinking that an application would want to run one of these clients itself and speak to that, or that for some reason you would bundle the client into the application, or neither of those things? I'm just trying to grok what the use case is.

I think I understand the question. There are a lot of ideas on the table, and exactly which ones are going to stick and which ones aren't, we'll find out as time goes on. But the general idea was to build a network where the clients can be lightweight enough that you might have two or three of them actually running on your machine at any given time, because in theory they're lightweight enough to actually embed. So if you're downloading a desktop wallet, and the Portal Network is live, secure, and production ready, it's entirely likely that it might just bake the client right into the application that you're running, and there might be two or three others running alongside it. One of the things my team is going to be focusing on is a system-level process that's really easy to download and install, that just runs the thing in the background, which makes it easy for you to do things like connect MetaMask up to it. So I don't think that there's one model here. I think some applications might embed it; some might treat it as an external dependency.
Yeah, so something that I'm familiar with from the LES client is that when you want to execute a smart contract call, you might need to do several round trips, because every time you need to query state, you are going to go to the full node and ask for something, right? And that creates several round trips, which increases the latency of whatever you're trying to do. So with the Portal Network, are you going to have some solution for that, like trying to batch requests?

I think the question is: because the requests are going out to the network, there's going to be some inherent latency overhead that comes with that.

Not that precisely, but that you are going to do multiple requests. For example, if you want to do an ERC-20 balanceOf, you need to download the smart contract, then you need to start executing, and every time you need to access some part of the state database, you're going to go make another query. So you are basically doing multiple requests.

Multiple concurrent JSON-RPC requests, that's the concept here?

Yeah, I think currently you do that sequentially, so basically you are increasing the total latency of the balanceOf, but you could find a way of doing that concurrently, or batching, so that you only do a single request.

I think that's going to be a thing that individual Portal clients figure out on their own. There's nothing inherent about the networks that keeps you from looking up large swaths of data in parallel, and anything that can be parallelized at the networking level will definitely benefit the total latency that users experience.

Where do you, or do you, see zero knowledge being used in an implementation of a light client for the Portal Network?

Currently, it isn't part of any piece of our roadmap. It's not my expertise, so "I don't know" is the concise answer there.

Thank you. I just have one quick question.
What prevents me from running a light client and making it a freeloader that does no work? How do you protect against that?

We don't. There are two things that I'll say here. One is that we had to pick some cutoff points for what we were building, because this is big. I took a lot of stabs at making lightweight protocol access in smaller ways, and this is what came out of trying a bunch of things and having them not work. In order to deliver this, we needed to build something much bigger than I originally thought we were going to have to build, and in that, we had to cut it off at a point. The thing that we're building is attackable. There is attack surface that exists, and the general idea is that those are solvable problems, and we're not going to focus on absolutely making sure that we have them all solved on day zero. The Portal Network is not core infrastructure, not at the protocol level. The protocol as we know it does not depend on the Portal Network for anything, so if the Portal Network falls over, Ethereum does just fine. And initially, it's possible that somebody attacks it, and that probably means we're doing something that's working, because it's worth attacking. So that's the one piece: we have built something that we know we are going to have to hone and fine-tune and work on the security of.

I have a second question that might be leading. I'm also curious what happens if I choose to make my light client act maliciously and you ask me for the balance.

Oh, there we go, freeloaders, I've got you, yeah. The network is designed such that if there are too many freeloaders, it'll degrade performance for everybody. I'm relying on essentially two things, the first of which is the laziness of people. People are inherently lazy, and going and configuring your client differently from how it ships is something that people often won't do if it's working just fine.
And if we ship it with sensible defaults that aren't running your fans at full speed and aren't filling up your hard disk, the chances of you taking the time to go in and tweak those settings are pretty small. And, you can call it altruism or you can call it laziness, but we're fundamentally built on this idea that the small contributions of lots and lots of people add up to a lot. BitTorrent works; you can download things pretty fast on it because there are a lot of people doing it, and most of them aren't screwing with their settings too much to just leech from the network. So, can we get the mic over? Sorry, yes, I'll get you next.

Yeah, just a small question. You mentioned the beacon light client network as well. What sort of data are you planning to distribute in that network?

There are, I believe, three data types that we'll be working with; I am going to potentially botch this if I list them off the top of my head. Kim on the Fluffy team is the one leading the R&D and the specification for what that network serves. But it's the light client update objects, and there are one or two more; essentially, it's the minimal set of objects that we need to run the beacon light client protocol and jump to the head of the beacon chain.

I can't hear you; I'll repeat your question. You were wondering, when we're testing day to day, what user flows we are using for our tests. Well, we're designing at the JSON-RPC API level, so we're not necessarily building interfaces for people. We're taking the JSON-RPC API, which is the standard API that execution-layer clients expose to users. This is what Alchemy generally exposes; these are the things that MetaMask is calling into. So while we are user focused, we are building out clients that serve the JSON-RPC API, which is still a low-level thing. You're still talking about computers talking to computers.
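For concreteness, this is the level the clients are being designed against: a standard JSON-RPC request like the one below comes in from a wallet, and a Portal client answers it from the networks rather than from local databases. The address is just an example value.

```python
import json

# A standard eth_getBalance JSON-RPC request, as a wallet would send it.
# The address is an arbitrary example, not tied to anything real.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0xd3CdA913deB6f67967B99D67aCDFa1712C293601", "latest"],
}
print(json.dumps(request))
```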
The wallets and things that get built on top of this, that's the type of user testing you're potentially asking about, and that's outside of our purview. I guess you could say that the wallets are our primary clients, and that the users are the wallets' clients.

Two questions. Are we talking about one-to-one JSON-RPC compatibility? And the second question is, what does the roadmap look like for L2s? Because light clients are inherently very useful for cheap operations, and that's kind of where stuff is going.

I think you asked whether we're building one-to-one compatibility with the JSON-RPC API. We are not trying to redefine the JSON-RPC API. It is established, it has been successful, and it is generally the backbone of any kind of Ethereum interaction that wallets and things are making, so we are building off of the existing standard. I don't have an answer for you on the L2 thing. It's an open question; we'll see how it goes. And that is all of my time. Thank you all.