I've been hoping to talk about this for a long time. This is a long-term piece of work for a lot of us, because this lab has been pioneering a ton of amazing contributions to the ecosystem around scaling, in a bunch of different ways. And this one, I think, is going to be one of the greatest contributions this lab makes to the whole blockchain space. Beyond that, this might actually be a protocol that ends up going mainstream, in data centers and so on, outside of a blockchain context, because it gives you Byzantine fault tolerance in a very nice, scalable setting, and it has very nice properties for a lot of classic distributed systems applications. You can think of doing cluster management, large-scale computational arrangements, and so on with much stronger security than traditional consensus protocols.

First, I want to mention something I call the interplanetary principle. This is something the IPFS community came up with in 2014-15, after we wrote the initial paper, so it never made it in there; we should write a paper about it. I'll come back to it.

First off: Web3 needs to become web-scale. Today, Web3 is not at all web-scale when you compare it to traditional Web2 workloads, which require massive bandwidth and massive transactional throughput. This is partly why Bitcoin and Ethereum were disregarded by the traditional cloud people: it seemed crazy that a transactional system that could do only a fraction of what your phone can do was going to run the entire monetary system. But they missed the point. The point is to get to a much better model: permissionless, large-scale, Byzantine fault tolerant networks with an economic construction inside them. That is a much stronger way of building distributed systems and applications, but we need to make them scale. We need to use the learnings from the distributed systems world to scale these things. That's our target. We really mean to be able to run all of those kinds of workloads over these systems. I know that sounds crazy, but you'll see over the next few years.

Right now there's a consensus bottleneck: you try to push in tons of transactions and everything gets bottlenecked, so we want to scale past that. I think everyone here is pretty familiar with it.

Let's talk about the internet for a moment. One key thing is that the internet partitions; partitions happen all the time, so your systems have to be partition tolerant. Today, applications and blockchains are not at all partition tolerant. If you lose connectivity to the mothership, you're hosed; things don't work well. But the internet requires partition tolerance, because things will break apart: it's a huge graph with lots of different pieces, and links come up and down all the time. So today you cannot run a mission-critical application that touches a blockchain, because blockchains are not partition tolerant.

So think of this set of properties; I think you already saw a bunch of these. This is what we're shooting for: billions or trillions of transactions per second, parallelized across a large scale, with fast local finality. I'm pretty serious about these numbers. At the root level you have Earth-scale consensus; that could be three to five seconds or so. That's pushing it, and maybe you could get faster, but then you want to be able to drop down to city or region level at around 100 milliseconds or lower, and then get to around one millisecond inside a data center; one, two, five milliseconds should be doable. That is something I think these networks will be able to do. It's a bunch of work to get there, but with the subnet structure, you can do it.
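To make those latency tiers concrete, here is a minimal sketch in Go of picking a subnet level by finality budget. The tier names, numbers, and types are illustrative assumptions drawn from the rough targets above, not IPC's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// Tier is a hypothetical label for where in the hierarchy a subnet sits.
type Tier struct {
	Name           string
	TargetFinality time.Duration // rough finality budgets from the talk
}

// Illustrative tiers only; real numbers depend on consensus and topology.
var tiers = []Tier{
	{"datacenter", 1 * time.Millisecond},
	{"city/region", 100 * time.Millisecond},
	{"earth-root", 5 * time.Second},
}

// pickTier returns the widest tier whose finality still fits the budget.
func pickTier(budget time.Duration) Tier {
	chosen := tiers[0]
	for _, t := range tiers {
		if t.TargetFinality <= budget {
			chosen = t
		}
	}
	return chosen
}

func main() {
	budgets := []time.Duration{2 * time.Millisecond, 200 * time.Millisecond, 10 * time.Second}
	for _, b := range budgets {
		fmt.Printf("budget %-8v -> run in %s subnet\n", b, pickTier(b).Name)
	}
}
```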
So what happens when you can have the traditional security properties of a blockchain and smart contracts while running at millisecond-level loops? That's what we're talking about; that's the promise and potential of this. And think of the throughput: millisecond loops plus trillions of transactions per second. We're talking about a massive-scale operation.

We also want all of this to be safe in the traditional blockchain sense, against nation-state attackers. A bunch of parameters need to be tunable, because different applications have different requirements. We want horizontal scalability, so the network can meet demand over time as new applications appear. You also need to deal with different nations, or different applications, having different policy structures: you want to be able to evolve the network in pieces without forcing the rest of the world to conform. And of course you want the other traditional things the blockchain world wants, like encrypted transactions. So that's some of the thinking we had.

Now, IPC is a hierarchical consensus. Hierarchical consensus is the broader idea: take consensus protocols and scale them by organizing them into a structure where you couple them, and the children derive some properties from the parent. This doesn't have to be a tree. I haven't read any paper about this, but you could probably construct DAGs here, so it doesn't strictly need to be a tree, and there's probably some extension where you don't even have directions: meshes of consensus protocols, which lands you closer to an internet-style peering structure. But that is, I think, way harder to get working right. The hierarchy really helps, in the traditional distributed systems way: hierarchy makes things a lot simpler.

IPC does this in a way that couples with the blockchain model: it enhances the security of the children based on the security of the parent. And you want to be able to move assets around, so you want very strong guarantees there. If you were doing something simpler, say state machine replication or eventually consistent structures, you could have a much simpler protocol. If you want hard security, that's a much harder problem.
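As a rough illustration of children deriving properties from the parent, here is a toy subnet tree in which each child periodically commits a checkpoint of its state into its parent. The structure and names are my assumptions for the sketch, not IPC's actual interfaces; a real checkpoint would carry validator signatures that the parent verifies:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Subnet is a toy model of one consensus instance in the hierarchy.
type Subnet struct {
	Name             string
	Parent           *Subnet
	Children         []*Subnet
	childCheckpoints map[string][32]byte // digests anchoring each child's history
}

func NewSubnet(name string, parent *Subnet) *Subnet {
	s := &Subnet{Name: name, Parent: parent, childCheckpoints: map[string][32]byte{}}
	if parent != nil {
		parent.Children = append(parent.Children, s)
	}
	return s
}

// Checkpoint commits a digest of this subnet's recent state to its parent,
// so the child's history is anchored in (and borrows security from) the
// parent's consensus.
func (s *Subnet) Checkpoint(stateRoot []byte) {
	if s.Parent == nil {
		return // the root anchors itself
	}
	s.Parent.childCheckpoints[s.Name] = sha256.Sum256(stateRoot)
}

func main() {
	root := NewSubnet("earth-root", nil)
	region := NewSubnet("eu-west", root)
	dc := NewSubnet("dc-paris-1", region)

	dc.Checkpoint([]byte("dc state @ block 1000"))
	region.Checkpoint([]byte("region state @ block 200"))

	cp := region.childCheckpoints["dc-paris-1"]
	fmt.Printf("region anchors dc-paris-1: %x...\n", cp[:4])
	cp = root.childCheckpoints["eu-west"]
	fmt.Printf("root anchors eu-west:      %x...\n", cp[:4])
}
```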
Now let's actually talk about the interplanetary principle. It's something the IPFS community came up with. If you remember the end-to-end principle: inside networks you want to keep things dumb and stateless, and the endpoints have to do all the work. There's a kind of negative result behind it, which says that even if the network is really, really smart, you can't do everything there. So keep the smarts at the edges, and you might as well make things simple in the middle.

In that tradition of networking design principles, the interplanetary principle points out what happens when you deal with large-scale delays. It's "interplanetary" because, for us humans, it's easy to think about minutes and seconds; we have no good appreciation for milliseconds, microseconds, and nanoseconds. Our computers operate at very different scales there, but we have no intuition for them, so it's much easier to reason in terms of planetary-level delays. Imagine you were trying to load a web page from Mars, and every single time you clicked, you had to wait 4 to 24 minutes, depending on where the planets are, before the click got any feedback: you send something out and wait a long while before anything comes back. Instead, you should build systems to be delay tolerant and able to move content and structures to where you are: pay the expensive cost once, move all the state you could possibly need, then compute locally and minimize the use of links like that.

Now take the interplanetary principle and shrink it down to this room, the cloud, these computers, the caches, RAM, and so on. You end up with exactly the same ideas. In fact, the orders of magnitude here are even bigger than the interplanetary differences, but the principle stays: minimize the round trips in every kind of communication. The reason this came up in IPFS is that if all of us in this room were editing a Google Doc together, or sending slides from one computer to another, it would be really great to go from this physical device here to that device over there, type away, and never have to go through the cloud with its potential partitions and delays. Imagine editing a Google Doc with 24-minute delays; you'd go crazy.

So: design distributed applications with tolerance for interplanetary delays. If you force yourself to account for that, and to be extremely careful about where you introduce delays, you produce much better designs: partition tolerant designs that behave dramatically better on Earth's internet, because Earth's internet has planetary-scale delays from the perspective of computers, if not from the perspective of most humans. And not even all humans: many people sit behind extremely slow networks for whom these delays are awful. In regions with really poor connectivity, page loads can take 30 seconds to a minute. Just go to a super crowded conference venue and try to load a page; it's super, super slow.

If you do this right, it leads to local-first applications that sync to the cloud second. And that is the same kind of principle we're using in IPC.
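A minimal sketch of that local-first pattern, assuming hypothetical interfaces (nothing here is an IPFS or IPC API): commit every edit locally right away, and drain a queue to the remote side whenever a link happens to exist:

```go
package main

import (
	"fmt"
	"sync"
)

// Doc is a toy local-first document: edits commit locally first,
// and a background process pushes them upstream when a link exists.
type Doc struct {
	mu      sync.Mutex
	lines   []string
	pending []string // edits not yet acknowledged upstream
}

// Edit commits locally and returns immediately: no round trip on the
// critical path, which is the whole point of the interplanetary principle.
func (d *Doc) Edit(line string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.lines = append(d.lines, line)
	d.pending = append(d.pending, line)
}

// Sync drains pending edits when connectivity returns. send is whatever
// transport is available; it may fail for hours (or 24 minutes, Mars-style).
func (d *Doc) Sync(send func(string) error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	remaining := d.pending[:0]
	for _, e := range d.pending {
		if err := send(e); err != nil {
			remaining = append(remaining, e) // keep it; retry next Sync
		}
	}
	d.pending = remaining
}

func main() {
	var d Doc
	d.Edit("hello")
	d.Edit("world")
	// Offline: sends fail, edits stay queued, the user keeps working.
	d.Sync(func(string) error { return fmt.Errorf("no link") })
	fmt.Println("pending while offline:", len(d.pending))
	// Link restored: the queue drains.
	d.Sync(func(string) error { return nil })
	fmt.Println("pending after sync:", len(d.pending))
}
```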
This yields support for mission-critical applications. "Mission critical" here means something like running an emergency service over messaging that is reliable and robust and keeps operating even if something else in the world falls apart. It also fits mobile and poor-connectivity regions much better. So: design with the interplanetary principle.

And by the way, we are serious about going to these places. Think of these kinds of networks as how we can actually go to the moon: you could have the first blockchain systems on the moon by leaning on these kinds of designs. (Sorry, I'm flipping through slides out of order.) Imagine an environment like that, with all these nodes computing locally: you should be able to store and retrieve files there and never have to clear a transaction back on Earth. It would be crazy to have to make that round trip.

So let's talk about applications. I'm going to elide the entire design, because you just heard about it; the previous presentation was entirely about how we build this. I wanted to give you the intuition for the design and why it works that way, so that you can start thinking about the applications.

First, large-scale Web3 applications become possible through the transactional throughput I was describing. If you can run at those scales, with those latencies, you can actually start building those kinds of things. You can also build much better bridges and composability to other networks, because you can create subnets that are co-located with those other networks and move state between them, without having to deal with potentially extremely expensive and difficult cross-L1 environments. You could have a much faster, smaller chain that couples to one network, move state into it, then travel over to the other network and move state there. You can have those kinds of connectivity and bridges, though that would require linking to something else not as a parent but through some other kind of relationship.

There's also a bunch of traditional consensus applications, like the cluster management I was describing, that you can now do in a Byzantine setting using IPC. Think of really massive networks with a bunch of different computers run by different people, or applications run by different people, that want to transact really fast; think high-frequency trading and the like. You can now do that in this kind of setting, with hard consensus guarantees.

I also claim you could run traditional backends, in a Web3 sense, with optimistic or zero-knowledge-proof-based verifiability. The verifiability might be a bit expensive, but you make it up in the scale-out: by fanning out across so many computers, you can pay off a few orders of magnitude of overhead when running a traditional, standard web-app backend in this zero-knowledge setting. And you get the ability to run tons of these, because you don't have to compute all of them everywhere; you run subnets of them.
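A back-of-the-envelope version of that scale-out argument. The specific numbers below are invented for illustration; the talk only claims "a few orders of magnitude" of overhead paid off by fan-out:

```go
package main

import "fmt"

func main() {
	// Illustrative assumptions, not measurements:
	const (
		perSubnetTPS = 5000.0   // native throughput of one fast subnet
		zkOverhead   = 1000.0   // assume proving costs ~1000x native execution
		subnets      = 100000.0 // subnets running in parallel in the hierarchy
	)

	// With verifiability, each subnet only manages perSubnetTPS/zkOverhead.
	perSubnetVerifiable := perSubnetTPS / zkOverhead

	// But subnets execute independently, so the proving overhead is paid
	// off by fan-out: aggregate throughput grows linearly in subnet count.
	aggregate := subnets * perSubnetVerifiable

	fmt.Printf("one subnet, verifiable: %.0f tx/s\n", perSubnetVerifiable)
	fmt.Printf("whole hierarchy:        %.0f tx/s\n", aggregate)
}
```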
You can also do regional blockchain applications. Think about money: you want to spend money in different places, use a financial contract in a particular environment, travel to different localities, and have all of that work whether the internet is operating or not. If you have ever tried to pay for food with Bitcoin, you have experienced an amazing thing called the block time delay: you'll be standing there trying to pay for 10, 20, potentially 50 minutes if the other person doesn't trust you and has read the papers. In reality, most people just watch the transaction get broadcast and call it done, and even that is super, super slow. Compare that to a contactless payment from a mobile phone: you tap, you're done, and you leave. That's what we need; we need to get to that level of operation. To do that, you need very high-throughput chains with local clearing times, operating in that locality. And it should work on a plane: you should be able to be on a plane, disconnected from the rest of the world, and have that kind of transaction happen without requiring connectivity. Suddenly all of this smart contract potential becomes possible in those environments.

One of the things that has held back financial-instrument adoption on blockchains is a mismatch: they're really useful in a lot of environments, but the more modern economies don't need them as much, because their financial infrastructure works pretty well, while the countries where the financial infrastructure sucks have terrible connectivity. Those users can't use blockchains because they can't even load the web pages, let alone wait 30 minutes for a transaction. These systems could be extremely useful for building a better financial stack in those places over local connectivity: remittance use cases, local money, mobile money, and so on, done much better.

I already talked about partition tolerance and mission-critical applications. One interesting example you might not know about: Filecoin is building a CDN, and the way we're doing it is by thinking in regions. We're going to partition out a lot of nodes, assign them to regions, and treat those regions as subnets: each region of the world maps to a specific subnet, you assign nodes to those environments, and they transact with their local peers from there. So picture the structure: a bunch of retrieval clients somewhere in the world want very low-latency retrieval, and they interact directly with some node; that, of course, doesn't go over the chain. But the cluster management, meaning which nodes are serving that data and what data they should store, all of that traditional coordination, can be done in an IPC subnet in that region, and it becomes partition tolerant. You no longer have to interact with the rest of the world. The connectivity to the rest of the world falls apart? No problem: you can still load your web page, you can still find out how to get to a hospital, because the pages still load.
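Here is a sketch of that regional cluster management with invented types (this is not the CDN's actual code): node membership and content assignments live in the region's subnet, so coordination never leaves the region, while retrievals go node-to-client off-chain:

```go
package main

import "fmt"

// RegionSubnet is a toy stand-in for the per-region IPC subnet that
// coordinates a CDN region: which nodes exist, and who serves what.
type RegionSubnet struct {
	Region      string
	nodes       map[string]bool
	assignments map[string][]string // content ID -> serving node IDs
}

func NewRegionSubnet(region string) *RegionSubnet {
	return &RegionSubnet{
		Region:      region,
		nodes:       map[string]bool{},
		assignments: map[string][]string{},
	}
}

// Join and Assign stand in for on-chain transactions in the regional
// subnet; neither needs connectivity outside the region.
func (r *RegionSubnet) Join(nodeID string) { r.nodes[nodeID] = true }

func (r *RegionSubnet) Assign(contentID, nodeID string) {
	r.assignments[contentID] = append(r.assignments[contentID], nodeID)
}

// Lookup is what a retrieval client consults before fetching directly
// from a node (the fetch itself never touches the chain).
func (r *RegionSubnet) Lookup(contentID string) []string {
	return r.assignments[contentID]
}

func main() {
	eu := NewRegionSubnet("eu-west")
	eu.Join("node-a")
	eu.Join("node-b")
	eu.Assign("site-frontpage", "node-a")
	eu.Assign("site-frontpage", "node-b")
	fmt.Println("serve site-frontpage from:", eu.Lookup("site-frontpage"))
}
```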
Another one I think you heard about already today is compute over data. We're building out a set of networks around different primitives for verifiability and privacy, which will give you different decentralized compute networks. These could all be L2 networks, each coupled to Filecoin or to some other network, and IPC can be an extremely useful way to interface between all of them, because it gives you a really nice way of bridging and moving state between them even if they're built separately.

The reason you build them separately is that this triangle of performance, verifiability, and privacy is a really punishing triangle. Suppose you try to do what Filecoin does, but for general computation, the way Lambda or EC2 do it. When you try to decentralize any kind of computation, you have to add verifiability, because you need to certify that the work was done correctly, and you have to add privacy, meaning the data should not be readable by the party computing on it. If you use cryptographic methods for this, it's extremely expensive: adding verifiability is super expensive, and adding privacy is way more expensive still. So you end up in a world where the centralized cloud is high-performance and everything else kind of sucks. There are still many applications where you want these properties, but only in those applications; you don't want to pay for them everywhere. So we're going to have many different computational networks based on different cryptographic approaches: zero-knowledge compute networks will be different from fully homomorphic encryption networks, which will be different from multi-party computation networks, and so on. What we need is to enable those different networks to interface together really, really well. If you do that well, you can get to massive-scale data science, all of it coordinated, with the data pipelines built on IPC.
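One hedged way to picture "make those networks interface well" in code: a common interface over compute networks with different cryptographic backends, chained into a pipeline whose hand-offs would, in an IPC world, be state moved between subnets. All names here are hypothetical:

```go
package main

import "fmt"

// Job is a unit of compute-over-data work.
type Job struct{ Program, InputID string }

// Receipt is what a compute network returns: a result plus whatever
// attestation its cryptography produces (zk proof, MPC transcript, ...).
type Receipt struct {
	ResultID    string
	Attestation string
}

// ComputeNetwork abstracts over zk, FHE, and MPC networks so that an
// IPC-based pipeline can route jobs without caring about the backend.
type ComputeNetwork interface {
	Name() string
	Run(Job) (Receipt, error)
}

type zkNet struct{}

func (zkNet) Name() string { return "zk-compute" }
func (zkNet) Run(j Job) (Receipt, error) {
	// A real network would execute the job and produce a succinct proof.
	return Receipt{ResultID: "result-of-" + j.InputID, Attestation: "zk-proof"}, nil
}

type mpcNet struct{}

func (mpcNet) Name() string { return "mpc-compute" }
func (mpcNet) Run(j Job) (Receipt, error) {
	return Receipt{ResultID: "result-of-" + j.InputID, Attestation: "mpc-transcript"}, nil
}

// runPipeline chains jobs across heterogeneous networks; each hand-off is
// where IPC's bridging would move state from one subnet to the next.
func runPipeline(stages []ComputeNetwork, input string) (string, error) {
	for _, net := range stages {
		r, err := net.Run(Job{Program: "step", InputID: input})
		if err != nil {
			return "", err
		}
		fmt.Printf("%s: %s (%s)\n", net.Name(), r.ResultID, r.Attestation)
		input = r.ResultID
	}
	return input, nil
}

func main() {
	out, _ := runPipeline([]ComputeNetwork{zkNet{}, mpcNet{}}, "input-dataset")
	fmt.Println("pipeline output:", out)
}
```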
And really, the target use case I've been working on with that team is to run entire virtual worlds: actually put a game like Minecraft in there. (This is normally a GIF, but PDF doesn't support GIFs, so we won't see it animate.) Imagine an entire virtual world like Minecraft, with all the interactions of the players, at that speed, tracked with the hard verifiability of consensus in a blockchain setting. That's where we're headed: this kind of massive-scale computation, not just for one virtual world, but for all the shards of all the worlds that all the players want to play. If you've worked on game design, you know you end up creating tons of different servers because people want to play with different subsets of each other and so on. You'd be able to create those kinds of environments. That's what we're shooting for.

Cool. Hopefully this was a good summary of IPC applications. Again, sorry for being late and for messing up the slides. Any questions?

[Audience] You were saying that the internet architecture (go back, back... yeah, take your time) has some partition problems, right?

Yeah. This is the real shape of the internet, and a bunch of these links are going to break.

[Audience] Yes, of course. The thing is, most of the systems we're creating at the moment sit at the application layer, while that picture is more about how the computers, the ASes, are connected physically, and the protocols at that level.

Yes. So imagine writing a consensus protocol where the ASes and BGP could be negotiated in a crypto setting; you would need to be able to operate where the connectivity itself is changed by the rules you're deciding, right?

[Audience] The question is exactly that: at which level, at which layer, do you think we can create protocols that effectively address those problems?

What I would say is: we probably need one consensus layer for the whole solar system, one underneath that for the inner planets (this is a latency argument), one for Earth, one for Mars, and so on. Then, once you're within Earth, you land in layers like these: one layer for the big regions, like the AWS and Google Cloud regions (the decisions around those regions are really good; you don't need to fight them), and then one layer per data center, so you can go to a specific data center and have a layer there. So it's something like four, five, or six layers. However, you don't have to start at the top. You can start with the Earth layer and later create a new root and migrate under it. Or take the current root, slow it down, change the block time, move it up into space, and keep going from there. Does that make sense?

[Audience] Yes, yes, thank you.

So it really is meant to be interplanetary, just so we're clear. Any other question? Oh yeah, over there.

[Audience] Thanks Juan, real quick: it sounds like this breaks the CAP theorem, because you have both consistency and availability?

No, you don't get full availability across the entire network. This is partition tolerant, which means you have one partition with consistency inside it, and another partition, another consensus subnet, with consistency in its region. You cannot transact with the other side while you don't have connectivity; to do that, as Alfonso was explaining, you have to do these cross-subnet transactions. The insight is that putting everyone in the same consensus protocol doesn't make sense. What you should do is move the state ahead of time to where you want to transact, transact really fast there, then move it somewhere else. This is a traditional computer science engineering point: if you want to edit data really fast, you move it from the rest of the world onto your disk, from your disk into RAM, from RAM into the L3, L2, L1 caches, and then feed it into the CPU, moving things around depending on how fast you want to go. Same here: if you want to go fast with partition tolerance, move the state in. Just as you keep some files on your own computer so you can work disconnected from the internet, you move some of your smart contracts and some of the state into a subnet, and then you can operate.
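A sketch of that "move state close, then transact" pattern with invented interfaces. Real cross-subnet transfers in IPC are mediated through the hierarchy with checkpoints and proofs; this toy version collapses the transfer into one step to show the shape of the flow:

```go
package main

import "fmt"

// Ledger is a toy subnet ledger: just balances.
type Ledger struct {
	Name     string
	balances map[string]uint64
}

func NewLedger(name string) *Ledger {
	return &Ledger{Name: name, balances: map[string]uint64{}}
}

// MoveTo escrows funds here and credits them on the destination subnet.
// In a real system this is a multi-step, checkpoint-verified transfer;
// here it is collapsed into one step for illustration.
func (l *Ledger) MoveTo(dst *Ledger, acct string, amt uint64) error {
	if l.balances[acct] < amt {
		return fmt.Errorf("insufficient funds on %s", l.Name)
	}
	l.balances[acct] -= amt
	dst.balances[acct] += amt
	return nil
}

// Pay is a local, fast transaction: no link outside the subnet needed.
func (l *Ledger) Pay(from, to string, amt uint64) error {
	if l.balances[from] < amt {
		return fmt.Errorf("insufficient funds on %s", l.Name)
	}
	l.balances[from] -= amt
	l.balances[to] += amt
	return nil
}

func main() {
	root := NewLedger("earth-root")
	dc := NewLedger("dc-subnet")
	root.balances["alice"] = 1000

	// Ahead of time: move only what you need into the fast, local subnet.
	root.MoveTo(dc, "alice", 100)

	// Then transact at local speed, even if the root is unreachable.
	dc.Pay("alice", "bob", 30)

	// Later, settle back up the hierarchy.
	dc.MoveTo(root, "bob", 30)
	fmt.Println("root alice:", root.balances["alice"], "root bob:", root.balances["bob"])
}
```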
But it doesn't break CAP; CAP still applies, because you only get consistency there. You have to wait until you reconnect to the rest of the group to get full consistency and a full view of the entire world. That means you keep your most valuable assets on the safest consensus, all the way up at the top, and move only the stuff you want to transact fast into those lower spots.

[Audience] So would there be eventual consistency across the hierarchy, then?

Yeah, but it's hard consistency locally. You do eventually get consistency, but let's not use that term, because the traditional literature on eventual consistency allows different versions; it's a sloppy system by design. This is not sloppy: when the system returns, you get the result, just after a while. So in phrasing it is eventual consistency, but without the sloppiness. You don't get two values; you only get one. If you ever do get two values, that's a slashable consensus condition, and whoever signed them gets wrecked. So it's not too bad.

All right, cool. Thank you. There's one last question over there; I think part of it was already asked. Yes? No? Okay, a quick question.

[Audience] How do you prevent excessive centralization at the lower levels of this hierarchy?

At the root?

[Audience] At the bottom. It's easy to be cheap if you're super centralized, right?

That can actually be fine. Think of that area as a bounded security layer with a bounded economy: you move your contracts and assets in there if you're willing to play by those rules, and you just don't move all of your value in. When you go out in the middle of the night, you don't carry all of your assets with you; you carry some, and if you get mugged you lose some money, but you don't lose everything. So it can be okay to be centralized at the bottom. Or rather, the context is different, so you should choose whatever makes sense there. You might still not want centralization: if you're running a high-frequency exchange, you'll be moving massive amounts of money, billions to tens of billions of dollars, transacted really fast, so you want very hard guarantees; you don't want centralization, but you do want to be really fast. There, you choose the subnet because you want to go really fast: by moving into a data center you give up the ability to talk to everyone else in the world, because the speed of light is the problem, and that's okay, but you still keep decentralization inside it. And by the way, zero knowledge helps here: if you make everything verifiable in a second step, you get super nice guarantees, like L2-rollup-level guarantees, which is pretty cool.
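One mechanical detail from this Q&A worth pinning down: "if you get two values, that's a slashable condition." Here is a minimal sketch of that equivocation check, with invented types:

```go
package main

import "fmt"

// Vote is a toy signed claim: a validator asserting a value at a height.
type Vote struct {
	Validator string
	Height    uint64
	Value     string
}

// Detector records votes and flags equivocation: the same validator
// signing two different values at the same height. In the talk's terms,
// returning "two values" is exactly the slashable condition.
type Detector struct {
	seen map[string]string // key: validator@height -> value
}

func NewDetector() *Detector { return &Detector{seen: map[string]string{}} }

// Observe returns true when a vote equivocates against an earlier one.
func (d *Detector) Observe(v Vote) (slashable bool) {
	key := fmt.Sprintf("%s@%d", v.Validator, v.Height)
	if prev, ok := d.seen[key]; ok && prev != v.Value {
		return true // conflicting value at the same height: slash
	}
	d.seen[key] = v.Value
	return false
}

func main() {
	d := NewDetector()
	fmt.Println(d.Observe(Vote{"val-1", 42, "blockA"})) // false
	fmt.Println(d.Observe(Vote{"val-1", 42, "blockA"})) // false: same value
	fmt.Println(d.Observe(Vote{"val-1", 42, "blockB"})) // true: equivocation
}
```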
[Audience] All right, thank you.

All right, fantastic. Thank you so much.