I'm going to give a quick background on compute over data from the Filecoin perspective. I'm going to talk about scaling blockchain computation and a bunch of work that we have on that. I'll talk a bit about cloud computing as background for the decentralized computing networks part. I'll talk about a triangle that I think is a key consideration when designing decentralized computing networks. I'll give an overview of a bunch of common parts that we should be working on detangling and turning into libraries, and then I'll contrast those with the parts that are specific to particular networks. The goal of that part of the session is to get the whole room thinking about which of these pieces are meant to be specific to a computation network and which should be common libraries that everybody contributes to. And I want to finish up by talking about IPLD programs, which is a specific piece that is missing in order to enable a lot of this stuff. Cool, let's go.

So I'm going to assume a lot of familiarity with Filecoin already; I'm just going to pace through this quickly to page this stuff into your mind. This is the community roadmap. There are a lot of people working on a ton of things here. There's an enormous amount of growth in the storage provider world, so there are a lot of SPs scaling up their operations, both capacity and storage. You can see the growing amount of useful storage with Filecoin Plus and so on. My favorite graph right now is new committed deals in terabytes, which just crossed over a petabyte in the last few days. Or last week; the last few days have gone back down. I don't know why, but let's get back up there.

One important component of all of this is that the whole strategy of using developer-oriented on-ramps, meaning specific products tuned to a specific use case, has been extremely successful. When you're building a pretty general computational platform, you run the risk of being so general that it's very difficult to use. And we not only run that risk, we are very much in that environment: because the power is so large, you end up with extremely complex interfaces. But when you create a very specific, use-case-oriented developer on-ramp, you can narrow down the complexity and focus the APIs, the marketing, the tooling and so on on just that use case. That has been extremely successful, so I would imagine a whole bunch of these will emerge. I've already been in a bunch of discussions with people thinking about video.storage and archive.storage and all kinds of different on-ramps.

One other component is that we should be able to see faster use of the capacity through SnapDeals. The retrieval stuff is coming online, so we should be able to get sub-second retrieval now for whatever we think is in the hot data set. That means we can identify specific segments of data ahead of time and pre-cache them around the world to be able to retrieve them quickly.

And of course the most important piece of all of this is the FVM. The FVM is in active development now; Milestone 0.5, I think, shipped last week or at some point recently, and FVM Milestone 1 is ahead. I think what everybody really needs is Milestone 2, which is roughly Q3 or Q4 sometime this year. If you're not familiar with the FVM, I highly recommend going to check it out and understanding the ideas behind it.
The big takeaway for this conversation is that you should think of it as a piece of code that enables the Filecoin blockchain to run arbitrary programs, and that it's oriented as a hypervisor. It's meant to be able to run multiple different foreign runtimes. So think of EVM contracts and being able to run those on top of Filecoin, and being able to run many other kinds of smart contract machines. But another very important component here is that if you look at the bottom of the FVM, there's another interface that we haven't explicitly ripped out yet, maybe described as IPLD-wasm here, which is all of what you need to be able to execute IPLD-oriented programs. If you think of it as something like an IPFS VM, that's a component we're going to pull out, without all of the Filecoin-specific or blockchain-specific components, and then make that VM available to IPFS implementations. So imagine arbitrary IPFS implementations being able to run arbitrary code by addressing it with IPLD linking.

Cool, so there's a website for the FVM that describes a bunch of use cases. The key one here is decentralized compute, which is what we're all here to talk about.

All right, now let's get into the exciting stuff. There are several talks that people have given over the last year or two about different kinds of computation over data that you could do. There's Raul's talk announcing the FVM, which goes through a bunch of different use cases (the ones at the bottom left) and walks through the kinds of things that are going to become possible. I've given one on blockchain computation models that traces different ways of doing computation at scale, and the thrust of that talk is that in reality we're going to do the same thing the cloud computing world did, which is get to a point where we can do task schedulers and standard job issuance and so on in the traditional way, just enabled through blockchains. I'll dive into more detail there.

One thing that's already starting to happen is that we already have a whole setup where you have large-scale Filecoin storage providers with a ton of CPUs and GPUs that can then be used for computation. I think Charles is here; there's a picture of him demoing the computational platform that he and his team built. There's a notebook interface where you can run, I think, IPython notebooks and issue computation to the SPs that they run. This is the kind of thing we want to enable, just at large scale, with arbitrary kinds of computing and so on.

There's another whole part of this, which is a bunch of work we're doing to scale the amount of computation you can do in a blockchain. There's a whole other project run by ConsensusLab that you can look at, which describes the next generation of consensus protocols to be able to do this. I'll dive a little more into detail on this. The specifics you should be aware of are that we're really trying to target the level of scalability that the traditional cloud has. Most blockchains today have extremely slow transactional throughput; I think the fastest blockchain today does what, like 10,000 or 100,000 transactions per second, which is laughable compared to the cloud, right? The cloud does orders of magnitude more work.
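To make the IPLD-wasm / "IPFS VM" idea above a bit more concrete, here is a minimal sketch in Go. All of these interface names are made up for illustration and are not the FVM's actual API; the point is just the shape: fetch code and input by content address, run the module deterministically, and content-address the output so it can be linked like anything else.

```go
package ipldprog

import "fmt"

// CID stands in for a real IPLD CID: a content address of some bytes.
type CID string

// BlockStore is a stand-in for an IPFS/IPLD blockstore.
type BlockStore interface {
	Get(c CID) ([]byte, error)
	Put(data []byte) (CID, error)
}

// WasmRunner is a stand-in for the "IPLD-wasm" execution layer: it runs a
// deterministic wasm module over some input bytes and returns output bytes.
type WasmRunner interface {
	Run(module, input []byte) ([]byte, error)
}

// Invoke fetches a program and its input by content address, runs the program,
// stores the output, and returns the output's CID, so that results are
// content-addressed and linkable just like the code and the inputs.
func Invoke(bs BlockStore, vm WasmRunner, program, input CID) (CID, error) {
	mod, err := bs.Get(program)
	if err != nil {
		return "", fmt.Errorf("fetch program: %w", err)
	}
	in, err := bs.Get(input)
	if err != nil {
		return "", fmt.Errorf("fetch input: %w", err)
	}
	out, err := vm.Run(mod, in)
	if err != nil {
		return "", fmt.Errorf("execute: %w", err)
	}
	return bs.Put(out)
}
```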
Let me see if I can get through this quickly. The entire blockchain space has to reach this kind of large-scale, traditional web-scale or internet-scale level of computation. By back-of-the-envelope calculation, that really means billions of transactions per second, or trillions of transactions per second. That's where the blockchain world is headed. It's many orders of magnitude away from where we are today, but that's what we're working towards.

One other component here is that most computing applications require really fast finality. This whole nonsense of only getting finality tens of seconds later is not going to work. When you deal with large-scale computation, you need to be able to do very fast local finality within a single data center, ideally sub-millisecond; if you're inside a data center you can deal with things in the hundreds of microseconds and so on. That's the kind of operation you want. How are we going to get blockchains there? The tree structure you see in the top right is how. We'll use consensus scaling techniques to split off consensus groups that can operate in small regions, so we can deal with the speed-of-light problem; the reason blockchains are slow is the speed of light. There's a whole other talk you can go watch about all of this. If it sounds really interesting, go talk to Alfonso, or go watch the talks that have been put out about it. Super exciting stuff. One important thing to remember here is that one really simple, cheap approach to compute over data is to hang entire computational rings off the bottom of that tree. At the very bottom of the tree, you can spin up a very small consensus group with, I don't know, five nodes, ten nodes, and then run arbitrary jobs there. That will probably get unpacked over today or tomorrow.

One other thing to keep in mind is that the blockchain world is experimenting with new kinds of media and new kinds of programmable media. All the NFT stuff is just scratching the surface. We're already seeing 3D environments and 3D rooms and video, and now imagine dynamic, experiential video and art and so on. All of that is going to require lots of computational tools hooked into the blockchain world. That means not only real-time work but also post-processing work. Imagine all the traditional creative processing pipelines for dealing with media assets and so on, and being able to do all of that computation entirely in a decentralized context. That's where all of this is headed, and we're pretty far away from being able to do it. And all of these pieces are going to get put together into the kind of metaverse structure. One other thing for people to remember: there's a lot of flashing for the different proof-of-stake blockchains; it's like the blockchain party, untz, untz, untz. All of these blockchains are going to have a bunch of different programs and so on, and they're all going to want to do large-scale data processing. All of them already want to do large-scale data storage, a ton of them are using Filecoin, and a bunch of bridges are going to be built out. Now imagine being able to do all of the computational processing surrounding all of those.

Cool, so that's the lightning-fast background. Hopefully I didn't leave anyone in the dust, but there are a bunch of talks that describe all of this.
All I wanted to do was page that into your mind so that we can get to specifics. This is one of the latest drawings we've been using to represent roughly how Filecoin works: you have storage clients using these on-ramps to store a bunch of data with storage providers, then all this data gets indexed by the indexer tools, and then we have retrieval providers pulling the data out and serving it to retrieval clients. Of course, some storage clients go directly to storage providers and back out, but those are mostly large-data, large clients.

So the whole goal is being able to run jobs there. Why there and not anywhere else? Because data is really heavy; data has gravity. Once you've moved petabytes somewhere, you really don't want to move them again. So you want to move the computation right to where the data is, run it there, take the outputs, store them in the same network, and then allow other parties to view that data from there. Now, in some cases you are going to have to move data. You can't have everything in one place, and sometimes you'll need to run a job across a bunch of data in different places. So then you run pieces of the computation where each piece of data is, take the intermediate outputs, ship those intermediate outputs elsewhere, and then combine them. Think of the traditional MapReduce structure, where you run a whole bunch of the initial programs wherever the data is and then move the intermediate outputs around, because they're usually a lot smaller; not always, but usually. Luckily we are sitting on 20 to 30 years of excellent distributed systems programming tooling, so there's tons of stuff out there on how to do this really well, how to split computation jobs and so on. The goal is to leverage as much as possible from that world while providing the necessary hooks to make it work in a decentralized context.

Cool, so let's talk about scaling the computation. Yeah, okay, great, I wasn't sure if I had this slide over here. Just for a benchmark, the internet today deals with, is that too small, can you see that? Maybe there's a slide right near you. Just spend a moment looking at that scale. And in addition to all of those operations, there are tons of backend pipeline jobs that have to run over all of that data to make it useful, right? You upload a picture to Instagram, you upload a video to YouTube, and tons of computational jobs run on top of that to make it usable and consumable by other people. This is why there aren't any consumer web apps or consumer apps in blockchains today; they're all centralized. It's because all of that work can't be done today in a fully decentralized context, and we need to get to the point where the computational networks can enable that. So if you're wondering about use cases, just refer back to this one diagram, look at any one of these applications, and ask what needs to happen to, say, an image: it needs to be rescaled, you have to do machine-learning vision so you can tag your friends, you have to push it to people's feeds, you have to collate all the feeds, you have to index all of that stuff, you have to draw all these relationships and pieces. All of those things are the data pipelines we want to be able to run in a decentralized context.
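To make the "move the compute to the data" shape from a couple of paragraphs back concrete, here's a minimal MapReduce-flavored sketch. All of the types are made up for illustration: the map step runs where each shard already lives, only the (usually much smaller) intermediate outputs move, and a reduce step combines them.

```go
package cod

// Shard identifies a piece of data that already lives with some provider.
type Shard struct {
	Provider string // which storage provider holds the bytes
	DataCID  string // content address of the shard
}

// RunAt is a stand-in for "execute this map function on the node that already
// holds the data", e.g. a job sent to that provider's compute endpoint.
type RunAt func(provider, dataCID string) (intermediate []byte, err error)

// MapReduce runs the map step next to each shard and only moves the
// intermediate outputs, which then get combined by reduce.
func MapReduce(shards []Shard, runAt RunAt, reduce func([][]byte) []byte) ([]byte, error) {
	intermediates := make([][]byte, 0, len(shards))
	for _, s := range shards {
		out, err := runAt(s.Provider, s.DataCID) // compute where the data is
		if err != nil {
			return nil, err
		}
		intermediates = append(intermediates, out) // ship only the small outputs
	}
	return reduce(intermediates), nil
}
```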
Cool, so let's dive into computation models in a moment. Before I get there, just to beat that horse: today there's no single blockchain protocol that can handle internet transactional volumes, and none of them can handle mission-critical deployments. A mission-critical deployment is something that can keep running when part of the internet starts disappearing, or when your community or network gets disconnected from the rest of the world. Imagine something bad happens and suddenly everything falls apart and nothing works: cell phones don't work, you can't send messages to anybody, and so on. Obviously we don't want to build systems like that, but unfortunately that's what blockchains are today. We need to get out of that spot, and that's where all the consensus scaling work is going: enabling entire consensus groups to work through network partitions and without having to deal with the speed-of-light problem. So as you're designing computation networks, don't limit your thinking to the current blockchains of today, which only deal with small transactional throughputs or where gas is extremely expensive. Think of the schedulers as actually being able to do a lot more, and think of there being not just one single scheduler but one per consensus group. If you have, say, one scheduler inside one data center, you can actually run a bunch of contracts, a bunch of transactions, in a blockchain context, in a decentralized context, with cryptocurrency, just within a single data center. That's where all of this is headed. Now of course if you want to ship things from here to there, you do have to assume one large blockchain and so on.

Well, actually, I'll just cover this quickly. I think everybody here is fairly familiar with the cloud computing world. Pretty much every application you use today relies on cloud systems, either private or public, and you have these massive data centers dedicated to all of this, and whatever program you're running has tons of other computation running around it to schedule it, coordinate it, move around the data and so on. You have these enormous infrastructure projects to enable running your random little counter program or your little image tagger. So whenever you're fishing for use cases, just refer back to whatever main application you would like to run and start decomposing it into the pipelines you think might have to run, or look it up online.

I want to talk a little bit about this. At the end of the day there's an enormous amount of code today that is already structured to run on traditional cloud systems, which means VMs or containers, and it would be really cool if blockchains could run all of that. Unfortunately, it's pretty hard, because those VMs and those containers are not deterministic, so you can't guarantee that running the same thing twice gives the same result, and that makes verification very hard. I think it would be great to get there, but it's actually quite difficult to do well. I would recommend going after much simpler versions first, going after the things that are easy to verify, and then later doubling back and figuring out how to do this as well.
I do think that blockchains will have to solve this problem well, or the shift away from cloud systems won't happen, because there's just tons of stuff that people have built over the last 30 to 50 years that needs to run. Yeah, 50 years really; there's random Fortran stuff running in AWS, you can bet. You can totally go back and look at most of the cloud computing papers and think about how those architectures might apply today. You can look at Borg and Kubernetes and so on, squint at them, and say, oh yeah, the Kubernetes master thing, that could totally be a scheduler on the blockchain; these kubelet things could be specific jobs that run anywhere, interact with that master, and figure out the job specs and so on. The problem, though, is that you need to make this verifiable, or ideally incentive-compatible, so you can get correctness out of the code. But in general, if you try to fit the same kind of structure, things will be way, way easier. And that's just one program of one system of many. You can think about Lambda or Cloudflare Workers and so on; all of that is super interesting and super ripe for bringing into the blockchain context. Especially Cloudflare Workers, where you can run computing jobs very close to the user; that's also super interesting. That's a somewhat different problem from compute over data, which is more about running computing pipelines over large data. Running computing pipelines is probably easier from a compute-latency perspective; running jobs close to the user is different, because you want to minimize latency to the end user.

A couple of things here: there's a whole set of zero-knowledge-proof-oriented or verifiable-computation-oriented systems, things like Metaproof and Lurk (I think Metaproof got renamed to Lurk), and there are a bunch of other languages that enable you to write proofs about the computation. Lean into that whenever the computation is not too expensive. And also note that fully homomorphic encryption is coming as well, so that's a whole other possibility.

So that brings me to the triangle. I thought a lot about this over the last, I don't know, five to eight years, and it seemed to me back then that this was going to be an extremely difficult problem to solve and that pretty much nobody was going to figure it out, because everyone tries to solve everything, and the answer is you can't. Why can't you? There are three key properties that you want in any kind of decentralized computation system: performance, verifiability, and privacy. What does that mean? Performance means you want the computation to be cheap, you want it to be fast, you want it to be low latency to the user. That means low computational overhead: you don't want a lot of overhead for running the thing, right? If you blow the computation up by 10x, that's really expensive. Also, if it takes forever to run, or an unbounded amount of time, that doesn't work either. And ideally it shouldn't require massive amounts of computation or specialized hardware and so on. So if you want things to be high performance, you want to be towards that one side of the triangle. Then you have the verifiability requirements.
In the traditional private cloud, you can just trust the brand, right? There's an economic approach to verifiability. If you find out that AWS is secretly changing your computation, you're going to freak out about it, you're going to write a big blog post, people are going to look into it, they're going to see that you're right, and then Amazon's stock is going to fall and a whole set of problems are going to happen for Amazon. So you get this very, very successful economic verifiability over the computation; you don't have to worry that Amazon is going to change your stuff. Now, you can use similar structures in the decentralized context and replace brand with stake. You can require stake in the picture, you can run computations optimistically and then check them later. Or you can bring the cryptography to bear and instead use zero-knowledge proofs or MPC and so on to actually get cryptographic correctness proofs, and that's a much better place to be. However, those are less performant: those proving systems introduce a bunch of additional computation that needs to run, which decreases performance, right? So there's a straight trade-off between verifiability and performance. The more verifiability you want, the lower your performance is going to be.

And this gets worse with privacy, which is required by most applications. Most real-world applications deal with user data in some way, and you want hard guarantees about privacy there. You want the data encrypted at rest, and ideally you want the computers not to be able to look at the data at all. That's where this becomes extremely difficult in decentralized computation. In the traditional world you can lean on brand again, with that same economic structure, and sure, some data does leak, some data is lost, but not that much. Unfortunately, stake is way harder to make work here, because it's extremely difficult to detect that something has actually leaked. So this is where you ideally do want to introduce cryptographic primitives to get certainty. But the cryptographic primitives for this are orders of magnitude more expensive; this is where you end up with fully homomorphic encryption or MPC or things like that, which are just dramatically more expensive than the traditional stuff. Again, this is why right now we don't have any good setups.

So ideally you want all three of these to the fullest extent: you want everything to be super high performance, you want it verifiable, and you want it private. Unfortunately, these trade-offs drop out of the laws of physics. If you want to introduce any kind of additional computation to get some verifiability, you will necessarily go slower. And if you're not controlling the physical atoms and energy going into your computation, if that's controlled by somebody else, you will certainly require some additional computation to make sure they're not cheating in some way or leaking your data. So necessarily there's this unfortunate trade-off where it is impossible to get perfect performance. You can get pretty close, and over time our cryptographic methods have been improving a lot; today tons of methods are orders of magnitude faster than when they were first proposed. So all of these things will improve.
But you can sort of create a map of the different techniques in different spots, and this is roughly where they might land. It's not exact, but the whole point is to show that each specific technique and specific approach lands you somewhere different. What's worse is that each of these techniques means wildly different performance trade-offs; some of these are many orders of magnitude slower, two orders of magnitude slower, five orders of magnitude slower and so on. They also have different verifiability protocols. For example, if you run things optimistically and then want to check them, you run a program first, you store the outputs, and then later, at some point in time, you randomly choose to check some of it and run some of that computation again. That means it's a high-latency operation, right? You run the program once but you can't commit to it yet; you have to wait until later, the check happens, and only then can you finalize it. Ideally you would be able to write a proof and immediately get verification, but if you do that, you end up with much higher performance requirements or potentially new hardware; people are trying to scale the zero-knowledge-proof stuff by producing specific hardware tuned for it. So again, the point is you end up with very different protocols. Each of these techniques, each of these technologies, yields different protocol schemes, different security properties, different computational complexity, different network use, different hardware, and so on. Ultimately this also means different use cases, which means different users, and probably different brands. So five years ago when I was thinking about this: oh yeah, this means there's not going to be a single network that can claim to be the decentralized computing network for crypto, and every network that tries to do that is going to fail, or not fail exactly, but it's not really going to be able to meet that claim, because of all the complexity between these approaches.

So there are two basic options, I think. One option is to do different networks, which is way cheaper, way easier, and compatible with how the crypto world works. The other approach is to use programming languages so that, under the hood, as you write a program, pieces of it get taken and run on different systems depending on what each part of the program requires. The machine learning world has gone in that direction: today you can write a machine learning program in Python or something like that, describe the kinds of operations you want to run, and the compiler tooling is so good that it'll grab chunks of the program, expand them out into some other large-scale computation meant to run on some particular system like TensorFlow or whatever, and then piece it all back together when it hands it back to you. GPUs sort of work this way too. So we could theoretically go in that direction and just plug all of these special-purpose systems in there. But I think that is way harder, way slower, and the team required to do it is so big that I just wouldn't bet on that approach. I would bet on the approach of starting with a bunch of networks, building all of these things out, and then later unifying them with programming languages.
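One way to picture that map is as data. The placements below are a coarse, purely illustrative paraphrase of the trade-offs described above, not measurements, and the names and levels are mine, not from any benchmark.

```go
package triangle

// Level is a deliberately coarse rating: this is a sketch of a map, not data.
type Level int

const (
	Low Level = iota
	Medium
	High
)

// Technique places one approach on the three axes of the triangle.
type Technique struct {
	Name          string
	Performance   Level // how close to raw execution cost
	Verifiability Level // how strong the correctness guarantee is
	Privacy       Level // how well workers are kept from seeing the data
}

// Examples: trusted/brand and optimistic approaches stay fast but rely on
// economics; proofs buy verifiability at a compute cost; FHE/MPC buy privacy
// at a much larger cost. Illustrative placements only.
var Examples = []Technique{
	{Name: "trusted provider (brand)", Performance: High, Verifiability: Low, Privacy: Low},
	{Name: "optimistic + stake + spot checks", Performance: High, Verifiability: Medium, Privacy: Low},
	{Name: "zero-knowledge proofs", Performance: Medium, Verifiability: High, Privacy: Low},
	{Name: "MPC", Performance: Low, Verifiability: Medium, Privacy: High},
	{Name: "fully homomorphic encryption", Performance: Low, Verifiability: Low, Privacy: High},
}
```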
So these are not mutually exclusive; I just think one of them is way more likely to work.

Cool, so now let's talk about common parts. Architecture-wise, most of these systems, no matter what techniques they use, will probably follow computing pipelines that are pretty similar. You have a bunch of users submitting programs. Those programs enter a scheduler, the scheduler picks a set of workers, or the workers register with the scheduler and pick things up off a queue. The workers do the work and submit the results, and eventually a user gets the results back. You might have some auditing in different parts of the picture; maybe the workers submit proofs, maybe the auditing happens afterwards as a separate step, but for the most part the computing pipeline is pretty similar. What this means is that you can write composable tooling for all of this that will be useful across all these networks. And we probably need some kind of spec here for decentralized versions of these, because a lot of specs for computing pipelines exist in the centralized world, but the decentralized world needs to introduce auditing or verifiability in a bunch of key places. So there's an opening for writing a pretty good spec for how to do these things.

Component two: programming languages. Some of these technologies, like zero-knowledge proofs and FHE and so on, will potentially require a DSL, but ideally users want to use any language. So how are we going to enable that? VMs. VMs all the way down: take whatever language the user wants, compile it down to whatever DSL you need, and run that. Thankfully we don't have to do much here, because this is how the world already works. And because all these systems use virtualization and encapsulate all these things, we should pick a really good target. I think wasm is a pretty safe one; the world has for the most part moved to wasm. LLVM IR is also a good one, and the good thing is you can move from one to the other. Wasm is better because it's stable, with a deterministic compilation output; you can do that with LLVM IR too, but it's not as good. We need the IPLD subset of the FVM to be defined. This is something that Raul and Steven and others could work on, but they're really busy, so maybe ask them how to go do this.

We also need to define the program formats. That means we need to think about what it looks like to create a function, feed an input into that function, take the outputs, and then store those outputs: all of those pieces. How we define the function, how we define the inputs, how we define the outputs. Ideally we write this in a pretty general way that's independent of a specific technique or technology and is parametrized, so that we can come up with one program spec that can run arbitrary things. We want to lean on the fact that compilation steps can be introduced anywhere. The world is really used to compiler tooling and different kinds of tooling mangling programs. So if you've got some hard problem to solve, see if you can figure it out with some compiler tooling that you introduce somewhere, so the user doesn't have to worry about it. Really lean on building those tools; but you can't skip solving the problem, not write that tool, and then expect the problem to be solved. You either get the user to solve it or build a good compiler tool.
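As a sketch of what such a program format could look like, here's one hypothetical layout; this is not an existing spec, just an illustration of keeping the function, inputs, outputs, and execution engine separate, content-addressed, and parametrized so any network can consume the same job.

```go
package spec

// CID stands in for an IPLD content address.
type CID = string

// Program describes what to run: a wasm module plus the entrypoint to call.
type Program struct {
	WasmModule CID    `json:"wasmModule"` // content address of the compiled wasm
	Entrypoint string `json:"entrypoint"` // exported function name
}

// Engine parametrizes how a job is executed and verified without the rest of
// the spec caring, e.g. "wasm-optimistic", "wasm-zk", "wasm-fhe".
type Engine struct {
	Kind   string            `json:"kind"`
	Params map[string]string `json:"params,omitempty"`
}

// Job ties a program to concrete inputs. Outputs are content-addressed too,
// so jobs can be chained by pointing one job's Inputs at another job's outputs.
type Job struct {
	Program Program `json:"program"`
	Inputs  []CID   `json:"inputs"`
	Engine  Engine  `json:"engine"`
}

// Result is what a worker publishes after running a Job.
type Result struct {
	Job     CID   `json:"job"`             // content address of the Job it ran
	Outputs []CID `json:"outputs"`         // content addresses of produced data
	Proof   CID   `json:"proof,omitempty"` // optional fraud/zk proof, depends on Engine
}
```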
Schedulers. I think the schedulers are probably pretty general and can probably be the same across all these systems; I don't think you need system-specific schedulers. That means we should be able to create the same general schedulers that could be used by all these different networks. There are basically two tiers of scheduler, which is something David and I were talking about a few months ago: there's a system-wide, network-oriented scheduler that is meant to run entire pipelines, and then a single pipeline might have its own local scheduler for your program that decides how to run, how many steps, when, and so on, and that's where you put in all the knowledge about whether to keep running or abort or something like that. Either way, there are these two types of schedulers, and there might be more, but I think you can basically abstract out all of the technique-specific stuff and think of the schedulers as pretty general. And that means we could be building these general scheduler tools, right? Either a spec or an implementation that can then run on arbitrary smart contract platforms.

Now, monitoring. This is a really key part of these systems. As soon as you have thousands or tens of thousands of jobs running, you need to know what the hell is going on, because this gets incredibly difficult to debug. You have tons of distributed systems in play, you have random users submitting random stuff doing who knows what, and you need to be able to know exactly what's going on across the whole system and on specific nodes, and whether specific jobs are getting pulled off, and so on. So you need a ton of tooling that will clearly visualize and clearly error-check what's going on across the system. Good news: the whole world has been doing this for 20 years, so a lot of it can be adapted. But it will probably need to be different, because here we have to deal with looking at blockchains and looking at specific decentralized nodes that may or may not lie to you, so you end up in a different spot. We'll probably have to write new monitoring tooling, but again, I think this can be really general; it's not network-specific.

Then there are all the inputs and outputs. We can put this as part of the IPLD program spec: figure out a common wrapper data format or something to identify all these things and make all of these pieces chainable. This is, I think, a very significant step towards that later unification through programming languages, because if you want to run something in MPC and then FHE, or in some zero-knowledge proof system or whatever, and all of those things speak the same input and output data formats, then it all gets easier, right? So writing general schedulers and writing the IPLD program spec could be the highest-leverage things anybody could do to get to that unification of computing networks later.

The UX is really important. Recognize that these specific networks will have a subset of targets. Some of them, for example if they require specialized hardware, are not going to be able to run anywhere. Some other networks might be able to leverage things like the GPUs in mobile devices and target that. And so that means specific work on the UX might have to emerge.
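Here's a minimal sketch of the general scheduler plus worker loop described a bit earlier. The interface is illustrative, not from any existing library; the point is that nothing in it depends on which verification technique the network uses.

```go
package sched

import "context"

type JobID string

// Job is a unit of work handed out by the network-wide scheduler.
type Job struct {
	ID      JobID
	SpecCID string // content address of the job spec
}

// Scheduler is the network-wide tier: it accepts submissions and hands out
// jobs to workers, independent of the network's verification technique.
type Scheduler interface {
	Submit(ctx context.Context, specCID string) (JobID, error)
	Next(ctx context.Context, worker string) (Job, error)         // a worker pulls work
	Report(ctx context.Context, id JobID, resultCID string) error // a worker reports results
}

// Worker is the per-node loop: pull a job, run it locally (possibly with its
// own pipeline-local scheduler), and report the result for later auditing.
func Worker(ctx context.Context, name string, s Scheduler, run func(specCID string) (string, error)) error {
	for {
		job, err := s.Next(ctx, name)
		if err != nil {
			return err
		}
		resultCID, err := run(job.SpecCID)
		if err != nil {
			continue // a real worker would report the failure rather than skip it
		}
		if err := s.Report(ctx, job.ID, resultCID); err != nil {
			return err
		}
	}
}
```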
But a bunch of these networks will have subsets that are very similar. So think of the general scheduler, think of the general monitoring tools: some slice of that is going to be the same. And then think of writing worker and node UX that's meant to be reskinned by whoever creates a specific network.

The last thing is that all of this stuff is not going to work unless there is excellent developer UX. At the end of the day, people need to use the system, people need to write complex programs with it, and they need extremely good tools: super intuitive ways to write the job specs, ways to run the things, ways to simulate the thing running so they can test their own program. They need excellent debugging tools and excellent monitoring tools. They want accounting that is reasonable and works with their organization, which might be a corporation, or might be a DAO, or might be an individual. They need access control on the results, which means capability crypto that enables encryption of the results. And then you need excellent docs, right? Reference, tutorials, and so on. Again, the world has an enormous amount of really good examples to learn from here, but this is an enormous amount of work. The developer UX, to get this good enough to be adopted by a ton of people, is tens to hundreds of people's worth of work. So before you have excellent developer UX, don't expect a lot of adoption; just expect maybe blockchain-specific adoption.

Cool. So I really want to encourage this entire summit to think about splitting out the common system tooling from the tooling that's specific to a particular decentralized computing network, right? Take all the stuff that can be common and create libraries; I think of it like building the libp2p for compute over data. You want to create a bunch of little pluggable pieces that can then be put together so that you can write a specific decentralized computing network, whether it's an optimistic one or a ZKP one or an FHE one or whatever. And by the way, the incentive structures of these have to be related to the verification process and the proving process and so on, so they look different everywhere. If you want to design the incentive structures for an FHE network or a ZKP network, they look very different from an optimistic network; that's another reason why it's way easier to just build different networks. But the point is: really think about Bacalhau being a project where we build one of these specific networks in order to build up this common system tooling, so that we can then enable a lot of groups to build a bunch of specific networks, right? You have to build a system that actually works, to test it out and really make sure you have a real thing that will really work, and that ideally gets to really high performance at very large scale, hundreds of thousands of nodes and millions or tens of millions of jobs, and use that to drive improvement of all of this common system tooling so that arbitrary other decentralized computing networks can be built. Cool.
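And here's a small sketch of that pluggable-pieces idea: the common tooling stays fixed while each network swaps in its own verification and incentive pieces. Again, these interfaces and names are illustrative only, not an existing library.

```go
package plug

import "context"

// Claim is what a worker asserts: the job it ran and the outputs it produced.
type Claim struct {
	JobCID    string
	OutputCID string
	ProofCID  string // empty for purely optimistic networks
	Worker    string
}

// Verifier is the piece each specific network swaps in: an optimistic network
// might re-run a random sample of claims later, a ZK network checks ProofCID,
// an FHE network checks whatever its scheme requires.
type Verifier interface {
	Verify(ctx context.Context, c Claim) (accepted bool, err error)
}

// Incentives differ per network too (stake, rewards, slashing), so they hang
// off the same plug point instead of living in the common tooling.
type Incentives interface {
	Reward(ctx context.Context, c Claim) error
	Slash(ctx context.Context, c Claim) error
}
```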
So the last thing I wanted to touch on was IPLD programs, which is this piece of the FVM diagram where a spec needs to be written that really figures out what it means to run a wasm program that's hashed and addressed through IPLD, and so on. That's what the thing we've called the IPLD VM would be; maybe it's a subset of the FVM, I don't know, but I think we basically need it, and it's going to enable arbitrary computation. So this is not just smart contracts, right? It means potentially very large programs, and being able to do program execution and function invocation and so on. I think this would be a great topic for a conference session. All right, cool. I'm going to stop there. Thanks so much.