Hello, everyone. I'm excited to dig into the deep research and development work at the heart of PL's computing breakthroughs. Innovation in the PL network spans the research and development pipeline and helps grow many early research projects into full-fledged production networks. A number of projects in the PL network are in early-to-mid research stages, where ideas are being developed and de-risked, building into open testnets and nascent networks. Projects graduate from research into active development, launching alpha releases and attracting early adopters. The Protocol Labs network also has many products in the active productionization phase, gaining their first 100,000 users and iterating on their development success. And finally, a number of PL network projects have reached active production, with millions of users and scalable business models. As you can see, there are a ton of projects actively crossing the research-and-development chasm, thanks to the PL network.

We're going to deep-dive into a number of these R&D breakthroughs in a moment. But first, I want to give a quick overview of a few other breakthroughs actively crossing the chasm. Lurk is a Turing-complete programming language for zk-SNARKs. This Lisp-like programming language allows verifiable computation over private data, or in zero knowledge, so you can unlock distributed computation without sacrificing privacy. Zama brings fully homomorphic encryption to Web 2 and Web 3 networks, enabling compute over data to generate insights and power personalized applications without ever decrypting private data. Bacalhau is a decentralized compute-over-data network where each node participates in executing computing jobs submitted to the cluster. Bacalhau enables users to run arbitrary Docker containers and Wasm images against data stored in IPFS and Filecoin. Verifiable delay functions (VDFs) are cryptographic primitives that allow efficient and trustworthy time management and verification.
The VDF Alliance is now sending its first VDF ASIC into production, shipping later this year. Now, let's dive deeper into these research breakthroughs, starting with vector commitments.

Hello, everyone. This is Nicola, and I'm the lab lead for CryptoNet. CryptoNet is an open, distributed research lab working on applied cryptography and protocol design to improve all the crypto networks you've just seen. We go end-to-end in the research pipeline: we start from fundamental research, deploy protocol improvements, and launch products. We have over 60 collaborations and 15 research grants, and most of our projects land as Filecoin Improvement Proposals or research papers. Today, I want to talk about a problem that is very early in the research pipeline: vector commitments. A problem like this one requires a large amount of external collaboration and several years to solve. Vector commitments are like a better form of Merkle trees. They could get us better proving time, better proof sizes, aggregatable proofs, and, most importantly, GPU cost reductions. Our main goal was to massively reduce the hardware cost of Filecoin proofs. But a problem like this has massive ramifications. It could get us better verifiable computation over large data sets, which could improve projects like Compute over Data and Lurk, and even open up new possibilities like verifiable databases. And most importantly, we could use it to export the Filecoin chain into other chains, which would allow us to export storage across Web3. A problem like this one starts with writing down a list of open problems and distributing it as widely as possible. Then we selected over 20 people to participate in our reading club, and we ran several events with over 120 participants. One in particular was Vector Commitment Day, the first of its kind in the industry.
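To make the "better form of Merkle trees" framing concrete, here is a minimal sketch of the vector-commitment interface (commit, open at a position, verify) implemented with a plain Merkle tree, the baseline construction these new schemes aim to improve on. All names are illustrative, not from any Filecoin codebase; a real scheme would target shorter and aggregatable proofs rather than the log-sized Merkle paths shown here.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector):
    """Commit to a vector by building a Merkle tree; returns (root, tree levels)."""
    level = [_h(v) for v in vector]
    levels = []
    while True:
        if len(level) > 1 and len(level) % 2:
            level = level + [level[-1]]          # pad odd levels by duplication
        levels.append(level)
        if len(level) == 1:
            break
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return levels[-1][0], levels

def open_at(levels, i):
    """Opening proof for position i: the sibling hash at every level."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[i ^ 1])               # sibling index flips the low bit
        i //= 2
    return proof

def verify(root, i, value, proof):
    """Recompute the root from the claimed value and the sibling path."""
    node = _h(value)
    for sib in proof:
        node = _h(node + sib) if i % 2 == 0 else _h(sib + node)
        i //= 2
    return node == root
```

For a vector of n elements this gives O(log n) proof size and verification; the schemes the talk describes chase constant-size, aggregatable proofs for the same interface.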
Finally, we awarded research grants, which were very successful, and we worked with several collaborators, among them the Ethereum Foundation. We made several attempts before arriving at the final solutions: we designed five different vector commitments that, unfortunately, were not practical, and we even found an impossibility result that told us there were paths we could not take. Finally, we worked on Colk and Mathets, which right now are just paper names, but the future is bright. It's the first time we feel comfortable that these vector commitments could be deployed to replace Merkle trees, and there is a full plan to get them integrated into Filecoin by the end of next year. That's it. Thanks a lot.

Hello, everyone. I'm going to talk about ConsensusLab and its flagship project, Interplanetary Consensus. ConsensusLab is one of the newest groups in PL research, formed about 15 months ago. We research any problem related to consistency, availability, and the trade-offs between them that arises in distributed computing, applied to blockchains and the decentralized world. Our focus is on consensus, for example total-order broadcast. Why is this important? Because consensus is the bottleneck of decentralized computing and blockchains. Think about Bitcoin: it has limited throughput, say seven transactions per second. Ethereum is radically better, but its throughput still depends on its consensus protocol. Whatever ideal consensus protocol you come up with, it's going to become a bottleneck at some point if it runs over every validator and every validator executes every transaction. That's easy to see. And essentially, if we want to bring the whole of Web 2 to Web 3, we need something else: we need horizontal scaling.
And we need to improve these consensus protocols while essentially retaining the decentralization and security of the robust protocols that are slow. So this is what we are researching. These are some Web 3 requirements: we are aiming at billions, or even trillions, of transactions per second; we need to be careful about latency, so there are a bunch of trade-offs here; and we eventually want to be secure against nation-state attackers. We have many conflicting requirements that we are trying to research. One of them is horizontal scalability with demand-based throughput, and this is what we address in our flagship project, Interplanetary Consensus (IPC). These are roughly the three areas we focus on: horizontal scaling, efficient subnet consensus, and parallel execution of smart contracts. And we are focusing on the FVM, for clear reasons. So how does IPC work, in a nutshell? Next year, you will have the ability to make a smart contract invocation on the Filecoin mainnet and launch a separate blockchain that caters better to your use case. For example, I have Saturn depicted here for retrieval markets, but it could be anything else. And if you want even faster performance, at a data-center scale, you can spawn a subnet of a subnet. Each of these subnets checkpoints to the higher-level blockchain and uses its security for critical data, be it state outputs or inputs; we are currently working on state outputs. And you can do this in parallel; I have a metaverse gaming example here. You will also be able to tune the consensus protocols to suit, for example, the different latency requirements of these different networks. To power this, we have the Mir BFT consensus framework, which implements the consensus protocols on these subnets. And essentially, we are going to implement IPC very leanly, with two FVM built-in actors.
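The checkpointing relationship just described, where a subnet runs its own consensus but periodically anchors its state to the parent chain for security, can be sketched as follows. This is a simplified illustration under my own assumptions, not the actual IPC actor design; all class and field names are hypothetical.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ParentChain:
    """Stands in for a built-in actor on the parent chain that records
    subnet checkpoints (hypothetical interface)."""
    def __init__(self):
        self.checkpoints = {}                     # subnet_id -> list of checkpoints

    def submit_checkpoint(self, subnet_id, epoch, state_root, prev):
        chain = self.checkpoints.setdefault(subnet_id, [])
        # each checkpoint must link to the previously anchored one
        expected_prev = chain[-1]["id"] if chain else None
        if prev != expected_prev:
            raise ValueError("checkpoint does not extend the anchored chain")
        cp = {"epoch": epoch, "state_root": state_root, "prev": prev}
        cp["id"] = h(repr(sorted(cp.items())).encode())
        chain.append(cp)
        return cp["id"]

class Subnet:
    """A child chain that executes its own transactions and anchors
    a checkpoint to the parent every k epochs."""
    def __init__(self, subnet_id, parent, k=3):
        self.subnet_id, self.parent, self.k = subnet_id, parent, k
        self.epoch, self.state, self.last_cp = 0, b"genesis", None

    def apply(self, tx: bytes):
        self.state = h(self.state + tx).encode()  # toy state transition
        self.epoch += 1
        if self.epoch % self.k == 0:              # time to anchor to the parent
            self.last_cp = self.parent.submit_checkpoint(
                self.subnet_id, self.epoch, self.state.decode(), self.last_cp)
```

The key property illustrated is that the parent only stores a small, hash-linked chain of state roots, so many subnets can run fast consensus in parallel while inheriting the parent's security for their anchored data.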
So this is roughly, in one slide, how IPC works. This is our roadmap. We have Spacenet, our main testnet, launching at the end of this year. The next year is reserved for Interplanetary Consensus on Spacenet, with the goal of reaching Filecoin mainnet in Q3. If you want to learn more about all of this, come to the ConsensusLab Summit on Wednesday at the Time Out Market, and we'll go into the details of all these things. Thank you very much.

All right, hello, everyone. My name is Raúl, and I'll be talking about the Filecoin Virtual Machine. By this stage, you will have heard about the FVM, but for those who haven't, just to recap: the FVM project delivers on-chain programmability to the Filecoin network. This is a big deal, because it enables developers, for the first time, to customize what the Filecoin network can do for its users beyond storage and retrieval. For the first time, developers get to build not just with Filecoin, but on Filecoin as a platform. And these are just some of the apps and use cases that I'm particularly excited about: DataDAOs, liquid staking, under-collateralized lending, decentralized compute, and a lot more is possible with the FVM. On the technical front, when we conceived the FVM, we envisioned a system that could host multiple runtimes and serve as a seamless conductor between them. We drew inspiration from hypervisors, the actor model, and the Linux kernel. So this is what the FVM looks like today. It is based on WebAssembly. It can power multiple runtimes, like the EVM, Secure ECMAScript, and more. Each actor runs in isolation and can escape its sandbox through syscalls, inspired by the Linux model. All data managed and exchanged is IPLD data. It supports foreign addressing and foreign signature schemes. It also performs gas accounting and execution halting by instrumenting the Wasm bytecode.
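The last point, charging gas and halting execution by instrumenting the bytecode, can be illustrated with a toy stack machine that deducts gas before every instruction. This is a sketch of the general technique only; the opcode names and gas schedule are invented for illustration and bear no relation to the FVM's actual costs.

```python
class OutOfGas(Exception):
    pass

# hypothetical per-opcode gas schedule
GAS = {"PUSH": 2, "ADD": 3, "MUL": 5}

def execute(program, gas_limit):
    """Run a toy stack machine, charging gas before each instruction,
    which is the same idea the FVM applies by instrumenting Wasm bytecode."""
    gas, stack = gas_limit, []
    for op, *args in program:
        gas -= GAS[op]
        if gas < 0:                        # deterministic halt when gas runs out
            raise OutOfGas(f"ran out of gas at {op}")
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1], gas_limit - gas      # (result, gas used)
```

Because the check happens inside the instruction stream itself, execution halts at a well-defined point regardless of what the guest code does, which is exactly why instrumenting the bytecode is attractive for untrusted on-chain programs.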
We expect many of these ideas will actually make it into IPVM, which you'll hear about in a few minutes. Now, shipping the FVM itself is a hard engineering endeavor, as it transforms the entire execution layer of the Filecoin network. This is why we're delivering the FVM in stages. Earlier this year, in July, the Skyr upgrade activated on Filecoin mainnet. Yes, you heard that right: the FVM is already powering mainnet as of today. But it is not programmable yet, and that is what we're working on now, in the M2.1 milestone. The first runtime we're shipping is the Filecoin EVM, which we also call the FEVM. This requires deep protocol developments, like a whole new address class, account abstraction, and on-chain events. We've implemented the Ethereum JSON-RPC API in Lotus, which makes all of the Ethereum tooling compatible with Filecoin immediately. This allows Filecoin to meet existing Web3 developers where they are today. You can reuse all your existing knowledge of languages like Solidity and Yul, libraries like web3.js and ethers, and awesome tools like Truffle, Hardhat, Foundry, and Remix. The Wallaby testnet is our bleeding-edge testnet, and it is already live, so you can start building on it today. We will be running several hackathons in November, and the FEVM itself should reach mainnet by February next year. At that point, we will move our focus to Wasm actors and further protocol improvements. And guess what? Just before we arrived in Lisbon, we shipped a major release to the Wallaby testnet and conducted our first-ever MetaMask transaction. So if you're excited to build on the FEVM, you should join the FEVM Foundry by following this link. More than 100 teams and builders have registered, and we're excited to have you too. And these are just some of the things that early builders have already built: SDKs in Go, AssemblyScript, and Rust, as well as playgrounds to get started quickly.
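Because Lotus speaks the standard Ethereum JSON-RPC protocol, any existing EVM client can talk to a Filecoin node without modification. As a minimal illustration, here is the request body such tools exchange, built with only the standard library; the endpoint URL is a placeholder, not a real node.

```python
import json

# hypothetical endpoint; substitute your own Wallaby/FEVM node's RPC URL
RPC_URL = "https://example-wallaby-node/rpc/v1"

def rpc_request(method, params=None, req_id=1):
    """Build a standard Ethereum JSON-RPC 2.0 request body, the same shape
    that ethers, web3.js, Hardhat, etc. send to any EVM-compatible node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params or [],
    })

# e.g. querying an account balance at the latest block
body = rpc_request("eth_getBalance",
                   ["0x52908400098527886E0F7030069857D2E4169EE7", "latest"])
# POST `body` to RPC_URL with any HTTP client to execute the call
```

The compatibility claim in the talk boils down to this: since the wire format is identical, pointing an Ethereum tool at a Lotus RPC URL is all the integration that's needed.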
And here is what early builders are building today, along with some open opportunities and open protocol problems. If any of these catches your attention, come speak to us. Most of the FEVM team is actually here on site; these are our faces, so come find us and say hi. And before I go, I'd like to invite you to the FEVM Open Hack Day, which is happening on the 29th of November at Adgarajem in Jardim, Novdabadil. We hope to see you there. Thank you very much.

My name is Boris Mann. I'm the founder of Fission, and I'm here to tell you about IPVM, the Interplanetary Virtual Machine. So what do we even mean by IPVM? We have a lot of IP acronyms these days, with no sign of slowing down. For starters, this means adding a blessed WebAssembly virtual machine into every IPFS node. Like data, compute should be ubiquitous. Building on this foundation enables a number of things across the IPFS network, but for today, we'll focus on IPVM replacing serverless platforms like AWS Lambda with open protocols, to power the future of computing for humanity. So what does it mean to build the HTTP of compute, to make compute as ubiquitous as what we think of as the web today? We're not talking about static web pages anymore. We're talking about combining and recombining data with compute for the many things that Juan has talked about today. It means we can do things like take data, compute over it, and then cache the result across the entire network, which in turn other people can compute on top of. It means being able to compute locally as well as remotely, instead of having to have pre-negotiated arrangements with the large commercial, US-only hyperclouds that will control our future unless we build this together. There are many other projects that this connects to. This is not just us; it's all of us: the Filecoin Virtual Machine, Fluence's Aquamarine, Cloudflare Workers, Bacalhau, web3.storage, and things like the IPFS function network.
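The "compute over data, then cache it across the network" idea rests on content addressing: if a job is identified by the hash of its function and its input, any node that has already run it can serve the result. Here is a small sketch of that memoization pattern; the CID construction and class names are simplified stand-ins, not the IPVM specification.

```python
import hashlib
import json

def cid_of(obj) -> str:
    """Toy content identifier: hash of a canonical JSON serialization."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

class ComputeCache:
    """Sketch of IPVM-style memoization: a job is identified by the CID of
    (function name, input CID), so any node can reuse a cached result
    instead of recomputing it."""
    def __init__(self):
        self.store = {}     # job CID -> cached result
        self.runs = 0       # how many times we actually executed the function

    def run(self, fn_name, fn, inputs):
        job_cid = cid_of({"fn": fn_name, "input": cid_of(inputs)})
        if job_cid not in self.store:
            self.runs += 1                  # cache miss: execute and store
            self.store[job_cid] = fn(inputs)
        return self.store[job_cid]          # cache hit: serve the stored result
```

In a networked setting the `store` would be the shared content-addressed layer rather than a local dict, which is what lets one node's computation become everyone's cached result.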
So we'd like you to join the IPVM working group. My co-founder and CTO of Fission, Brooklyn, leads that process. We already have GitHub discussions live and regular community calls, and we're coordinating with the Compute Over Data working group. We're growing our team with some great new members who are going to help build this for all of us. But what even is Fission? We're a blue-green team that builds protocols so that other people can build great platforms and products on top of them. It's not just us. It's been amazing this past year to be invited to be part of the Protocol Labs network and understand what it means to build together. At Fission, we're still trying to get better at this, but we think this is a phrase everyone should start adopting: let's become ecosystem-obsessed. We build a number of protocols, moving things from research and graduating them into ecosystem projects that many people can build on top of. These are some of the ones we have in the pipeline. We also have a side gig of creating cute stickers and mascots, so feel free to drop by and get some from me during the week. There are amazing folks already building on top of this, both within the PLN and beyond it. You'll hear more later in this presentation about how Web3.Storage is adopting UCAN, and we hope it can spread everywhere, connecting our ecosystem better. Webnative is what Fission takes to mean that any front-end developer can take this stack and build on top of IPFS today. We've got a number of things in active development. Today, you've heard about the IPVM working group, and we're also having sessions later this week about the Filecoin account working group and private data on Filecoin, built on these lower-level protocols. One last thing: I'd like to invite you all to our first conference on the future of computing. Causal Islands will take place in Toronto in April 2023. I hope to see you there.
Filecoin storage and retrieval is actively crossing the chasm from development to productionization. In 2022, we shipped a number of new products to help solidify the full lifecycle from data onboarding to retrieval. This is at the core of what we work on here in IPFS and Filecoin, and it's essential to actually storing data and being a great network for preserving humanity's most important information. In this Filecoin system map, storage requests flow from storage clients to domain-specific on-ramps to storage providers, who verifiably maintain storage over time. Retrieval requests flow from retrieval clients to retrieval providers to storage providers and back, using a network of indexing services to identify where the desired data is stored. Retrieval providers offer an additional layer of caching, along with improved performance and accessibility. Let's dive into this flow, starting with the on-ramps. NFT.Storage is a simple developer on-ramp for NFT creators, now supporting over 95 million NFTs, 275 terabytes of data stored in Filecoin, and NFT marketplaces like OpenSea, Magic Eden, and Rarible. Moving on to the storage provider section: Boost is the new Filecoin markets node, shipped earlier this year in May. It's scaling data onboarding for storage providers and also adding new retrieval transports, including HTTP and now Bitswap, to support easy retrieval via IPFS clients like Kubo. Moving on to the next section, indexing: the network indexer ingests billions of CIDs every week from over 120 Filecoin storage providers and large-scale IPFS pinning services. There are currently six independent indexer mirrors providing accelerated routing lookups for both IPFS and Filecoin content discovery. Retrieval providers have seen the most innovation in 2022. The IPFS gateway added integrations with the network indexers and Boost for fast lookups and retrievals across Filecoin storage providers.
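The indexing step in this flow can be pictured as a content-routing table: providers advertise the CIDs they hold (along with which retrieval transports they support), and retrieval clients look up providers for a CID. The sketch below is a deliberately simplified, in-memory illustration; the real network indexer protocol and record format differ.

```python
class NetworkIndexer:
    """Toy content-routing index: storage providers advertise the CIDs they
    hold, and retrieval clients look up provider records for a CID."""
    def __init__(self):
        self.index = {}          # cid -> set of (provider_id, transports)

    def advertise(self, provider_id, cids, transports):
        """A provider announces a batch of CIDs it can serve."""
        for cid in cids:
            self.index.setdefault(cid, set()).add((provider_id, transports))

    def lookup(self, cid):
        """A retrieval client asks: who can serve this CID, and over what?"""
        return sorted(self.index.get(cid, set()))

# hypothetical provider IDs and truncated example CIDs
idx = NetworkIndexer()
idx.advertise("f01234", ["bafy...a", "bafy...b"], "http,bitswap")
idx.advertise("f05678", ["bafy...a"], "graphsync")
```

A client that gets back multiple records can then pick the provider and transport that suit it best, which is how the gateway's indexer integration speeds up retrievals.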
The ipfs.io gateway has also scaled to over 12 million weekly users and 1.5 billion requests, which is pretty impressive. Saturn is a new Filecoin CDN, already operating 62 L1 testnet nodes with 64 terabytes of data retrieved daily and over 1 billion retrievals every week. Station is a Filecoin worker node that lets you put spare computing resources to work in the Filecoin network and start earning Filecoin. Finally, highlighting the retrieval clients: IPFS- and gateway-powered browsers like Brave, Chrome, Firefox, and more connect end users with data on IPFS and Filecoin, completing the full lifecycle from data storage all the way through retrieval. So that's our storage and retrieval lifecycle. There's been a ton of work put into this system, helping connect all of the pieces together and making sure folks can have smooth, robust, and reliable retrievals over time. Next up, let's hear from the amazing startups harnessing these R&D breakthroughs.