Welcome to Lab Week. We have a great schedule of events the entire week. We're kicking things off with PL Summit, which will have four sections today. We'll go through the PL vision, PL impact, the PL infrastructure that you can leverage, and we'll talk about next year. Protocol Labs drives breakthroughs in computing to push humanity forward. This is our mission and vision. In order to push humanity forward, we need to take on a whole range of missions, lots of hard projects that involve lots of people around the world, thousands, tens of thousands, sometimes millions of people: things like securing the internet and establishing digital human rights, things like upgrading our economies and governance systems, things like developing safe transition technologies. In order to do all of this, we need to drive breakthroughs in computing tech and other surrounding technologies. The best way to do that is to accelerate the R&D pipeline with an innovation network. In this section, I'll talk about that. We're the beneficiaries of many centuries of amazing global improvement. On almost every measure that we can think of, the human experience is getting dramatically better. You can pick up any history book, you can look at any measured range of impact metrics, and humanity is getting dramatically better off every century. Now, at the same time, we have developed technologies that we can use to wipe ourselves out. We're also wiping out most of the other species on the planet, which is a terrible disaster, and we're pushing the planet beyond boundaries that we think are safe, not just for most of the species on the planet, but for ourselves. So we need to make it through this century.
To make matters even more complicated, we now have technologies that can rewrite ourselves, that can rewrite our species, that can rewrite our genetic code, that can rewrite and transform how we compute with each other, how we communicate, how we control the world. To make matters even more complicated, we have inadequate macro systems in the world to organize ourselves. The large-scale systems for coordination worldwide were set up and inspired in the 1700s, evolved somewhat in the 1800s and the 1900s, and they haven't really seen a major upgrade to adjust to the different timescales of today. You have lots of different organizational systems in the world that can't really guide the transformations in this century. This is why things like climate change are stuck. This is why we can't reach agreements to save the billions of people around the planet and the tons of other species everywhere. So this is a major fundamental problem that we must fix. So this is an extremely critical century, and that means every decade we'll see extremely difficult challenges and extremely high-potential, optimistic visions for the future. So it's up to us to change the trajectory of the world to optimize for a much better set of outcomes. Now, humanity has an enormous potential. We could, like Sagan says, go off to the stars and colonize the entire galaxy, potentially come in contact with other species, potentially come in contact with many other civilizations, and potentially even do long-range interstellar travel to visit many galaxies in other places. Our destiny, in a sense, is to keep upgrading and eventually reach different levels of the Kardashev scale. But the point is, if we mess it up in this century, the entire four-billion-year history of life on this planet could come to an end. And who knows, maybe some life will survive, but if we get things bad enough, then maybe we wipe out all life on this planet.
So that would be a tremendous tragedy. And unfortunately, we have the power to do that. It's an extremely, extremely difficult challenge to work through. Now, the optimistic perspective here is that when you look back at humanity's history, we've faced monumental challenges throughout, and every time we've come through, we've leveled up, we've grown, we've solved those problems, and we've gotten to a much better place. And so this century, we, the people in this room, and the thousands of people who will listen to this, need to solve these problems. And we need to take responsibility for steering ourselves into a much better future. And along the way, we should do that with all of the species around us and flourish together. So that is what we mean by pushing humanity forward. In order to do that, there are lots of problems that we ourselves are working on. Lots of concrete, hard challenges in our day-to-day, in a few years, or in a decade. So let's dive into some of these. Lots of people are very familiar with them, but it's useful to check in on them. And one of the coolest things about our network is that there are so many teams and projects working on these concrete missions, collaborating and going way faster by being able to leverage each other. So PL started with a strong orientation towards improving the internet. In the last 80 years, computing has radically transformed our species. We now have superpowers beyond what our ancestors imagined. Billions of humans and trillions of computers live and work together, deeply integrated. And we're starting to generate artificial intelligence agents that are now also getting integrated. Most human activities are assisted by computers, and the internet is this amazing superpower machine where humans can get together, dream up superpowers, and deploy them around the world. Now at the same time, that same set of systems could be used for terrible things.
Digital totalitarianism is very possible using the technology of today. You don't need any improvements to technology to create terrible places, terrible states. And so it's up to us to create better infrastructure that protects human rights and creates a censorship-resistant and privacy-oriented infrastructure. Now these are the thoughts that inspired the Web3 movement, that caused so many of us to work in this field, to work extremely hard to put in place new infrastructure. At the core of it is creating a trust layer through verifiability. Now that translates to a platform that can enable us to build digital human rights and enshrine them directly in the technologies that we use today, enshrine them directly in our supercomputers, enshrine them directly in your phones and all the devices that you use. Those human rights translate to a specific set of systems that we need to put in place, develop, and get adoption for. And what's really, really awesome is that so many teams across the network are working on all of these kinds of things. So many more logos can be connected to these specific human rights, these specific human systems. So one model that I've been using for the last year or so is to think about our human superpowers being this great thing that rides on top of the applications that we can develop. And that leverages the personal computing devices that we have. And that rides on this amazing scalable and secure computing infrastructure that we've built around the world. The internet gives rise to this thing, but the internet itself is organized through a whole set of contracts, license agreements, terms of service, corporations, governments, courts, and so on. And unfortunately, those systems at the bottom are fickle. There are lots of pressure points across them that can cause them to unwind the layers of the internet and to create a system that does not preserve human rights.
The internet is not the same around the world. There are lots of places where this kind of connectivity is limited. And so in this era where humanity is totally transforming, where these exponential technologies are giving extreme powers to society, we need to make sure that that level of access is the same for all humans around the world. And we need to make sure that it's put in place on a foundation that is stronger than contracts. Things like math, incentives, economics, and robust governance systems in place worldwide would be extremely useful to build the internet on top of. So that's ultimately what Web3 is really about. It's about building that underlying infrastructure layer out of public internet digital utilities. When you think about blockchain, all it really is, is a system for coordinating a lot of parties to divide different sorts of work and organize them towards providing some service to a large community. You can build amazing utilities out of that kind of structure and, with that, enshrine these digital human rights. So tons of projects around the network work on this day to day, lots of projects with high interconnectivity, tons of teams are working on this set of missions, and I'm so proud of all of the work that we've achieved over the last 10 years. Now, one other mission that has been cropping up in the last few years is that, now that we have crypto systems and crypto economics, and we have better tools to scale governance systems, we should be using them to improve all kinds of governance questions: around our resources, around how we organize ourselves. And so this kind of movement is directly going towards fixing this type of problem. It was born out of reflecting that we currently have deeply inadequate economic and governance systems that time and time again show us that we are incapable as a species, globally, of coordinating to solve extremely difficult large-scale challenges.
Climate change is probably the most well-known of these. But there are other huge failures along the way, things like not getting fusion decades ago. At the same time, the level of funding required for these kinds of things is vastly smaller than the level of funding that we give to many other kinds of products and services, things that, when push comes to shove, we would absolutely give up to have these amazing technologies. At the core of this problem is a fundamental misalignment in how we route our capital, how we route our resources, how we govern those systems, how we value things. And we need to be able to rewrite those rules to end up with much better and much better-aligned structures. To get there, which is a huge monumental challenge, we have to rewrite and restructure how our systems operate and how they scale. And one of the areas that we're most involved in is thinking about how R&D capital forms and gets deployed towards building these technologies. Now, we have crypto economics, which is amazing. We can coordinate our way out of these sorts of problems at massive scale. The prior coordination systems before the internet relied on extremely slow processes, and blockchains have shown that you can scale and coordinate millions of people around the world towards these kinds of goals. We're playing with extremely powerful levers, and we need to orient them towards achieving this optimistic vision of the future. One way to think about this is to use mechanism design to orient and align everyone towards collaboration itself. And a lot of this sub-movement of the crypto community is very oriented towards prototyping and experimenting with these mechanisms to be able to scale them. So a lot of this work is happening in conferences like Funding the Commons and Schelling Point. A lot of this is happening in the PL and Ethereum communities. A lot of this is happening in the Filecoin community.
It's going to be amazing to experiment with these sorts of things. There's a whole range of mechanisms to try. And we're now at a point where the ideas have manifested into lots of composable primitives that we now need to reorient and use in a lot of our other systems. I'm also extremely proud to reflect that so many of the teams across our network are working on these fundamental building blocks and interconnecting them with large-scale systems and large-scale levels of resource allocation. One possibility here is that as crypto networks grow, they're going to be able to start allocating capital and resources at the scales of nation states. We're about one or two orders of magnitude away from that. And if we can orient these crypto networks to fund massive-scale public goods for all of humanity with this new form of public capital, that would be an incredible achievement. And so that's the potential and that's what's at stake. Now along the way, a number of our teams and projects and people have been focused on other interface technologies, on other areas of computing, things like virtual reality, augmented reality, AI and AGI, robotics, brain-computer interfaces, and more. I tend to use this graph to reflect that in the last 80 years computing has scaled to where we are today. If you project that forward another 80 years, it gets extremely difficult to predict where we'll be. Lots of different technologies in the next 10 or 20 years are going to radically transform how we operate. And so this is where having a very strong R&D pipeline as a group, allocating resources, projects, and people towards extremely hard challenges, can create breakthroughs much faster than other systems. If we can orient ourselves well towards building and shaping those technologies towards good outcomes, we can make it safely through the century and we can end up realizing that potential.
The technologies being built now, especially in AGI and especially in brain-computer interfaces, have the potential to put us on very different trajectories, some extremely negative, some extremely positive. And so this is a critical decade to orient a lot of our energy towards achieving success there. This is probably the newest area in our network, and there are already a number of extremely successful projects that are monumental efforts in that ecosystem. And lots of teams and groups are coordinating and orienting many other teams. Now, let's get into how we do all of this work. As a group, what we just discussed is an enormous spread and span of technologies, problems to solve, areas for adoption, product generation, companies to scale, and so on. We can do this by working together and collaborating on both short and long-term scales. When you think about the technologies that we use today, they stem from scientific improvements: by expanding what we know and discovering more about the universe, we're able to grow our knowledge tree, we're able to increase the knowledge that we as a species are able to harness. Then we translate that knowledge into capability expansion. We use technologies to shape and reshape our world and reshape ourselves, and that capability expansion is a separate process. But these are really two sides of the same coin. They're really an integrated process, and separating them in our minds and our societies has rate-limited our growth here. I tend to talk about this innovation chasm between the two, where science and academic credit are orienting the knowledge-expansion side, and technology development, product development, corporations, and the capital structures are orienting the technology build-out. That just falls out of the level of resources that you need to create improvement in one of these.
In centuries past, different incentive fields operated; for example, 500 years ago, knowledge expansion was driven primarily by curiosity, not by credit. Now, we need to find a way to bridge this chasm in the middle, either by creating a new incentive field there that can bridge projects in that area, or by dragging some of these incentive fields over and connecting them. So think of this R&D pipeline going from early research, to fundamental development, to productionizing some early prototypes, to getting to some product that can actually deploy the technology at massive scale, and then from there growing it and getting it to larger and larger levels of scale. This is what we mean by the R&D pipeline, and it spans a 10-to-20-year life cycle for tons of projects and technologies. This involves tens, hundreds, thousands of people per innovation. This involves tens, hundreds of organizations per innovation. And so if you want to deal with a system at this scale, you have to do it in a network-oriented environment. So this is the pipeline diagram that we are orienting with and that we now have great shirts for. By the way, pick up a shirt while you're here. It's a pretty good pipeline shirt. This pipeline has a rift in it, and the incentive structure yields different ways of organizing people and groups. And because of these incentive fields, you end up with very different types of organizations. On one side, you have corporations. On the other side, you have universities, research groups, nonprofits. And the type of funding is very different. You have for-profit equity dominating the corporation side, and you have individual philanthropy and donations on the academic credit side. And in the middle, there's a lacking incentive field. This is another way of visualizing the chasm. There's not a lot of donations happening in that fundamental development spot. And that's why this thing is broken.
So that's the area where we need to build better systems. So how do we do this? How do we cross that chasm? How do we bridge that layer? We're going to do this with an innovation network. Instead of thinking about it as a single company, where a group could go and find an amazing business and scale it, which is the traditional Silicon Valley perspective (create a particular business, scale it, get a massive cash flow, and then orient it towards solving all kinds of problems), we're doing this with an innovation network. When you have the span and breadth of the technology set or mission set that we just discussed, it's intractable to do it as a single company. It's intractable to do it as a single organization. And we have lots of examples over the last 100 years of that failing. One of the things that really showed it to me was comparing Alphabet and YC. Alphabet is sort of the height of that traditional Silicon Valley approach of organizing capital and resources and projects in one organization. And YC totally ate its lunch. YC was able to organize similar numbers of people with a much simpler structure and has actually transformed R&D at a much larger scale with a fraction of the capital. And that shows you the power of a network-native structure. If you build networks, open, permissionless environments where groups can freely associate and loosely align on goal sets, you can have a much higher scale of success. We've seen this happening in crypto networks as well. Crypto networks have created these highly innovative environments. They're also innovation networks in their own way, with a very different structure and very different kinds of systems. So fundamentally, if you approach and think of this as a single organization, that's just going to have a bad end game and a bad structure, where eventually it's not going to be able to scale beyond a certain size.
But if you approach this as a network, if you approach this as a group of people, a group of organizations, a group of systems, working together and loosely aligned, loosely coupled, sharing resources when it makes sense, sharing goals and projects when that makes sense, we can succeed together at a vastly larger scale than single organizations. So in order to power this kind of network, we can leverage the equity model, the very successful investing structure that Silicon Valley pioneered with venture capital, and use it to fund lots of startups across these missions and then translate that success into early R&D to accelerate the pipeline. If we can create this loop, where not only are we supporting and creating and succeeding with lots of startups, and we're solving lots of problems by doing so, but we're also able to drive some of that capital allocation to the chasm, where it's extremely difficult to invest, where lots of groups can't deploy capital. And so this is the kind of network-native funding system that we're developing, mixing traditional venture capital structures with these new impact capital or network capital structures. We're creating a whole range of ways to interconnect existing systems, existing fund structures, existing donors in this funding structure that can allocate and route capital towards the chasm. So this is how we solve that key problem. One way to think of how PL fits in this landscape is that the YC group and the YC network are primarily focused in the area of productionizing: when you already have something that might be a very successful business, or you already have an MVP, or potentially already have an early product with traction, they help you accelerate it to get to the next stage. And YC is closely interconnected with the rest of VC and the rest of venture capital.
The way that we think of it at PL is that, because that system is already working really well (you can of course improve it), the bigger problem that we're focused on is earlier, in the chasm. And so we want to create this connective tissue across this entire pipeline to orient and support teams across all of these stages and leverage the capital flows to be able to do this R&D much faster. So this is the whole pipeline, and one key differentiator here is the incorporation of network-native service teams, service providers that are able to support lots of teams in their journeys with common functions, common features, and those service providers can support and grow lots of these missions without having to replicate all of those processes within every single organization. So at the end of the day, Protocol Labs is here to drive breakthroughs in computing to push humanity forward with a very ambitious set of missions. And we're going to do it together as an innovation network, spanning the entire pipeline with an amazing group of people, teams, and networks built upon open source systems, open source tech, open source coordination mechanisms, leveraging the crypto and VC business model to create, support, and grow world-changing projects. So that's what PL is about. And for the rest of the day, we're going to talk through all of the impact that's happening across the network. We're going to talk about the infrastructure that supports those teams. And then we're going to look ahead to next year. Thank you. Hello, everyone. I'm Molly McKinley, and I'm excited to introduce PL projects. Protocol Labs is an innovation network that starts, supports, and grows breakthrough computing projects at the foundation of the Internet. These projects power a thriving network of companies and teams across Protocol Labs, working on everything from decentralized storage to randomness beacons to Web3 compute networks.
Today, we'll hear deep dives on IPFS, Filecoin, and more. But first, I want to highlight four other projects at the foundation of Web3 innovation and development. libp2p is the modular networking stack of Web3, used by many blockchain networks and peer-to-peer applications for content discovery and routing. This year, libp2p-powered networks have grown massively. libp2p is now powering over 250,000 IPFS nodes, 3.3k Filecoin storage providers, 12,000 Ethereum 2 consensus clients, and 13,000 Polkadot parachain nodes. libp2p is also the networking layer of over 20 other blockchain networks like Optimism, Celestia, and Polygon, collectively securing over 300 billion dollars in market cap. Bacalhau is the IPFS of compute networks, enabling users to distribute compute jobs to run where data is stored instead of in centralized cloud data centers. Bacalhau is built on IPFS and Filecoin and designed to work over distributed data sets and networks. Over 30 organizations are building with the compute-over-data working group, including BOINC, Prelinger, Fission, and Ceramic. Bacalhau already powers a large community of compute-over-data projects in the Protocol Labs network, such as Expanso, an edge computing company designed for enterprise clients; WaterLily, a generative AI NFT smart contract on FVM; and Lilypad, a Filecoin layer-2 compute network building on Bacalhau, FVM, and Interplanetary Consensus. WeatherXM is the world's largest and fastest growing weather network. With WeatherXM, station operators around the globe contribute local weather data in exchange for tokens, and customers spend WXM tokens in exchange for weather data and services, such as more accurate localized forecasts. Today, WeatherXM stations have already contributed more than one million station-days of data across 71 countries. Tableland is an open source decentralized cloud database, which pairs real-time data availability with a Filecoin vault for preserving historical and archival application data.
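The compute-over-data idea behind Bacalhau, sending the job to wherever the data already lives instead of hauling the data to a central cluster, can be sketched with a toy scheduler. Everything here is illustrative: the `Node` class, the `schedule` function, and the CID strings are invented for this example and are not the real Bacalhau API.

```python
# Toy sketch of "compute over data": instead of pulling data to a
# central cluster, route the job to a node that already stores the input.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    stored_cids: set = field(default_factory=set)  # data this node holds

def schedule(job_input_cid: str, nodes: list) -> "Node | None":
    """Pick a node that already holds the job's input data, if any."""
    for node in nodes:
        if job_input_cid in node.stored_cids:
            return node  # run the compute where the data lives
    return None  # no local copy: a real system would fetch or replicate

nodes = [
    Node("edge-a", {"bafy-dataset-1"}),
    Node("edge-b", {"bafy-dataset-2"}),
]
chosen = schedule("bafy-dataset-2", nodes)
print(chosen.name)  # edge-b
```

The point of the sketch is just the placement decision: data locality drives where compute runs, which is the inversion of the usual cloud model described above.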
WeatherXM, among many others, uses Tableland to trustlessly categorize, store, and make available all of their weather data on Filecoin. These are just a few of the many projects being built in the Protocol Labs network. Stay tuned for more updates throughout Lab Week on their awesome progress and adoption. Now, let's hear about IPFS. Hello everyone, I am Mosh and I'm here to talk IPFS. IPFS is a long-term project to improve the internet's building blocks. It organizes data with content identifiers all the way down and shares it across peer-to-peer networks. First released in 2015, it has a long and storied history of making the web more open and more resilient, securing digital human rights. Today, the project has a lot of surface area. We have a protocol, 12 implementations, and a huge community. It's getting accepted bit by bit into major browsers and web standards, and the huge public network is a decentralized CDN for the world, resilient and censorship-resistant, with a quarter million nodes and over a billion CIDs published to the world. And it's getting better. Exciting features are coming soon to push work out from the gateways to local nodes and browsers and make the whole network more efficient. Now, IPFS is modular, so people remix it to solve all sorts of different problems. In the data management category, there's snapshot voting, scientific data, distributed compute. Starling is doing very important human rights work, verifying archives and content with CIDs, putting them on chain, and submitting them to international governing bodies. Bluesky, using IPLD, now has two million users in their open social network. IPFS is used for resilient networks, from censorship resistance to space satellites to content platforms; anytime you're passing data around, IPFS can probably help you do it better. Now, I want to highlight some big changes in IPFS this past year and give you a sneak preview of what's next. Hang on. The Go IPFS repo isn't loading.
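The "content identifiers all the way down" idea can be illustrated with a tiny sketch: an identifier is derived from a hash of the content itself, so identical bytes always produce the identical ID and any change produces a new one. Real CIDs add multicodec and multihash prefixes plus multibase encoding; the `content_id` helper below is invented here purely to show the principle.

```python
# Minimal sketch of content addressing, the core idea behind IPFS CIDs:
# the address of data is derived from a hash of the data itself.
import hashlib

def content_id(data: bytes) -> str:
    # simplified stand-in for a real CID (no multihash/multibase here)
    return "id-" + hashlib.sha256(data).hexdigest()[:16]

a = content_id(b"hello ipfs")
b = content_id(b"hello ipfs")
c = content_id(b"hello ipfs!")

print(a == b)  # True: identical content, identical identifier
print(a == c)  # False: any change to the bytes changes the identifier
```

This is why content-addressed data is verifiable and censorship-resistant: anyone holding the bytes can re-derive the identifier and prove the data is what it claims to be, no matter which peer served it.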
Oh, it's because we renamed it. It's called Kubo now, and here's how that happened. A little over a year ago, we said, hey, IPFS needs to be available everywhere, to any developer, to any user. But let's say you're putting IPFS on a mobile device. You need very different behaviors, networking, battery consumption, et cetera, than in a gaming machine or a research cluster. So Go IPFS changed its name, and we stopped calling it the reference implementation. And instead, we committed to a many-implementations strategy, built by the community, driven by the community. And it worked. It's not just languages. Many of these are optimized for certain environments or verticals. Myceli is taking IPFS to space. Rainbow for gateways. Bacalhau, as you just heard, is IPFS for decentralized compute. Iroh is live, and their hosted service launches next week. We have Unreal and Unity gaming engine plugins, and an IPVM implementation called Homestar. We said we wanted a Cambrian explosion of different implementations for different use cases, all focused around the organizing principles of IPFS. In one year, we got one. Yeah, let's hear some noise for all the teams and people that made it happen. So it's great timing for what's next. For IPFS and libp2p, we are setting up independent software foundations with open protocol governance. Teams currently inside PL will become independent entities, and public goods funds will fund the work of building these public goods. Most of this should be set up by the end of the year so we can start 2024 with a bang. Now, what can you do with and for IPFS and libp2p? Use them. Solve all kinds of data and networking problems. Now that there are so many flavors, you really have no excuse. If you want to jump in the discussion forums, open up a conversation, share some ideas, share what you're building. If you want to make the protocol designs better, hey, PR the specs repos.
If you know groups who might want to join forces and fund these public goods, talk to us. Two events where you can find folks this week, because there's so much more happening than I was able to cover in such a short time: libp2p Day is on Wednesday and IPFS Connect is on Thursday, both right here in this building in Istanbul. We hope to see you there. Hi, everyone. My name is Colin and I'm delighted to give an update on the Filecoin project. Filecoin is a crypto-powered storage network whose mission is to create a decentralized, efficient, and robust foundation for humanity's information. Today, we are proud to say that Filecoin is the scalable data storage layer of Web3. After recently celebrating its third anniversary, Filecoin now powers 99% of total Web3 storage capacity and storage utilization, but that still falls well short of the ambitions for this project. While today Filecoin is the data storage layer of Web3, the community is actively making it the scalable data storage layer of the web at large. And to accomplish that goal, we've initiated a three-step master plan to get there. Step one, build the world's largest decentralized storage network. We have largely gotten there. Filecoin now has 3,300 storage provider systems across 40 different countries providing 10 exabytes of capacity. That makes it the largest decentralized storage network in the world by far, and now bigger than many centralized players in the Web2 world. Storage providers are also supported by top-tier global hardware vendors, including Seagate, AMD, Supermicro, and many more. Step two, onboard and safeguard humanity's data. Filecoin now stores 2 million terabytes of client data. That's the equivalent of 500 million 1080p movies. Total clients onboarded has 4x'd year to date and has 19x'd in the last 18 months. This is all during a crypto bear market.
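A quick sanity check on that movie equivalence: 2 million terabytes against 500 million 1080p movies implies an assumed size of about 4 GB per movie, a typical 1080p file size. The per-movie figure is my back-of-the-envelope inference, not a number from the talk, and decimal units are used for simplicity.

```python
# Back-of-the-envelope check: what movie size does the quoted
# "2 million TB = 500 million 1080p movies" equivalence assume?
total_tb = 2_000_000        # client data stored on Filecoin
movies = 500_000_000        # claimed 1080p-movie equivalent

gb_per_movie = total_tb * 1_000 / movies  # 1 TB = 1,000 GB (decimal)
print(gb_per_movie)  # 4.0
```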
Behind that data are 1,850 different data clients, more than a third of which store large datasets of 100 terabytes or more. This includes many recent case studies, like powering dark matter research for UC Berkeley and scaling data storage for CERN's ATLAS project, as well as many Web3 use cases like Solana, OpenSea, Gala Games, and many, many more. In addition, we see a flurry of applications joining the network. In just the last three years, Filecoin and IPFS accelerators have graduated over 300 startups, who in turn have raised over 400 million in total funding from top-tier VCs, which is super impressive. We think Filecoin client adoption is now ready to cross the chasm. As we continue to improve the data onboarding experience, we can migrate from early adopters like research orgs or higher education to clients like enterprise, SaaS, and AI. Great. So step three, bring compute to the data to enable web-scale applications. This is enabled through the launch of three different technologies: the Filecoin Virtual Machine, compute over data, and Interplanetary Consensus. Now, FVM launched in March of this year, bringing smart contracts and user programmability to Filecoin and unleashing the potential of an enormous open data economy. I'm proud to say that since FVM launched, total value locked has grown to $145 million, growing at 46% month over month. In a few short months, Filecoin has emerged as a top-25 blockchain as measured by TVL. And not just that, 200 teams are building on FVM, generating over 2,000 unique smart contracts, and recent ecosystem partner launches include Axelar and Celer bridges, MetaMask and Brave wallets, with decentralized exchanges like SushiSwap and Uniswap coming very, very soon. In 2024, we'll see large-scale compute networks launching on Filecoin, which will enable massive Web2-style applications to make the transition to Web3 tech.
You can think of this as many different types of L2 compute networks joining Filecoin's L1, optimized for different properties, whether that's privacy, verifiability, or performance. There are currently over a dozen teams working on solutions in the Compute over Data working group. Finally, in 2024, we'll launch Interplanetary Consensus. This is a next-generation consensus scalability solution. With IPC, we can move from hundreds of transactions per second to billions of transactions per second in the future. It does so by enabling a tree of scalable L2 networks with different custom or regional subnets. This ushers in a completely new phase for Filecoin, where substantial L2 projects can build on Filecoin's L1 storage capabilities. I'm talking about different compute networks like Lilypad and Fluence and Lumino, to storage and database networks like Lighthouse and Tableland, to retrieval networks like Saturn, Station, and Titan. Filecoin will migrate from being a storage network to becoming the fundamental Web3 infrastructure for planetary-scale applications. So that's been our progress across the Filecoin master plan. Step one, build the world's largest decentralized storage network. Step two, onboard and safeguard humanity's data. And step three, bring compute to the data to enable web-scale applications. Please, please join us in that mission of making Filecoin the scalable data storage layer of the web. You can check out sub-sessions at Lab Week. We have Filecoin Day all day tomorrow, starting at 9 a.m., followed by Fil+ Day. Filecoin DeFi and Staking will happen on Wednesday morning, and Scaling with IPC will happen Thursday all day. And as an added bonus to end the year, FIL Bangalore is happening on December 3rd and 4th. Thank you very much.

Hi, everyone. I'm Joel, and I'm going to give you a quick update on the Ceramic project.
So Ceramic is a decentralized data ledger, essentially allowing you to store signed claims and attestations off-chain in a peer-to-peer network, while time-stamping them onto a blockchain. What we've been up to recently is splitting Ceramic into two layers. The base layer, which is basically what we're calling Ceramic, is a peer-to-peer event streaming protocol. Here, we've been building a new implementation in Rust. We've been implementing a new libp2p sub-protocol called Recon, which scales peer-to-peer data replication for expanding and mutable data sets. And we're exposing this as an event streaming API, which you can use to build databases. ComposeDB is the first database that's built on top of Ceramic. It provides you with a GraphQL interface for reading and writing data, but also for defining data models. These data models essentially allow you to create relational graphs of data, and they are public, discoverable, and composable. So you can use data models from other applications, and we can all build different applications that compose on the same data. And all the data that's written to Ceramic is reputational, because it's signed claims or attestations by users. By now we have almost one million unique accounts, and there are more than 15,000 data models on the network. To give you a brief overview of some of those projects: some of them build squarely in the reputation space, like Gitcoin and Disco. In the knowledge space, projects are leveraging the ability to have a clear history of reputation and combining that to manage knowledge better. So Coordination Network is combining project road-mapping together with LLMs to make research planning much faster. DeSci is building tools for better science publishing and peer review. There's also social tooling that leverages this reputational substrate from Ceramic to make social more trustworthy, direct, and decentralized.
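The core idea described above — streams of signed, hash-linked events that anyone can verify — can be sketched in a few lines. This is a toy, not the Ceramic protocol: real Ceramic uses DIDs and DAG-JOSE signatures rather than the shared HMAC key assumed here, and the function names (`sign_event`, `verify_stream`) are illustrative.

```python
import hashlib
import hmac
import json

def sign_event(key, payload, prev):
    """Create a signed event that links to the hash of the previous event."""
    body = {"payload": payload, "prev": prev}
    encoded = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": hmac.new(key, encoded, hashlib.sha256).hexdigest()}

def event_id(event):
    """Content-address the full event, signature included."""
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def verify_stream(key, events):
    """Check every signature and every hash link back to the stream's start."""
    prev = None
    for ev in events:
        encoded = json.dumps({"payload": ev["payload"], "prev": ev["prev"]},
                             sort_keys=True).encode()
        good_sig = hmac.compare_digest(
            ev["sig"], hmac.new(key, encoded, hashlib.sha256).hexdigest())
        if not good_sig or ev["prev"] != prev:
            return False
        prev = event_id(ev)
    return True

key = b"user-signing-key"
e1 = sign_event(key, {"claim": "attended LabWeek"}, None)
e2 = sign_event(key, {"claim": "spoke at PL Summit"}, event_id(e1))
assert verify_stream(key, [e1, e2])
assert not verify_stream(key, [e1, dict(e2, payload={"claim": "forged"})])
```

Because each event commits to its predecessor's hash, tampering with any claim breaks both the signature and the chain, which is what makes the stored claims usable as a reputational substrate.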
And there are also other protocols building on this data primitive to make data in general more trustworthy. Thank you.

Hello, Lab Week. My name's Eric Watson and I'm excited to be here today to give you a quick update on the drand project. drand is one of those computational primitives that Juan mentioned earlier in his speech about how we're recreating a fully distributed internet. drand's networks are global, and they provide unbiased, unpredictable, and publicly verifiable randomness as a computational primitive for many different use cases. You can see here, in addition to our global footprint, as a threshold network we are also very concerned about making sure that we don't put all of our eggs in one basket. So in addition to having a global distribution, we also look very carefully at where our nodes are hosted in terms of their various service providers. And we are actively seeking new service providers in places like Africa, Japan, Australia, and other parts of the world. So if you know somebody that might wanna host a drand node for us, I encourage you to have them check out the League of Entropy. The League of Entropy is what hosts all of our nodes and is a voluntary international consortium of institutions that have come together to provide robust public randomness. How robust? Over the last four years, we've secured over $5 billion worth of tokenized assets through our work by integrating with consensus protocols such as Filecoin. In the trailing 12 months, we've received over 60 billion requests for randomness from our network. And finally, for the last four years, since the inception of the project, we've maintained 100% uptime. Is that pretty cool? I thought that was pretty cool. So I've got an executive scoreboard here for those of you that want to check it out in more detail after the conference, but I just wanna call attention to four quick hits.
The first is, I wanna say a very public thank-you to the Filecoin Foundation for offering us a generous grant that allowed us to fuzz the drand code base. Fuzzing is a form of security audit that helps us make sure it's rock solid. Secondly, I wanna congratulate my drand teammates on shipping drand version 2.0, which is much more modular and has brought additional functionality and features to the product. Third, we're in the process of moving our public dashboard into our own dedicated Grafana instance, which means you'll be able to keep us honest on our 100% uptime guarantee. And fourth, but not least, we will be hosting a League of Entropy Summit here in this building on Wednesday, later this week. So for those of you that wanna geek out a little more on randomness, please come join us.

Hi, everyone. My name is Aisin Demirjan. Thank you for being here today. March 14, 2023 marked a transformational milestone for the Filecoin ecosystem. The Filecoin Virtual Machine brought on-chain programmability to Filecoin, unlocking smart contracts and data-rich applications. The first FVM runtime was the EVM, making it easy for developers to get started with tools like MetaMask, Hardhat, and many more. The FVM made many use cases possible. A few examples include perpetual storage contracts from Lighthouse; Web3 CDN retrieval reward contracts from Filecoin Saturn; and compute and generative AI tools. For example, Stable Diffusion compute contracts from WaterLily generate and mint new NFTs based on the user's prompt. And last but not least, an amazing ecosystem of DeFi lending markets. These exciting applications have been possible thanks to everyone building on the FVM. We have seen 25% monthly growth in the number of unique projects and teams, with an increasing variety of verticals. Already more than 4,000 total contracts have been deployed, 200 teams are building on the FVM, and 640,000 users have created wallets. As a result, the FVM has been accelerating Filecoin ecosystem growth.
Since March 2023, total value locked has exceeded $145 million, bringing Filecoin into the top 25 across all chains. The FVM has been helping connect Filecoin token holders with storage providers, fueling capacity and more stored data on the network. Tens of millions of FIL have been staked, deposited, and borrowed between token holders and storage providers. Looking ahead, the FVM will continue to unleash new possibilities by building new products and expanding our rich ecosystem of partnerships. Uniswap, Coinbase, Brave, and MetaMask are just a few examples. In addition, exciting new programming features are coming to the FVM, including new runtimes and custom user programs. Thank you for being part of this remarkable journey. This is just the beginning.

Me again. I'm here to introduce the amazing R&D projects in the Protocol Labs Network. Innovation in the PL Network spans the research and development pipeline and helps grow many early research projects into full-fledged production networks. There are a number of projects in the PL Network in early-to-mid research stages, where ideas are being developed and de-risked, building into open testnets and nascent new networks. Projects graduate from research into active development, launching alpha releases and attracting early adopters. The Protocol Labs Network also has many products in the active productionization phase, gaining their first 100,000 users and iterating on their development success. And finally, a number of PL Network projects have reached active production with millions of users and scalable business models. As you can see, there are hundreds of projects actively crossing the research-and-development chasm thanks to the PL Network. We're going to deep-dive on a number of these R&D breakthroughs in a moment, but first I want to give a quick overview of a few breakthroughs actively crossing that chasm. Lurk is a Turing-complete programming language for zk-SNARKs.
This Lisp-like programming language allows verifiable computation over private data, or in zero knowledge, so you can unlock distributed computation without sacrificing privacy. Filecoin Saturn is a Filecoin retrieval market and decentralized CDN, made up of independent node operators around the world. It operates on top of the Filecoin storage provider network and blockchain, which provides extremely low-cost storage and efficient decentralized orchestration. For developers, Saturn unlocks hot storage on Filecoin for snappy load times for users all around the world. Ingonyama is a next-generation semiconductor company creating hardware to accelerate zero-knowledge proofs. They build Icicle and Blaze, open-source libraries for ZK acceleration using CUDA-enabled GPUs and FPGAs. Station is a Filecoin worker node where anyone can deploy and fund internet economy jobs, harnessing underutilized hardware resources in exchange for FIL. The first Station module is Spark, a trustless protocol for testing retrieval performance from Filecoin storage providers. The Meridian framework allows anyone to build and deploy block reward incentives for new protocols on the Station network of nodes. Station is now available for anyone to participate in the Filecoin data economy at filstation.app. And finally, Lit Protocol is a distributed key management network for decentralized identity, encryption, signing, and compute. It allows anyone to store private encrypted data on the open web and use Lit as a secure access control layer to provision keys. Now, let's dive into the research breakthroughs in FHE with Zama.

Hello, everyone. I'm Pascal, the CTO here at Zama. As you know, our focus is on fully homomorphic encryption, FHE, and on making it user-friendly for developers. Our main goal is to see FHE being used to secure data in Web3, but also in Web2 applications.
We think that FHE is the ultimate universal end-to-end encryption technology that will power the internet in the years to come. Now, the particular FHE scheme that Zama is using is called TFHE. So why are we using this scheme? Well, because it's the only one that enjoys a very nice feature called programmable bootstrapping, aka PBS. PBS allows you to compute non-linear functions very quickly, and with that capability, we can assemble functional networks that are input-output equivalent to any given function. And because a PBS is also a bootstrapping operation, it is efficient at cleaning the noise in ciphertexts so that they can be propagated further down the network. One of the most prominent things we have built this year is the fhEVM, which allows Solidity developers to encrypt variables in their smart contracts. It's very easy. You don't have to know anything about cryptography at all to use it. You can just use our encrypted data types instead of regular data types, and you can do basically all the usual operations on these encrypted variables. And these variables can also be state, by the way; they can be persistent on-chain. Now, the fhEVM actually comes together with a full runtime framework where validators can jointly decrypt encrypted variables as required by the contract, typically in transactions. And of course, this is transparent for the developer, and we use MPC and ZK for that part to make sure that threshold decryption is secure against all kinds of malicious attacks. Now, of course, FHE is interesting outside of the blockchain space as well, for instance in machine learning. Zama is also building a complete stack dedicated to homomorphic compilation: a generic compiler called Concrete, a Python front-end called Concrete Python, and a high-level machine learning front-end called Concrete ML. With Concrete ML, you can typically replace a model running in the cloud with a homomorphic equivalent which takes encrypted inputs.
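To make the programmable-bootstrapping idea above concrete, here is a purely illustrative toy model, not real TFHE or any encryption at all: a "ciphertext" is just a message plus an explicit noise counter, so you can watch noise accumulate under homomorphic additions and see how a PBS both applies an arbitrary lookup table (a non-linear function) and resets the noise in one step. All names and constants are assumptions for illustration.

```python
# Toy model of TFHE-style programmable bootstrapping (PBS). NOT encryption:
# a "ciphertext" is (message, noise used), so noise behavior is visible.
PLAINTEXT_SPACE = 16   # messages are integers mod 16
NOISE_LIMIT = 100      # past this much noise, decryption would fail

def encrypt(m):
    return {"m": m % PLAINTEXT_SPACE, "noise": 1}

def add(a, b):
    # Linear homomorphic op: messages add, noise accumulates.
    return {"m": (a["m"] + b["m"]) % PLAINTEXT_SPACE,
            "noise": a["noise"] + b["noise"]}

def bootstrap(ct, lut):
    # PBS: apply an arbitrary lookup table (non-linear function) AND
    # return a fresh low-noise ciphertext, in one operation.
    assert ct["noise"] <= NOISE_LIMIT, "too noisy to bootstrap correctly"
    return {"m": lut[ct["m"]] % PLAINTEXT_SPACE, "noise": 1}

def decrypt(ct):
    assert ct["noise"] <= NOISE_LIMIT
    return ct["m"]

# Lookup table for squaring mod 16 -- any function of one input works here.
square_lut = [(x * x) % PLAINTEXT_SPACE for x in range(PLAINTEXT_SPACE)]

ct = encrypt(3)
for _ in range(10):                 # noise builds up under repeated additions
    ct = add(ct, encrypt(0))
ct = bootstrap(ct, square_lut)      # non-linear step, noise reset to 1
assert decrypt(ct) == 9
assert ct["noise"] == 1
```

The point of the sketch is the combination: a PBS evaluates any single-input function via its lookup table while refreshing noise, which is why networks of such operations can be chained indefinitely.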
Zama is also building a solution for on-prem inference where the sensitive parts of the model are secured with FHE. And we are also working on confidential training, where models are trained on encrypted data using FHE. I strongly recommend that you give the fhEVM or our Concrete stack a try. Thanks for having me. That was the news you needed to start your journey with homomorphic encryption. We can't wait to see what you guys are going to build with it. A very fantastic Lab Week to you all. Enjoy the event and enjoy Istanbul. Be well, bye.

Hello, my name is Boris Mann. I'm the CEO and founder of Fission, along with my co-founder and CTO, Brooklyn Zelenka. Fission specializes in protocol-first applied research and software engineering. You know Fission for our edge computing stack protocols: UCAN, WNFS, and IPVM. And of course, our little mascots. Thank you. We've been really pleased to see UCAN and WNFS adoption, our first two protocols that we've been working on, and the influence is all over the Protocol Labs network and beyond. Some of the teams that are integrating these or have implemented their own versions, much like Kubo is an IPFS implementation, are Iroh, Subconscious, Banyan, Web3.Storage, Capyloon, Fileverse, and Functionland. Plus collaboration in standards groups like the Chain Agnostic Standards Alliance, and soon some new work in the W3C. And last year I stood on stage and talked to you about starting our work on the IPVM, the interplanetary virtual machine, adding computation to IPFS. This week, we're shipping our reference implementation of the IPVM, Homestar. It's written in Rust. Thank you very much. It's focused on WebAssembly functions. libp2p provides a networking layer, and it works alongside Kubo and other IPFS implementations to access content-addressed data.
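A central trick behind IPVM-style compute over content-addressed data is that an invocation (function plus arguments) can itself be hashed into an ID, so identical computations are deduplicated and their results ("receipts") can be replayed by any node. The sketch below is a toy of that idea only, assuming plain Python lambdas in place of Wasm modules; Homestar's actual execution model, with UCAN authorization, is much richer.

```python
import hashlib
import json

# The functions a node is willing to execute (stand-ins for Wasm modules).
FUNCTIONS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
receipts = {}  # invocation CID -> cached result ("receipt")

def invocation_cid(fn, args):
    """Hash the invocation itself (function name + arguments) into an ID."""
    blob = json.dumps({"fn": fn, "args": args}, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def run(fn, args):
    """Execute once per unique invocation; identical calls replay the receipt."""
    cid = invocation_cid(fn, args)
    if cid not in receipts:
        receipts[cid] = FUNCTIONS[fn](*args)
    return receipts[cid]

assert run("add", [2, 3]) == 5
assert run("add", [2, 3]) == 5   # second call served from the receipt cache
assert len(receipts) == 1
assert run("mul", [2, 3]) == 6
```

Because the cache key is derived from content rather than from who asked, any node in the network that has seen the same invocation can serve the result.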
Other IPVM implementations and integrations might focus on Docker, like Bacalhau, or be specialized for certain kinds of compute workloads, as we work to connect computation all over the open web. Now, I'm also happy to present the Everywhere Computer. Our journey has always been building an entire edge stack of identity, data, and compute, and we now have a single product that uses all of our protocols and applied research. The Everywhere Computer is a network of published Wasm functions and computation, along with payment. You can run your own instance or rent from us, write functions in Rust, JavaScript, and soon any other language that compiles to WebAssembly, and extend it with any UCAN-enabled service. Like AWS combines a huge network of systems, UCAN as a decentralized auth layer means that all of us in the network can mix and match and connect in a decentralized way. There are hard problems still to solve. We're thinking about IPFS-based payment channels, trusted execution environment support, and integration into multiple Web3 ecosystems. For these and other topics, I hope you'll join us. We're working on a W3C community group with interest from teams at Intel and Google. Join us this Thursday at our workshop at IPFS Connect to go hands-on and run your own piece of the Everywhere Computer. Thank you very much.

Hello, my name is Raúl, and I'm going to talk about Interplanetary Consensus, a project that's moving from R&D into productization these days. IPC is a scalability framework that unlocks planetary-level performance for Web3 while delivering frictionless developer experiences to power a new generation of apps that break Web2 limits. I'm sorry to say that we in the Web3 space have a problem. Web3 won't reach consumer market penetration until it can serve the volumes of the internet and beyond.
Amongst other things, this involves solving for a crazy-high number of transactions per second, local finality, high tunability, elastic scalability, partition tolerance, and more. And IPC aims to do just that. But unfortunately, we have another problem. Non-Web3 devs are used to scaling their deployments instantly using cloud infra like AWS and Azure. In fact, with services like Vercel, Firebase, or Netlify, one can deploy entire apps without ever worrying about infra. It's also super simple. In contrast, as soon as a developer scratches the surface of Web3, they're confronted with a massive hairball of alien concepts and terms. It doesn't help that Web3 itself is extremely fragmented, new concepts keep appearing practically every day, and the entropy of the space just keeps expanding. This is all very daunting to most people, and it severely rate-limits the adoption of Web3. The tech itself is exposing its internal wires, and we're asking devs to figure it all out. And more often than not, they get confused and walk away. But there's hope. Subnets are the essential unit of scalability in IPC. Their design reconciles Web2 and Web3 architectures. Subnets can scale horizontally and vertically by spinning up and closing down instances, and subnet trees can organize users and nodes by geography, latency, or quality of service, much like internet regions and data centers work today. And IPC lets subnets intercommunicate so they can work together seamlessly. And guess what? In Web2 cloud terms, the things I just said translate to elastic compute, load balancing, regions and availability zones, service meshes, and message queues. An IPC topology can comprise many classes of subnets: things like app-specific subnets, geographical subnets, and infrastructure subnets, like databases or caches. These work in concert to deliver highly scalable, dynamic, decentralized apps.
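The subnet-tree picture above can be sketched as path-addressed subnets where a cross-subnet message climbs to the lowest common ancestor and then descends, much like routing between cloud regions. This is a toy model of the topology only, with made-up subnet names; it says nothing about IPC's actual message format or checkpointing.

```python
# Toy IPC-style subnet tree: subnets are path-addressed ("/root/eu/gaming"),
# and a cross-subnet message travels up to the lowest common ancestor (LCA),
# then down to its destination.
def route(src, dst):
    s = src.strip("/").split("/")
    d = dst.strip("/").split("/")
    # Find the depth of the lowest common ancestor.
    i = 0
    while i < min(len(s), len(d)) and s[i] == d[i]:
        i += 1
    # Climb from src up to the LCA, then descend to dst.
    up = ["/" + "/".join(s[:k]) for k in range(len(s), i - 1, -1)]
    down = ["/" + "/".join(d[:k]) for k in range(i + 1, len(d) + 1)]
    return up + down

# A message between sibling branches passes through the shared parent:
assert route("/root/eu/gaming", "/root/us/storage") == [
    "/root/eu/gaming", "/root/eu", "/root", "/root/us", "/root/us/storage"
]
# A message down one branch never leaves that branch:
assert route("/root", "/root/eu/gaming") == ["/root", "/root/eu", "/root/eu/gaming"]
```

The useful property is locality: traffic between subnets in the same branch never burdens the rest of the tree, which is what lets regional or app-specific subnets scale independently.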
Before we move on, I wanted to send a massive shout-out to the brilliant ConsensusLab team for designing and developing IPC subnets. Now, practically everything about subnets can be customized. Thanks to the FVM, subnets get EVM compatibility for free, and you can deploy specialized logic as Wasm actors written in supported languages. You can add syscalls to escape to native land, and integrate with GPU, crypto, or ZK primitives. You can use your own native coin. You can bring entire FVM runtimes. And in the future, you'll be able to bring your own consensus, integrate with storage, and more. And while the DX is still in its early stages today, we're super committed to making all of that simple to use. And all of this is coming to Filecoin pretty soon. We're developing much of this tech iteratively, and we're launching things as we go. This week, we will be launching a hosted IPC subnet called Mycelium on the Filecoin Calibration network. And you can use this subnet to get started quickly. In Q1, we plan to declare GA, and we'll launch Mycelium on Filecoin mainnet. This will instantly supercharge Filecoin with L2 capabilities, and will enable things like compute and AI/ML jobs and retrieval networks, built by our friends at Fluence, Lilypad, and Spark. In M4, we plan to integrate on-chain programs with content-addressed data and to support interpreted Wasm actors. And as the tech itself evolves, we plan to bring regional subnets to Filecoin to scale the storage network itself. So that's all from me today, but before I leave, I wanted to plug the events we'll be having later this week. I wanted to invite you all to learn more about IPC on Thursday. So make sure to scan the QR code to register, and see you there.

Hello, everyone. I'm Kholib Kanonikov. I'm a team lead at Celestia, and today I'm going to present our first modular network. So let's first dive into what a modular blockchain is.
And it all boils down to the definition of what a monolithic blockchain is: it integrates all the core blockchain functionalities into one single entity. Modular blockchains, on the other hand, are a network of interconnected chains where each of them is focused on one particular purpose. More formally, a modular blockchain is a blockchain that fully outsources at least one of its core components to an external chain, whether that's execution, settlement, data availability, or consensus. Now let's look at it from a security perspective. Unlike a normal monolithic blockchain, where in order to verify it you need to download all of its transaction data and execute the transactions one by one, in the modular world you can download only a few samples from the whole history, plus a proof that attests to the validity of all the data. This drastically reduces the amount of work you need to do to verify the chain and enables trust-minimized access from your personal devices, like mobile phones. It also eliminates the need for trusted intermediaries like RPC providers. Scalability-wise, modular blockchains bring mind-blowing new properties. With data availability sampling, the more light nodes you have on the network, the bigger and more secure the block space is. Literally, the more users there are on the network, the more transaction throughput there can be. And by offloading execution to the rollups, we entirely remove the computation bottleneck from the base layer. Additionally, it brings an unlimited variety of choices for developers and the chains they're building. They can choose from various execution environments, like the FVM, EVM, SVM, Cosmos SDK, you name it. You can choose any trust assumptions you want, any security properties. You can bridge to any ecosystem. Long story short, you can build whatever. And Celestia mainnet has been live for two weeks now, running smoothly, and I'm really excited to share this moment with you. Thank you guys, thank you.
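The sampling idea described above — a light node verifying a block by checking a few random shares against a commitment instead of downloading everything — can be sketched with a plain Merkle tree. This is a simplification: Celestia actually combines 2D erasure coding with namespaced Merkle trees, so treat the code below as the Merkle-proof half of the story only.

```python
import hashlib
import random

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commitment to all shares in a block."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    """Sibling hashes from leaf to root; bool marks a right-hand sibling."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        proof.append((level[sib], sib > idx))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sib, right in proof:
        acc = h(acc + sib) if right else h(sib + acc)
    return acc == root

block = [f"share-{i}".encode() for i in range(8)]
root = merkle_root(block)
# A light node samples a few random shares instead of downloading the block:
for i in random.sample(range(8), 3):
    assert verify(block[i], merkle_proof(block, i), root)
assert not verify(b"bogus", merkle_proof(block, 0), root)
```

With erasure coding added on top, withholding even a small part of the block forces a dishonest producer to withhold so much that random samples catch it with high probability, which is why more light nodes sampling means larger blocks can be made safe.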
And this is our ecosystem roadmap. All these projects are building on us currently, and they're contributing to this modular world. To highlight a few: we work with the OP Stack, we work with Arbitrum, with Manta Network, and beyond that, we introduce a new type of sovereign rollups. And I wanna express gratitude to Protocol Labs as well, because without them, none of this would have been possible. Modular blockchains need modular software. And our ambition is to build one-gigabyte blocks for one million rollups via one billion light nodes. Thank you so much.

Hi, everybody. I'm Brad Holden and I lead PL Ventures. And I have the honor of introducing you to a selection of the amazing startups building in the PL Network. We have 250 teams from the PL Network represented here in Istanbul, spanning the research-to-deployment pipeline. They're hosting over 40 events. I encourage you to check out the Lab Week website, register for the ones that interest you, and connect with these amazing teams. Our network is shaping the future of computing. We have teams pushing the boundaries in ZK proofs like StarkWare and Lurk, enabling new use cases for NFTs like NFT.Storage and OpenSea, and teams working at the intersection of AI and blockchain like Gensyn and BagelDB, and DePIN and IoT teams like WeatherXM, who you heard about, and Spexagon. Teams in blockchain security like Anjuna and Cryptosat. Teams enabling new use cases and new ways for people to interact in gaming and in the metaverse, like Mona and DoubleJump. We have many teams building decentralized identity, like Spruce and Privy, which has powered the recent run-up of FriendTech. We've long supported public-goods funding teams like Gitcoin, Cariba, and Archeological. We have teams enabling new ways for you to interact via social networking, like Chingari and CyberConnect. We have teams building compute over data, like our Compute over Data working group and Expanso, which you heard about from Mali.
We have teams at the intersection of decentralized science, like DeSci Labs, Molecule, and DeSci Foundation. And we have many teams pushing the limits of consensus and scalability like IPC and Zondax, enabling new use cases with their infrastructure like Lava and Fleek, and building developer tooling like Lit and Goldsky. And teams in DeFi and fintech like Magna and Tefra. And now you get to hear from a selection of these amazing companies as they tell you what they're building.

Hi, everyone, my name is Justin Malillo and I'm the founder and CEO of Mona. Mona provides tools that transform the internet into a vibrant, immersive experience where humanity can connect and thrive. Our vision is to become the foundational substrate for all immersive assets across augmented reality, virtual reality, games, and any other browser-based experience in a blockchain-based future. Today, humanity is spending more time connecting and interacting online than in person. Humans will continue to create, consume, and connect in an increasingly immersive way. We're already seeing this with hundreds of millions of users who are each spending hours daily inside of MMOs and immersive games like Roblox and Fortnite. The problem is existing Web3 platforms look terrible and the experience is just not great. They cost a lot of money, and you have to buy land. There's also a lack of sufficient tooling for creators to realize their vision. Existing Web2 platforms that do have good user experiences come with platform lock-in and walled gardens. Hundreds of millions of dollars are spent each year on in-game assets. The reality is, if those games shut down, everyone loses their assets. That's billions of dollars in value lost. Mona's toolset solves all of these problems. With Mona, creators build high-quality 3D assets and upload those assets to IPFS. They can tokenize and authenticate these assets on Ethereum, and soon on the Filecoin Virtual Machine.
They can use and connect assets in a high-quality, immersive, MMO-like experience at monaverse.com, and they can use these assets on any platform or game engine. With Mona, you have true interoperability, composability, and ownership of your immersive assets. We are leading a creative revolution. In partnership with Protocol Labs, to date we've awarded over $400,000 to creators in build-a-thons like the Filecoin Forum and the Renaissance of the Metaverse. We launched the first platform for free, high-quality, immersive world creation accessible over the web. Monaverse is also accessible via web AR on mobile devices. To date, over 10,000 creators have joined our community, who have built and uploaded over 15,000 worlds and immersive assets to IPFS. We launched the first marketplace for decentralized 3D assets. We launched a new SDK, enabling custom games and rich interactive experiences via visual scripting, and we've initiated major partnerships with some of the world's leading artists and communities. Here are some of the next steps we're taking to accelerate the adoption of the immersive internet. We're empowering more developers with Mona's IDE for the open metaverse. We're integrating Mona assets with more games and apps. We're opening up minting of 3D assets, avatars, and worlds on the Filecoin Virtual Machine. We're also exploring using Saturn as a decentralized content delivery network in an effort to decentralize more of our stack. We're also participating in more build-a-thons, hackathons, and game jams, introducing more crypto-economic incentives for creators, developers, and players, and introducing text- and image-to-3D creation workflows. Oh, and one more thing. Soon, we're providing easy access to create and consume decentralized immersive content stored on IPFS via Quest 3 and Apple Vision Pro. Protocol Labs and Mona, building the immersive internet together. Check us out at monaverse.com. Follow us on Twitter at Monaverse.
You can also follow me on Twitter at JustinMMalilo. And if you'd like to learn more or you wanna work with us, please feel free to reach out to me via email at justin@monaverse.com. Thanks so much, everyone, and have a great week.

Hi, everyone. I'm Evgeny, co-founder of Fluence, where we are working on a decentralized serverless compute platform. We are aligned with Protocol Labs in the vision that storage and data are much better with compute, but we also think that compute itself is much better when it's serverless. Serverless is the fastest-growing segment of the current cloud computing market. It's estimated at around $10 billion per year for now, but it's growing very, very fast. And developers love serverless, customers love serverless, because it gives you an easy way to work with compute capacity without thinking about physical servers and their limitations. You get an elastic experience, scaling, fault tolerance, and a lot of these features. And our mission is to bring this experience to Web3, to the decentralized stack. We believe in some important principles for decentralized compute: it should be fault-tolerant, it should be verifiable and auditable, it should be affordable in terms of price, and it should be compatible with all kinds of data and data sources. And of course, it should be serverless, so developers can easily build on it. The Fluence ecosystem currently is a serverless developer platform that brings developers this familiar experience in terms of functions, workflows, triggers, timed execution, and so on and so forth. It's also an open marketplace of compute providers, so anyone can join, bring their compute capacity to the network, and monetize it. And it brings a lot of incentives for bringing in capacity. There are proofs that the computation was done right, so you can verify the computation. There are payments in stablecoins and a lot of other features.
And we are integrating different data sources. We have IPFS already, we have EVM integration already, and we are working towards Filecoin and the integration of databases and Web2 storage as well, because people already have some data there and they wanna migrate step by step. The great news I have today is that Fluence mainnet is coming; it's very close. We are launching mainnet this winter. So this is what I'm very excited about. Thank you. And I have several calls to action for people in the audience. If you're a Filecoin storage provider, talk to us to learn how to monetize your compute power. If you're a builder, try deploying your code on the Fluence testnet. Investors, talk to us. Everyone else, let's connect. And the last slide: this is us. We've been in the space for a while, we raised money from top Web2 investors, and we have a lot of ecosystem partners and a lot of interesting content to check out. Thank you.

Hey, everyone, my name is Jonathan and I work on Glif. Really bummed that I'm not giving you this presentation in person, but instead just recording a quick progress update and giving you a sense of what you can expect in the near term from Glif. So since I don't have much time, I'm gonna jump right into some slides. And to preface, just in case you haven't heard of Glif: Glif has actually matured and evolved quite a bit since its inception in 2019. At first, Glif was an umbrella brand that represented all these different critical apps and tools that we were building for Filecoin's mainnet launch. More recently, though, those apps and tools have been combined into one app, and we've also put out a unique DeFi protocol for Filecoin. And this presentation is mostly centered around updates on our DeFi work. So first, I think it's important just to quickly talk about the challenges.
Prior to the launch of the FEVM, which is Filecoin's user-programmable smart contract layer, there was no permissionless and easy way for Filecoin token holders to put their tokens to work with storage providers on the network and earn rewards for doing so. And on the flip side, there was no easy way for a storage provider to get access to Filecoin for use in pledging on the network. When the FEVM launched and allowed developers to deploy custom smart contracts, it created a new opportunity to solve these problems and inefficiencies with smart contract infrastructure like DeFi. And that's exactly what we intended to do at Glif. I don't have time to go through all the bits and pieces of what we built, but at a very high level you can think of it as a lending and borrowing protocol built on Filecoin. To give you a sense of our progress so far: in just under six months, 7 million FIL has already been deposited by token holders, and 4.5 million of that has already been deployed to storage providers. The growth has been really impressive. We feel like we're doing things that are helpful for the network and we're excited to keep putting in the work; there's a lot left to do. I want to talk briefly about the logic behind the DeFi protocol. In our research, we learned very early on that Filecoin token holders really didn't want to lose their Filecoin. So one of the top priorities of this DeFi protocol is to protect both storage providers and token holders from losing their FIL, and we try to mitigate risk in as many ways as possible. This slide lists some of those mechanisms; I don't have time to go through each one. Looking forward, there are two big things on the horizon that we're excited to ship. The first is floating rates: right now, when a storage provider gets Filecoin from the DeFi protocol, they're charged a fixed flat fee.
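At a high level, the deposit-and-deploy flow described here can be sketched as a shared liquidity pool. This is a toy illustration only, not Glif's actual contract logic; the class name, the 2% flat-fee figure, and all method names are assumptions for the example.

```python
# Toy shared-liquidity pool: token holders deposit FIL, storage providers
# borrow it for pledging and pay a fixed flat fee. Illustration only;
# NOT Glif's real contract logic. The 2% fee is an invented placeholder.

class LiquidityPool:
    def __init__(self, flat_fee_rate: float = 0.02):
        self.total_deposits = 0.0           # FIL supplied by token holders
        self.total_borrowed = 0.0           # FIL deployed to storage providers
        self.flat_fee_rate = flat_fee_rate  # fixed fee charged to borrowers

    def deposit(self, amount: float) -> None:
        self.total_deposits += amount

    def borrow(self, amount: float) -> float:
        """Deploy FIL to a storage provider; returns the fee owed."""
        available = self.total_deposits - self.total_borrowed
        if amount > available:
            raise ValueError("insufficient liquidity in pool")
        self.total_borrowed += amount
        return amount * self.flat_fee_rate

    def utilization(self) -> float:
        """Share of deposited FIL currently deployed."""
        return self.total_borrowed / self.total_deposits if self.total_deposits else 0.0
```

With the figures from the talk, 7 million FIL deposited and 4.5 million deployed, utilization works out to roughly 64%.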
And the floating rate will be a more equitable rate charged to storage providers based on Filecoin's macro economy. The second feature, duration vaults, allows a token holder who deposits Filecoin into Glif to specify how long they'd like to lock up that deposit, and in doing so potentially earn more rewards. And lastly, there are going to be a lot of really exciting updates to our website, so keep checking back to see different stats and new features that will arrive over the next few weeks and months. Thank you for listening to this quick presentation. If you have any questions or want to hear more about any of this, you can hit us up on Twitter at glif.io, go on our website where there are links to our Discord and Telegram, or hit me up directly on Slack. Have an amazing time at PL Summit, and I hope to see you next time in person. Thank you. Hello everyone. We're the team contributing to the protocol Rarimo, which is focused on the portability aspect of identities. Let me explain that. Fast forward, and every dApp that gets built on different blockchains will be an identity issuer in some capacity. As for the framework and technology, there's a consensus within the community, I guess, that the aspects of our identity that are private and sensitive will use verifiable credentials for storage, while some of the more public-facing attributes will use SBTs and NFTs. But if we zoom out, we will be doing what we've always done: fragmenting the identity market due to different standards, due to off-chain methods of verification, and due to the different ecosystems and blockchains. So interoperability is an important aspect from day one, and that's what Rarimo is: a cross-chain layer that enables any dApp to have instant access to aggregated identity issuers.
No matter what standard they're using, dApps can streamline the verification process on the chain of their choice. Remember how DeFi needed access to pricing data and cross-chain liquidity for it to work? We're going to need the same type of infrastructure for identities to work at scale, to deliver on the promise given to end users: portability and reusability of our identity attributes across different ecosystems. Four months after going live, the protocol already aggregates and works with different issuers working on proof of humanity. Even though they use different verification methods and standards, they're streamlined and accessible on community-building platforms. What does that mean for an end user? You just choose a verifier of your choice, and then you can seamlessly, without any friction, participate in cross-platform quests or cross-company reward programs. We keep pushing the boundaries of where you can take interoperability from an end-user perspective, and our recent collaboration with MetaMask basically merged the two worlds of private on-chain verifiable credentials and the crypto world under the single set of keys and the wallet we're all familiar with. I think this is just one of the many use cases that interoperability brings to the space. And yeah, thank you. Just find me around; happy to talk about the future of decentralized identities. Thanks. Finance in the next few decades is going to merge with computing in a pretty deep way. Web3 and crypto represent this amazing opportunity to rewrite the rules of how economies and governance work, combining computing and the power of the computing platform. But the finance market is much, much larger, and clearly we are missing capital markets. At Secured Finance, we want to extend maturities out longer, let's say five years.
I realized change needed to happen when I saw the Libor scandal, where a group of bankers could manipulate the benchmark interest rates governing a market measured in the trillions of dollars. That's the reason I decided not to go back to traditional finance. What's missing in the DeFi space is clearly the yield curve, which is a continuous plot of interest rates over the time axis. We can build a yield curve for any currency as long as there is strong economic demand. What first impressed me about Secured Finance, I think, is the team's level of professionalism and knowledge; I worked at Goldman Sachs for 11 years. GSR is a trader, investor, market maker and ecosystem partner. I think the future is very bright for institutional entry into crypto. The user won't merely see one number; they'll get feedback on what their yield would be given different risk factors, and they'll be able to produce a lot more diagnostics on the success of their lending profile. Secured Finance is for many different users, from retail to institutional. By combining blockchain technology and traditional finance, we can actually create a better platform and a better world. The future of finance is here. Hello everyone. Thank you very much. Our mission is to democratize finance for all, and our on-chain bond market enables you to borrow or lend money from global investors. Today I have two significant announcements to share. The first is the publication of the IPFS and Filecoin textbook in Japan. Originally published in Chinese, it took me three years to translate, and it's now available in Japanese bookstores. I'm thrilled to see it positioned alongside the esteemed Mastering TCP/IP textbook. Thank you. And the second one is massive: finally, finally, we're going live on Ethereum mainnet. Starting December 15th, we will begin with the global Itayose for book building.
After two weeks, we will launch the world's first truly decentralized, fully on-chain global bond market, a crypto bond market. You can borrow or lend Filecoin at the best rate. So please join us on December 15th; we eagerly welcome you at the DeFi event this Wednesday. Thank you. In a decentralized internet, how do we know what's fake? What's the original? What's been edited? Hello, everyone. I'm Sophia from Numbers Protocol. We are building a decentralized network to ensure provenance for all types of creative work created by humans and AI. With the rise of generative AI, generative fakes are going to be part of our everyday life. Such as this: an image of an explosion in Washington, DC, circulating on social media and causing the stock market to dip. And this one, definitely a big one, right? And how about these faces? Actually, none of them exist; they are all AI-generated photos. Doesn't this sound scary to you? We really need a way to tell which ones we can trust and which ones not. That's why digital provenance becomes so crucial in this era, to help us combat misinformation and, more importantly, to protect creators' rights. To build real trust, we need an open, decentralized and user-owned network. Introducing Numbers Protocol. At Numbers, we support brands in authenticating digital media and help businesses build trustworthy services with better security and efficiency. With our provenance solution, Capture, we are with you in every step of the process, from Create to Protect to Publish. To facilitate evidence integration, we offer a comprehensive suite of solutions, including Capture Cam for the mobile interface, Capture Dashboard for the desktop, Capture SDK to develop your own interface, and Capture Eye, a widget to embed on your website to showcase provenance data and enable monetization directly on your site. We are trusted by partners across the art, music, NFT, metaverse and AI industries.
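Stepping back, the core provenance primitive in a system like this, registering a media file's content hash together with creator and time so anyone can later verify the file is unchanged, can be sketched minimally as follows. The in-memory dict stands in for a blockchain, and all names are invented for illustration; this is not Numbers Protocol's actual implementation.

```python
# Toy sketch of media provenance: register a content hash with creator
# metadata, then verify a file against the registry later. A real system
# would anchor records on a blockchain; here a dict stands in for it.
import hashlib
import time
from typing import Optional

registry = {}  # content hash -> provenance record

def register(content: bytes, creator: str) -> str:
    """Register a creative asset and return its content hash (the ID)."""
    digest = hashlib.sha256(content).hexdigest()
    registry[digest] = {"creator": creator, "timestamp": time.time()}
    return digest

def verify(content: bytes) -> Optional[dict]:
    """Return the provenance record if this exact content was registered."""
    return registry.get(hashlib.sha256(content).hexdigest())

photo = b"raw pixel data of the original photo"
asset_id = register(photo, creator="sophia")

assert verify(photo)["creator"] == "sophia"  # original checks out
assert verify(photo + b" edited") is None    # any edit breaks the match
```

Because the ID is derived from the bytes themselves, even a one-pixel edit produces a different hash, which is what makes "what's the original, what's been edited" answerable.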
We are also adopted by award-winning photographers around the world. We work with entities like Reuters to document the 78 days after the US presidential election, so people can see the true photos in the news, and with Rolling Stone to preserve war crime evidence, so our future generations can access the truth. To support businesses and brands of different sizes and industries, Numbers' service is very flexible and scalable. We are GDPR and C2PA compliant; we run our own blockchain and also support multiple blockchains. We are an international team with diverse backgrounds, trusted and supported by investors and advisors across industries. I'm Sophia from Numbers Protocol; let's work together to make the internet a more trustworthy place. Thank you. Hi everyone. My name is Alessio Quaglini. I'm the co-founder of Hex Trust, and I'm also the founder of a couple more crypto startups: two DeFi protocols, one called Clearpool Finance, a decentralized marketplace for credit and loans, and Synnax AI, a decentralized intelligence for credit ratings on chain. Today let's talk about Hex Trust. Hex Trust is one of the leaders in digital asset custody. We were born around six years ago in Asia; our home originally was Hong Kong. At the peak of the market, we used to have more than $5 billion in assets under custody, a little bit less today. We are an institutional player with around 250 institutional clients all around the world, and we have been funded by a number of blockchain foundations. Actually, I just noticed that Protocol Labs is not on this list, which is very bad, because they were one of our Series B investors. We have raised to date more than $100 million in funding. So what do we do?
We serve institutional investors, basically all the players in finance in the blockchain world, digital asset organizations, DAOs, foundations, et cetera, and digital asset service providers, which are the ones providing services to institutional investors: exchanges, brokers, liquidity providers, et cetera. What do we do? We provide the services that an investment bank would provide in traditional finance. Originally, we started as a digital asset custodian; today, we provide a platform that gives you access to DeFi, staking, et cetera, as well as all the global markets services: OTC, on-ramp, off-ramp, market-making, and some structured solutions. It's also important that we've just launched our asset management business with our venture fund, so whoever is interested in funding, we are ready to start deploying. Lastly, we don't just do financial services; we also build specialized, tailor-made solutions for blockchain players, and I wanted to bring up a couple of examples. Together with Ripple and one of the Taiwanese banks, Fubon Bank, we developed a platform for the e-Hong Kong dollar, the CBDC in Hong Kong, which we just launched last week. And for Stasis Euro, we became one of the first nodes that allows bridging, on-ramping, and off-ramping for a stablecoin, basically the oldest stablecoin denominated in euros. I'll be here for all the events, so if anybody's interested in talking to me, just find me. These are my contacts, and thank you all for being here. Hello, everyone. My name's Matthew Fralick, and today I'm excited to introduce you to the movements portion. Before we start, what is a movement? A movement is an organized effort to promote and achieve a shared goal. This is bigger than any one individual or entity. Protocol Labs generates breakthrough innovations ourselves, but we also create impact by participating in broader movements.
Two movements that we'd like to highlight today in particular are Decentralized Science, which is driving permissionless and open innovation in research, and Public Goods Funding, which is building better systems and mechanisms to fund our collective goods and commons. Protocol Labs contributes to and partners with some of the leading projects in these spaces. While this is just a small selection that we have up here, collectively these projects have directed tens of millions of dollars to support thousands of projects that are creating impact in these movements. To highlight a few: we've contributed to Funding the Commons, a Public Goods event series and community which is leading the Web3 movement; Hypercerts, a data interoperability layer for impact funding systems; Open Source Observer, a system for identifying and valuing the impact created by open source software and beyond; and also Archaeological, a legal and financial framework for better funding permissionless Public Goods. Beyond that, we also partner with some of the leaders in the space that we're very excited about today. We have Gitcoin, which has been a pioneer in the Public Goods funding space for years; VitaDAO, a pioneer in the longevity and decentralized science space; and DeSci Labs, which is creating a decentralized platform for the science process. So without further ado, I'd like to introduce a few of these projects up next. Hello, I'm David Casey from Funding the Commons, and today I'll give a short summary of the status of Public Goods funding in 2023. Traditional methods of funding are alive and well, unaffected by the market fluctuations of Web3. This includes academia, government, and nonprofits. Within Web3, there are both decentralized DAOs and forums, as well as more traditional ecosystem grants. Retroactive funding is growing, led by Optimism's Retro PGF and Hypercerts, as well as Drips' dependency funding.
Crowdfunding is evolving, both donation-based and token sales. And venture capital, in specific cases, does intersect with Public Goods. The landscape is shifting. There is less funding available for Public Goods today than there was a couple of years ago, and it's more specifically targeted at particular ecosystems, in other words, network goods. However, there's growing interest amongst protocols across Web3 in funding Public Goods, and increasingly sophisticated infrastructure built to fund them that's open source and being deployed across the EVM world. So without further ado, I'll give you a little summary of Funding the Commons itself. Funding the Commons is a Protocol Labs-incubated event series that's been going strong for two years now, with seven events, five in person and two virtual, with over 2,000 participants and hundreds of speakers, whose talks are recorded on IPFS, by the way. The event brings together builders, funders and academics across Web2, Web3, philanthropy, government and nonprofits. We're bridging Web3 to the greater world of public funding. So what is our focus? We're building an incubation ecosystem for Public Goods funding infrastructure by connecting developers, Web3 protocols and teams, and funders, both philanthropic and venture, to develop and deploy Public Goods funding infrastructure, use that very same infrastructure, run funds through it, fund the next wave of builders, and so on. So what have we been up to this year? We did our keynote conference in Paris at the Sorbonne, another conference in Berlin in September, and we also ran a virtual hackathon for Public Goods funding infrastructure. I'm really excited to tell you that we ran our first residency for open-source builders of Public Goods: we hosted 40 builders for over a month in Berlin, and we're really excited to keep going next year. We'll be here in Istanbul, participating in two events: on Wednesday at Depact and Thursday morning at Schelling Point.
And to finish the year, we'll be running Funding the Commons Taipei on the 9th and 10th of December. So what's next? In 2024, we have two conferences planned, one in the San Francisco Bay Area and another concurrent with Ethereum DevCon in Southeast Asia towards the end of the year. We'll continue incubating Public Goods funding infrastructure projects through conferences, residencies, hackathons and working groups. These are some ways that you can get involved, and if you see me around, I'd love to talk. Thank you so much. Hi, everyone. I am Sophia Do, and I'm excited to be here with you all today to introduce Hypercerts and their role in revolutionizing impact funding systems. Hypercerts are an open data layer for tracking and rewarding positive impact. A hypercert represents a unique impact claim people care about and wish to fund before, during or after that impact occurs. This is important because funding impact is hard. Right now, what we value does not equal what is profitable. In our current system, funding flows towards projects with greater shareholder return rather than projects that create a lot of value. This is because mechanisms for funding impact, such as grants, bounties and prospective funding, are not only inefficient and risk-averse, they're also severely fragmented. That's where Hypercerts come in. Hypercerts create interoperability by serving as a single, open, shared, decentralized database for impact funding mechanisms. At the foundational level, a hypercert is an ERC-1155 semi-fungible token whose ownership is fractionalizable and transferable. It contains information on who did the impact, what they did, and when they did it. A year ago, Hypercerts started out as just an idea for creating feedback loops for funding public goods. Since then, we've launched on various networks, collaborated with different organizations, released tooling, and piloted different hypercert applications.
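As a rough sketch of the data model just described, an ERC-1155-style claim recording who/what/when, whose ownership units can be split between holders, here is a toy Python version. The field names, unit count, and splitting logic are illustrative assumptions, not the actual Hypercerts contract.

```python
# Toy model of a hypercert: an impact claim (who/what/when) whose
# ownership is held in fungible units that can be split and transferred,
# mimicking ERC-1155 semi-fungible semantics. Illustration only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Hypercert:
    contributors: List[str]            # who did the impact
    work_scope: str                    # what they did
    work_start: str                    # when it started (ISO date)
    work_end: str                      # when it ended
    total_units: int = 10_000          # fractionalizable ownership units
    owners: Dict[str, int] = field(default_factory=dict)

    def mint(self, to: str) -> None:
        """Initial owner receives all units."""
        self.owners = {to: self.total_units}

    def transfer(self, frm: str, to: str, units: int) -> None:
        """Move a fraction of the claim to another holder (e.g. a funder)."""
        if self.owners.get(frm, 0) < units:
            raise ValueError("not enough units")
        self.owners[frm] -= units
        self.owners[to] = self.owners.get(to, 0) + units

cert = Hypercert(
    contributors=["alice"],
    work_scope="open source documentation",
    work_start="2023-01-01",
    work_end="2023-06-30",
)
cert.mint("alice")
cert.transfer("alice", "funder_dao", 2_500)  # a funder takes 25% of the claim
```

The fractional units are what let the same claim be funded before, during, or after the work: different funders can buy different slices at different times while the underlying who/what/when record stays shared.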
Most notably, we've experimented with the creation of hyperboards. These are billboards that dynamically render the ownership structure of one or multiple hypercerts. Hyperboards can be embedded on websites or shared across social media, with the goal of incentivizing and even gamifying continuous funding for these impact claims. While this is just one application of a hypercert, we hope to see the wider ecosystem use this protocol to build tools, applications and integrations for their specific use cases. The idea is simple: replace slow, inefficient, top-down bureaucracies with fast-moving, interconnected funding networks. But this only begins with a collective willingness to experiment with and explore new funding mechanisms. Thank you to all our collaborators for making this possible, and please join us in building a new era for funding positive change. Hi, everyone. My name is Clara Tsao. I am a founding officer of the Filecoin Foundation, and today I'm here to talk about one of my favorite movements in the Protocol Labs network, the Filecoin Orbit community program. So what is Filecoin Orbit? Orbit is a network of volunteers all over the world who help host workshops and hackathons and also post educational content about the Filecoin ecosystem. The Filecoin Orbit program has allowed us to reach a global community of builders that feed into hackathons, accelerators, grants, as well as venture funding. Since the Filecoin Orbit program launched 15 months ago, we've already had tremendous impact: reaching 26,000 builders all around the world, building a volunteer army of 150 ambassadors who have hosted 190 events, and reaching 80 different countries. Filecoin Orbit has also helped us reach the next generation of builders; you can see some of the universities where we've hosted Filecoin Orbit workshops on this slide.
Beyond that, we've also worked with incredible teams in the Filecoin ecosystem to customize content that has been shared by Filecoin Orbit ambassadors. For example, earlier this year, ahead of the Filecoin Virtual Machine launch, we worked with the FVM DX team to customize 30 different workshops, and this has led to over 150 projects building on the FVM. We are so thrilled to be able to work with many of you if this is something you're interested in. The Filecoin Orbit program has also been a great experience for volunteers. Some of them have been able to tap into their local Web3 community through hosting Orbit events, and others have been able to travel around the world to talk about the broader mission of a decentralized web. We've also been able to regionalize content through Orbit ambassadors. For example, this is a workshop in Shanghai hosted in Chinese, and this is a workshop in Ecuador hosted in Spanish. We hosted our first ever Filecoin workshop in Dubai just a few weeks ago, again by a Filecoin Orbit ambassador. We also reached Oxford University in the UK and even as far as Nigeria over here. Beyond organizing local workshops, some organizers have gone on to host even bigger events. For example, FIL Jakarta reached over 700 attendees earlier this year, our largest event in Indonesia to date. We also had FIL Seoul happening in conjunction with Korea Blockchain Week. And upcoming, we have FIL Cape Town happening in just a week, on November 21st, our first event in Africa. And this wouldn't be possible without an amazing Orbit program team. So if you want to sign up to become a Filecoin Orbit ambassador, scan this QR code; we can't wait to have you join the movement. Hello, everyone. I'm Raymond Cheng, and I am a co-founder of Kariba Labs. Along with my founding team here, Carl Cervone and Ruben Gonzalez, we are building Open Source Observer, a tool for measuring the impact of open source software.
We help internet economies measure the impact of open source software contributions on the growth and adoption of their platforms. Think of us as Moneyball for foundations and Moneyball for open source software. Funding today is kind of like baseball in the 1900s: leveraging outdated methods without a clear understanding of what behaviors lead to success. But understanding how to measure impact outcomes, and how to fund them most efficiently, is going to be critical to unlocking the next wave of innovation. We estimate that there's over $30 billion today sitting in ecosystem treasuries that is going to be deployed in the near future, and we want to help deploy it better. The reason is that this is what almost everybody is funding towards: highly qualified, retained developers building engaging applications with users that stick around. How do we know that we're funding the programs that lead to real, sustained growth and not just short-term blips in traffic? With Open Source Observer, we want to bring visibility into what drives real progress. Similar to how data tools like Bloomberg have transformed how traditional capital markets work, we want to bring more efficiency, more transparency, and better decision-making to public goods funding and impact markets. That is why we are so excited to be working with our first flagship partner, Optimism Retro PGF, which is going to be distributing over 30 million OP tokens, over $30 million, to 300-plus open-source projects in the ecosystem. We're so excited and proud to be working with a group of over 100 Retro PGF voters, helping them make better decisions in this month's funding round. Open Source Observer makes it easy for funders to understand both on-chain and off-chain impact in an easy-to-use interface. And we are committed to keeping this tool open source, open data, and open infrastructure, with the goal of keeping this data as widely available to the public as possible.
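To make this concrete, a funder-facing tool of this kind ultimately reduces to joining on-chain and off-chain activity metrics per project and ranking by an explicit, auditable score. The sketch below is a hypothetical illustration; the metric names and weights are invented and are not Open Source Observer's actual methodology.

```python
# Toy illustration of impact measurement: combine off-chain (developer)
# and on-chain (usage) metrics into one transparent, auditable score.
# Metric names and weights are invented, NOT OSO's real methodology.

WEIGHTS = {
    "retained_devs": 0.4,  # off-chain: developers still active after 6 months
    "active_users": 0.4,   # on-chain: addresses interacting with the project
    "dependents": 0.2,     # off-chain: packages depending on this project
}

def impact_score(metrics: dict) -> float:
    """Weighted sum over metrics normalized to [0, 1]."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

projects = {
    "proj_a": {"retained_devs": 0.9, "active_users": 0.2, "dependents": 0.5},
    "proj_b": {"retained_devs": 0.3, "active_users": 0.8, "dependents": 0.1},
}

# Rank projects so funders (e.g. Retro PGF voters) can see exactly why
# each one scored as it did.
ranking = sorted(projects, key=lambda p: impact_score(projects[p]), reverse=True)
```

The design point is transparency: because the weights and inputs are explicit, voters can argue about the methodology itself rather than about opaque rankings.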
One of the things we're most excited about is that, with novel programs like Retro PGF and impact markets, we now have the opportunity to massively transform how we build and fund digital public infrastructure. At Kariba Labs, we're excited at the prospect of fueling the growth of orders-of-magnitude better digital infrastructure and human productivity in the broader economy, which is why, while we're starting with crypto networks today, we envision a broader goal where we can bring in traditional tech companies as well as, potentially, government actors. If you want to learn more, check out our Telegram community at this QR code or follow us on GitHub. Join us in our mission to fund public goods and grow the open-source economy. Thanks. Who loves life? Raise your hand up. Show me how much you love life. Raise both your hands up. Great, great. I'm Laurence Ion, and I'm going to talk about a moonshot effort in the fight against diseases, aging and death itself. So if you want to live, listen up. I have to confess that you're all now infected with a terminal disease. It's called aging. I have it too. My father has it too. Unfortunately, he recently got diagnosed with terminal cancer. So this is life and death. Humans coordinate to achieve amazing things, so I think we need to wake the fuck up and coordinate to not only go to the moon, but also cure aging before this terminal disease takes your loved ones or yourself. And we can do it as a community. Decentralized science is simply a way to fund, execute, and publish science without needing permission from anyone. I'm a founding member of VitaDAO, where we've shown that an online community of over 10,000 people can come together, align around a common goal, and deploy millions of dollars into aging research and spin-out companies, with one of the therapies getting ready for humans. But unfortunately, we cannot do medicine purely online. We need to do real-world stuff.
And the clinical trial status quo is unacceptable. But we can definitely do better, if we come together to establish a better regulatory system, one that isn't monopolistic and paternalistic, and that doesn't result in this invisible graveyard of people dying while waiting for the medicine that would save their lives, just like my father. Fortunately, there is an antidote. We can target aging with drugs. Instead of the status quo of keeping people in a poor state of health for longer, we can actually address the root cause, which is aging itself, and get a real grip on age-related disease. And aging is malleable. I don't know about you, but I would rather be the mouse on the right. We've shown that we can reverse or slow down aging in lab animals with dozens of pharmaceutical interventions. And we can exit the status quo. We can move at warp speed, not just for the COVID crisis; aging is a crisis, and heart disease is a crisis. For all of these age-related diseases, we should move faster. And we can do that. We've shown that new cities can be built to fix a crisis with science, like Los Alamos with the Manhattan Project. But how do you start a new city or country? Well, Balaji came up with this idea of getting an online community aligned and then crowdfunding territory around the world. And as an MVP, we're talking about going one order of magnitude beyond conferences and co-living spaces in both duration and number of people. That's why I got involved in building the first pop-up city, Zuzalu, along with Vitalik Buterin in Montenegro. A few months ago, we gathered 200 people for two months in close proximity: healthy food available, inspired by Bryan Johnson; exercising together; tracking our biomarkers; doing daily cold plunges; and so on. It was quite life-changing for a lot of us.
And so up next, we have Vitalia, another pop-up city running for two months in January and February on the Caribbean island of Roatan, where we're going to coordinate and seed a permanent city aligned around making death optional. It's going to become a sandbox for a better regulatory framework, removing bottlenecks so that the world can watch, incorporate what works, and throw away what doesn't. So come escape the winter in paradise: check out the website, sign up, and join us. Thank you. Hello, everybody. My name is Edvard Hibbenet. I do protocol architecture at DeSci Labs. Together with the DeSci Foundation, we develop tools for a radically transparent scientific record. This is something that's riddled with issues today. For example, there is a widespread reproducibility crisis because publications don't include the artifacts that are necessary to reproduce their results. We're also lacking reliable PIDs, persistent identifiers: we need fine-grained addressing of research artifacts. It should also be possible to assign granular credit, not just for publishing papers. To help solve these issues, we've developed DeSci Nodes. This is an application where researchers can create persistent publications. That means joining the paper with its code and data into what we call a node, and then deterministically addressing any component in that tree. We just released a new version of DeSci Nodes last week, and this is how it looks. Here we see the node home for a publication made by a NASA researcher in planetary science. You can explore all of the content of the publication, including the code she used to analyze the data, and you can see how it changed over time. We're also building out support for social annotations and attestation rewards. I'm also happy to announce that we just released documentation for a proposed protocol that acts as the infrastructure for the future of scientific publication.
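As a toy illustration of the "node" idea, joining paper, code and data into one tree where every component gets a deterministic address, here is a minimal content-addressed tree in Python. This mirrors the general IPFS-style technique, not DeSci Nodes' actual implementation; all names are invented.

```python
# Minimal content-addressed tree: every file and directory gets a
# deterministic ID derived from its content, so a publication "node"
# (paper + code + data) and each component can be addressed precisely.
# Illustrative of the general IPFS-style idea, not DeSci Nodes itself.
import hashlib
import json

def address(obj) -> str:
    """Deterministic ID: hash of a canonical JSON encoding."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]  # shortened for readability

def make_node(tree: dict) -> dict:
    """Recursively address a tree of {name: file-content-str | subtree}."""
    links = {
        name: make_node(child) if isinstance(child, dict)
        else {"id": address(child)}
        for name, child in tree.items()
    }
    return {"id": address({k: v["id"] for k, v in links.items()}), "links": links}

publication = {
    "paper.pdf": "paper bytes v1",
    "code": {"analysis.py": "print('hello')"},
    "data": {"observations.csv": "t,flux\n0,1.0"},
}

node = make_node(publication)
# Changing any leaf changes its ID and the root ID, so versions are
# distinguishable while unchanged components keep the same address.
```

This is what makes fine-grained PIDs and version history fall out naturally: the address of the code directory, say, is stable across paper revisions that don't touch the code.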
The protocol defines an open-state data graph modeling the scientific record, where granular contributions are credited and we can build up a solid record of versions. It should also act as the base for a diverse ecosystem of services around the scientific publishing industry. We call this the DeSci Codex, the collaborative open data exchange. Along with the documentation, we've also made an implementation on Ceramic and ComposeDB, and we want to build this together with the DeSci community and the rest of the Protocol Labs network. So please give us your feedback on the design, and if you want to help us build these things, we're also looking for engineers. Pop into our Discord, which you can find in the links below. Thank you very much. It is truly inspiring to see the progress across the different projects, startups, and movements within the network. Now, let's talk about how we can support them on their mission. My name is DNA, and I am the Chief Operating Officer for PL Infra. Protocol Labs spans the entire R&D pipeline. This means that our teams span a wide range of sizes and maturity, from company formation to Series D stages. Teams also span a wide breadth of technologies: Web3, AI, AR, VR, and brain-computer interfaces. As we scale to include more brilliant talent and frontier technologies in our network, so too must the infrastructure scale to support their work. When Juan started Protocol Labs a decade ago, there was no institution to support the development of IPFS and Filecoin as community-first, network-first, open-source projects. PL grew to fill that gap. Now, PL Infra supports all the builders across the network in an open-source, network-native way. We will cover some of the ways we help builders like you grow your projects and startups. We organize our support into five foundational pillars: projects, teams, capital, talent, and programs.
The main tool we use is the Protocol Labs directory, a hub for network collaboration and communication. At the root of everything we do within Protocol Labs is projects. Lots of teams and people collaborate on open-source projects, and we facilitate that collaboration by tracking the projects themselves, their respective progress, and the teams and people who work on them.

My name is Anuj, and I lead product at Spaceport. We are introducing the projects module within the directory to create a landing page and a collaboration hub. Projects can show KPIs, a readme, a roadmap, and their connections to teams and people. Soon, project pages will also allow users to follow and subscribe to receive timely updates.

The second pillar, teams, highlights all the groups and entities that build upon projects and organize talent and programs across Protocol Labs. The network now comprises over 600 teams, and this image is just a very small sampling of our vibrant ecosystem. The network directory was created as a central launching point for all interactions across the network. The directory allows for communication, collaboration, and engagement between network members, projects, and teams.

I am Ruben, and I lead the Builders Network. For early-stage programs, Protocol Labs runs some of the most expansive and most battle-tested early-stage programming in tech. Our hacker funds alone have attracted and activated more than 60,000 builders, spawning thousands of early-stage projects. To support those builders on their entrepreneurial journey, we also run grant programs and requests for startups. PL recently launched PL Venture Studio, which validates and develops concepts into new startups with strong founding teams, business models, go-to-market strategies, launch products, and early traction. Many stellar startups are developing within PLVS, so stay tuned for their launches in the coming months.
The number one thing that teams need help with within the network is raising capital. The PL network provides many funding pathways to support development and growth. We build and operate a number of funding mechanisms. Green funds are typical venture funds, including accelerators, seed funds, multi-stage funds, growth funds, and more. Blue funds raise and allocate capital for network public goods, typically supporting projects and teams at the earlier stages of the R&D pipeline. The network connects thousands of funders with thousands of builders, with an approach tailored to the funding type, the maturity stage, and the industry vertical. There are also a few PL-affiliated funds: the PL Venture Studio, the PL Accelerator Fund, and PL Ventures. The activity of the Builders Network can roughly be divided into two streams of work: builder growth and capital engagement. A great example of builder growth and capital engagement is the PL Accelerator Fund of Funds. Since 2021, the Accelerator Fund of Funds has backed more than 300 startups. We exclusively partner with the premier outfits in venture. We are looking for the best managers, deep sourcing pools, and true geographic diversification. We hence run programming with a16z Crypto, now out of London; LongHash out of Singapore; GPC out of New York City; CV Labs out of Zug, Switzerland; and SBA out of Palo Alto, our birthplace. And we have fantastic ventures coming out of our programming, some of which you've heard from today. Mona, web3mine, Lit Protocol, and BagelDB all came out of our accelerator programs. Collectively, these startups have raised more than $400 million in venture capital from the premier funds backing their growth. If you are interested in supporting these startups, please do find me in the audience. For capital engagement, FilVC is a great example. FilVC is an invite-only, investor-focused event that we run once a year.
The last FilVC attracted more than 850 investors, managing trillions of dollars in assets under management. I'm proud to announce the next FilVC in early 2024.

Hi, I'm Brad Holden, and I lead PL Ventures. Since its formation in 2017, PL Ventures has invested just over $100 million into 121 amazing teams and projects across three investment focus areas, or horizons. Horizon One is our ecosystem: teams and projects building on protocols launched and developed by Protocol Labs. Horizon Two is new crypto networks or protocols, where our experience gives us unique insight into the problems those teams face as they scale, and into gaps in the network. Teams building in Horizon Three are pushing the boundaries of what's possible with computing, in areas like brain-computer interfaces and artificial intelligence. The PL Ventures team wants to serve as a trusted advisor to founders as they navigate their startup journey, and we have played a small role in helping those teams raise over $3 billion in external capital since 2017. We look forward to continuing to support the innovative teams in our network and funding new teams in 2024.

The fourth pillar, and the second most common request from projects and teams in the PL network, is talent. PL now has roughly 2,000 members across the innovation network, with consistent growth throughout 2023. We have common programs to attract, recruit, and hire candidates into the network teams. The directory allows teams and members in the network to quickly search for and find one another, and to quickly understand what members are working on and contributing to across the network. They can also see which members are open to collaborate, and filter by skills and geographic location. To protect privacy, sensitive information is restricted to logged-in, verified members of the PL network.

Hello, everyone. My name is Ian Brunner, and I'm the CEO and co-founder of IPTS. We are Web3-native recruiting specialists.
At IPTS, we have a team with over 100 years of combined recruiting experience that has helped hire 500-plus people into the Web3 space in the last few years alone. We're experts in navigating the complexity of startup hiring for Web3 companies, and we've already partnered with over 70 companies in the last two years alone. We have a very different approach from other agencies, in that we offer completely tailored support for each company we partner with, to make sure that we can have an impact for our partners quickly, but most importantly, efficiently. If your company is hiring, or you're just looking to upgrade your hiring processes, I'd love to chat.

Hi, everyone. I'm Donald Artie, founder and CEO of PL People Solutions, a Web3 people services provider. PL People Solutions is a strategic people and HR service provider delivering people solutions, tooling, and consultation to early-stage and scaling network companies. At PL People Solutions, our primary focus is ensuring that you engage, grow, and ultimately retain your top talent. Collectively, the team has been doing this work in the large-tech and startup worlds for many years, and we're excited to bring that experience and know-how to the PL network. Simply put, we reduce turnover. We've already successfully completed projects for various teams in the network, including teams like Mona, Clockwork Labs, and Binyin. Here's a snapshot of some of our work. We'd love to hear from you and explore how we can support your company to achieve great things, so please do reach out. Thank you.

Lastly, we offer a number of other programs, tools, and services to help members, teams, and projects thrive within the Protocol Labs network. I'm excited to announce the launch of Protosphere, the Protocol Labs network forum. It aims to provide a high-signal platform for members to announce, discuss, and collaborate through async discussions.
In PLV10, options for breakout groups for specific discussions will also be embedded, along with chat capabilities for synchronous communication. We are eager to see your posts on Protosphere throughout Lab Week 23 and beyond.

Hi, everyone. I'm Andrew, and I lead services for the Protocol Labs network. I'm here today to talk to you about some of the many benefits that are available to teams in the network. These benefits can all be found on the Protocol Labs Network Portal, which hosts quick access to the PL tools, such as the directory, the forum, the events calendar, and more. This hub, plnetwork.io, is the main access point for teams and members to access tools and benefits from the network. On knowledge: one of the most valuable resources of any team is its people and their knowledge. Protocol Labs aggregates people in the innovation network and facilitates knowledge sharing between them. We do this in part through office hours, which are extremely powerful. Network members can book office hours with hundreds of founders, execs, and experts directly from the directory profile pages. Office hours help members support one another. They also create opportunities to learn about new and exciting opportunities within the Protocol Labs network. On marketing: we also want to celebrate and promote the accomplishments of network members, teams, and projects. We do this by highlighting them in videos, research reports, blog posts, and more. We are very excited to amplify your story to our broader audience across YouTube, Twitter, LinkedIn, and many of our other channels. PLAS is a new program that we started this year. We created it to serve network members across a wide range of topics. In the past year, we've supported hundreds of requests, including introductions, fundraising support, marketing requests, and hiring needs.
We continue to improve our SLA for responding to these requests and are very excited to expand this to more and more network members in 2024. Last but not least, what is a network without in-person connection and celebration? The Protocol Labs network hosts hundreds of events each year. These range from big events like Lab Week, IPFS Camp, and Filecoin gatherings, to many smaller social meetups, like the monthly Lab Day gatherings that we have in cities around the world. The network page in the portal shows upcoming events, so that you can see which events are happening, when they're happening, and where you can gather with fellow network members. I very much look forward to meeting many of you here this week and also at events throughout 2024.

Hi, everyone. I'm Cyril from Mosea. I'm very excited to share with you the PLN Service Provider Marketplace, which helps builders from the PLN move faster. In addition to matching you with service providers, we help negotiate deals for the network and ensure service provider availability. Service providers range from smart contract auditors to go-to-market agencies, serving companies of all sizes. Here are just a few of the vendors that we have strong connections with for different services. Some of these companies have completed hundreds of projects with the PL network. Our goal is to prioritize providers who understand the ecosystem and consistently deliver valuable services. So if you'd like to recommend vendors to the network, please leverage the new forum that has just been introduced, and we'll evaluate them so they can grow through projects from the network. Thanks.

Thank you. As the network continues to expand, the PL Infra offerings will grow to address your needs at each stage of your business lifecycle. We're excited to build the future with you. Thank you.

All right. That was an amazing set of projects, startups, teams, and PL Infra that you can all leverage.
I'm here to close us out by talking a bit about PLV-10 next year for the PL network. As we talked about before, PL drives breakthroughs in computing to push humanity forward. We're doing this with a very large-scale R&D pipeline and a very large-scale network of organizations, people, and teams, and we have a lot of programs that help support everybody. PL has been versioning itself for a long time. We decided to change and evolve the org structure through very well-defined versions. We have a version per year; we're currently on PLV-9, nine years after the founding of PL. And every quarter, we release a minor version, so we're currently on PLV-9.4. You can see this kind of version history over time, and you can see it stretch out into the end of the decade. As I reflect back on the last nine-ish years of PL, and get ready for next year, when we're going to be celebrating the 10 years of PL, which is going to be pretty awesome, I find myself reflecting a lot on the many critical strands of thought that ended up combining into building PL. There's a lot of time that I spent in the depths of computer science and internet building blocks, navigating lots of internet RFCs, W3C docs, and so on, to build protocol standards and build the powerful platform that we all share. There was a lot of time that I spent thinking about and learning from lots of labs: how they organized research, how they managed research processes, and how they were able to achieve breakthrough speeds in R&D development across extremely difficult environments. Huge plug for my favorite in history, Bell Labs. Now, I also learned an enormous amount by going into the startup world in Silicon Valley, and that innovation network and environment gave me a ton of building blocks by which to shape PL.
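As an aside, the versioning scheme described above (one major version per year since PL's 2014 founding, one minor version per quarter) is simple enough to sketch as a toy function. This is my reading of the scheme, not an official tool; `PL_FOUNDED` and the quarter mapping are assumptions:

```python
from datetime import date

PL_FOUNDED = 2014  # assumption: PLV-N means N years after the founding

def pl_version(d: date) -> str:
    # One major version per year, one minor version per calendar quarter.
    major = d.year - PL_FOUNDED
    minor = (d.month - 1) // 3 + 1
    return f"PLV-{major}.{minor}"

print(pl_version(date(2023, 11, 13)))  # during Lab Week 23 -> PLV-9.4
```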
At the same time, I was very frustrated, because from 2006 through 2013 or so, it felt like the world had slowed down dramatically, especially when I compared it to the 1950s and 1960s, when the world was changing extremely quickly and new technologies were being thought of, translated, built, and deployed at a much faster pace. That's what led me down the rabbit hole of thinking through the infrastructure of project generation in general: the capital flows, how governments fund R&D, how Silicon Valley itself achieved success in the production and production-scaling part. At the same time, I explored a lot of open source, I participated in lots of different projects, and I learned from a ton of systems. A lot of that work ended up coalescing into IPFS and Filecoin, and that's what gave PL its beginning. But the longer story of figuring out how to organize the large-scale R&D pipeline is really what PL is about. I can plot the first five versions of PL on this graph; I won't go through the whole history, as I'm saving it for next year, same with the last five years. But one really key thing started happening in the last three years, when we started the transition towards the network. So let's talk about that for a moment. As I said earlier, we've shifted from a single company into an innovation network. We started this in 2021, we grew it in 2022, and now, in 2023, we're almost completing that transition. We now have a very large-scale network of participants: lots of organizations building lots of projects across a whole bunch of shared missions. We have a ton of shared services, and we're building large-scale coordination systems for figuring out how to solve problems together, how to leverage our knowledge and our resources, and how to route various kinds of resources, like capital, talent, and our energy, into our shared missions.
So I look forward, in PLV 10, to completing that network transition and finalizing that shift. It's been a huge undertaking, with tons of people working super hard for the last three years, and that's going to be a super exciting moment. So, a quick look back at PLV 9. This was the diagram I showed last year, with the broader network, the capital network, the builders network, the talent network, and network services. We grew the network to hundreds of teams spanning the whole network, spanning the whole R&D pipeline. We've worked together on tons of project areas. We, as a group, have defined the industry in many subfields, all of which are exploding, which is really, really cool. And now that brings me to look forward to next year. This is a similar diagram, a little bit reshaped. We'll bring in the PL Infra pillars as a key organizing model for how to think about all of the different programs and services that connect to build the network. We've introduced PLVS, which is a key component to help start and boot new businesses. Right now we're very focused on the first batch there, but as next year goes on, we hope to open up PLVS as a system to support a lot of the other networks and systems in the ecosystem, to help build and create new ventures together. And as we look ahead to PL's future, if I think ahead to the next 10 years and to where we want to be, we want to have lots of blue-to-green teams. This means thousands, even tens of thousands, of teams doing long-term R&D across a huge swath of directions. We want to have lots of service providers supporting those teams: hundreds to thousands of teams supporting all of these other teams doing this R&D. We want lots of green funds to help support and route capital and resources to those teams, on the order of tens of thousands of angels or hundreds of large-scale VC funds. And we want lots of blue funds to emerge. This is an area that is vastly under-built.
The large-scale venture capital world is huge and works very well. We need to bring that level of success to impact funding, or network funding as we want to build it. It'd be great to get to a mode where there are similar kinds of structures: tens of thousands of angels doing impact funding, and hundreds of VC-sized impact funds in R&D. Now, like we heard earlier, there is a massive amount of public funding happening in the mainstream world already, with a huge amount of support from lots of organizations and people. We need to connect that type of funding to the very fast R&D that we're doing in the PL network. There's a huge swath of supporters for all of the missions that we have; it's just that, because our tech has come from the startup world, it tends to be incompatible with the structures and systems that those groups think about. So that's a major focus for us going forward: how do we bridge that divide? Now, we also want to tap into and interconnect with lots of other networks. We want to have thousands of universities, nations, crypto networks, other innovation networks, NGOs, corporations, cities, and communities all collaborating, using the tech and systems that we build, and helping us go faster. And in order to achieve a lot of this, we need better alignment structures. We need to improve incentives for collaboration, for individuals, teams, and organizations, so that we can all create more value together and enhance the network. So let's bring it back down to next year. That's the next 10-year outlook; what do we do next year to get us closer? One piece is to develop a distributed and interconnected system of green and blue funds that can orient and align capital flows to support all of these projects. We're thinking of that as PL Capital. We want to lean into the forums to collaborate, organize, and discuss improvements across the network.
We need some kind of venue to develop PL the way that we develop our projects. We need some kind of improvement proposal structure, and we need to create some conference to organize, discuss, and refine our system. So that's the open-source building of the network itself. We need to create a formal membership structure, where we can define the various types of membership for people and organizations and enable a broader collaboration structure. We have a lot of organizational debt coming from our history of starting off as a single company. We also want to build a PL social hub. We've learned an enormous amount about connecting with other people through networks like universities and schools and various social groups. Recently, I've been super thrilled, and have had a ton of fun, participating in what I think are really cool social networks, like Zuzalu, the pop-up city that we talked about. I'm pretty excited about Vitalia. I think these kinds of systems enable us to connect and align in a totally different way. An interesting story here: when I started PL in 2014, I used to run a group house with a lot of friends, and in 2013, we hosted lots of people who are now shaping a lot of the tech of today. That house was a great forge for a lot of the ways that I think, and a lot of the ways that those people think. We shared lots of ideas. Some of the people who hung out in that house included Vitalik; Max Hodak, who is now building Science, the BCI company; Laura Deming, who has been a key mover in the longevity field; and Dylan Field, who built Figma. Those kinds of environments, those kinds of residential settings, enable a type of social learning structure that is surprisingly good and successful at helping people grow. So we want to experiment with that kind of thing in an online setting.
Can we create a social hub structure that can give us something like that? There will be a bunch of experiments in that direction next year. We also want to formalize the PL Infra group that we just heard about. We want to help a lot of teams coordinate, and we want to do that in an open way. So instead of thinking of it as a single team, think of it as a structure that your team can plug into. If you're bringing something to the network and you're providing public goods to the network, leverage that structure. And speaking about Infra directly, we want to create a blue fund to route public-goods capital to groups supporting and improving the network itself. So if you have ideas for how to improve some of the pillars that we've talked about today, you'll have a venue for getting resources and funding to help improve the network in some way. More about that sometime next year. We want to level up a lot of the systems that we've built together, the ones that Spaceport runs. We want to lean into the directory and use software to help us coordinate. We want to level up the talent network and the builders network and flesh those out. So now I want to do a quick look forward at the projects that a lot of us are working on, and I want to use a frame of mind around aligning our goal sets. For example, if you think about the set of projects pictured here, you can think of how they overlap. You can think of a whole set of goals that they have and share. You can think of areas where these projects collaborate and work together. And you can plot their goals as vectors, and think of the alignment as some kind of combination of those vectors. Now, lots of teams inhabit those ecosystems and overlap, so you can think of lots of the green and blue and yellow teams inhabiting that set of overlapping ecosystems.
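To make the "goals as vectors" metaphor concrete, here is a toy sketch (illustrative numbers only, no real project data): score each project's emphasis along a few shared goal axes, take the combined direction as the sum of the vectors, and measure each project's alignment with it as cosine similarity:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def alignment(a, b):
    # Cosine similarity: 1.0 means fully aligned goals, 0.0 means orthogonal.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# Made-up emphasis scores per shared goal axis for three hypothetical projects.
goals = {
    "storage":    [1.0, 0.8, 0.1],
    "compute":    [0.6, 0.9, 1.0],
    "networking": [0.9, 1.0, 0.3],
}

# The "combination of vectors": the group's shared direction.
shared = [sum(axis) for axis in zip(*goals.values())]
for name, g in goals.items():
    print(f"{name}: alignment with shared direction = {alignment(g, shared):.2f}")
```

The point of the sketch is just the framing: overlap between projects shows up as high pairwise similarity, and gaps show up as axes where no project's vector points.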
And then you can think of all the building blocks those systems depend on: all kinds of other systems and structures that benefit those systems. Our teams overlap across all of these groups, and we need better and better ways of plotting out our shared goals and coordinating between them. And of course, this keeps going. There are tons of projects and systems that we are all building, and that includes organizations who are helping align a lot of this work. So a lot of our teams are working directly on the organizations that help produce this alignment. One of the things that we need is to shift our mindset away from narrowly-bounded projects or ecosystems, try to see that layering and that interconnectivity, and ideally have better alignment and coordination structures baked into the network: systems that enable us to align our shared missions and visions, things that allow us to track and integrate shared roadmaps, ways of leveraging network-native service providers, and ways of aligning incentives across mission-aligned groups. And we need to create funding pathways that connect to those shared goals. Right now, a lot of funding pathways tend to follow team boundaries in a particular ecosystem, so they end up creating gaps across all of these areas. I constantly hear from lots of groups that would love to fund some specific thing, but nobody in their ecosystem works on that, and they sometimes don't have a good facility to cross borders into other ecosystems. And one of the things that I'm personally extremely excited about is finding good ways to enable all of the people across the network to work together and learn from each other in that amazing, almost magical, university-style or open-source-style setting. All of us who have worked in open source or learned through universities know how amazingly productive those kinds of fluid learning environments are.
I want to create that kind of structure across our projects. So now, some goals for next year for these projects. Again, this is one team's view; actually, it's one person's view into a set of goals. We heard a lot about Filecoin today. I think the key steps around onboarding and safeguarding humanity's data are the main focus for the network, and now we're starting to bring compute to the data and enabling web-scale applications. So building L2s and compute-over-data networks is a key priority. We need to close the gaps in getting to robust retrievals; that's a major priority for the ecosystem on the way to a robust platform. There's a large set of priorities here that I think are super key: reducing the network-wide OPEX; making retrieval reliable; making storage simpler, faster, and more reliable; developing decentralized storage on-ramps, so things like web3.storage, but decentralized; improving on-ramps for medium and large-scale data; getting to onboard a lot of paying users to the Filecoin network; and building high-value applications across Filecoin. This means infrastructure applications like compute-over-data networks, CDNs, databases, object storage, and data generators (things like sensor networks such as WeatherXM, or imaging networks such as Spexi), and, later on, applications like data science platforms, games, and social networks. I'm super excited about those kinds of things. I think the layering and the infrastructure in Web3 are not quite ready for a lot of these yet; we're starting to see the first good experiments, and maybe those will start taking off. But I would expect that maybe 2025 is the moment when suddenly a lot of these pieces snap into place and you can have the first social networks hitting billions of users in Web3. It might be 2026; it's unlikely to be 2024.
Now, one of the things that a lot of us are working on is getting to these high-value, compute-heavy networks: compute-over-data networks, built as L2s on top of Filecoin. And a key thing to allow this multi-project, multi-chain, multi-network world to work is that we need extremely good interop and bridging between all of these systems, and we need extremely good developer experience. Those areas are going to be a huge focus for a lot of us. Now, in the IPFS project, we saw these amazing network stats. Key priorities that some of us are working on are establishing the Interplanetary Foundation, establishing the IPFS Core Fund, fundraising for a lot of these systems, growing the core developer community, and growing adoption. You'll hear a lot more about all of this at IPFS Connect. For libp2p, which is a huge building block for lots of groups, it's a similar kind of story: establish the libp2p Foundation, help grow the project and set it on a much more structured footing, establish a libp2p Core Fund to help develop the ecosystem, fundraise for a lot of these systems, and then grow the core developer community for these projects. And then, of course, if we do all that well, we can grow adoption. If you want to hear about all of these plans, come to libp2p Day. And now I wanted to take a little bit of time to reflect on Web3 itself. A lot of us have been working in this field for a long time; I've been pushing on this for the last 10 years, which is a long, long time. We all started out aiming to build this much better infrastructure by bringing trust to the network and making it verifiable. We've made tremendous progress on a lot of pieces. We have built up an entire financial infrastructure just out of computer code, the Internet, and economic and social structures, which is pretty amazing. But there's so much more to do. Our goals as a movement are pretty large. We want to get to embedding those digital rights in the network. There's an enormous amount to do.
And what we have to figure out is how to scale the systems to get to those layers. We're not going to get there unless we solve some hard problems building these kinds of concrete systems. From my point of view, the biggest blocker here is scalability. Web3 systems scale to some degree, but they're very far from proper web scale, and we haven't yet crossed the adoption chasm. So, from my perspective, these are the key movement goals across all of Web3. First, complete the platform. This means getting scalability to reach cloud and web levels, and building fantastic developer experience that reaches mainstream developer-experience quality. We are not going to get to success if people still have to learn for years before they can build a very basic app. You should be able to build a classic to-dos app or a social network in a day, not weeks. Then, in order to get there, we need to coordinate. There are a lot of shared projects and a lot of wasted work along the way. If we align on shared goals, build roadmaps together, and de-tribalize, we're going to be able to do this a lot faster. There's a lot of negative tribalism in Web3. I think it's not the worst it's been; it's definitely been worse. But it's holding back lots of groups and lots of efforts, so if we can fix that, I think we'll be on a much better trajectory. And if we do that, we can then use our structures to upgrade economies and upgrade governance. There are a lot of good experiments happening, but we need to show that they work: we need to demonstrate, measure, and prove that these things work well, and work well at scale. If we can do all of these things, we're going to be much closer to being able to deliver on the promise of Web3 and really scale to billions. As a whole ecosystem, we're somewhere between 10 and 100 million users at the moment, so we're pretty close to billions.
We're not that far away, though most of those people have had very little interaction with Web3. So first we need to close that gap and really reach billions, but beyond that, we need to enrich all of the normal applications that they use day-to-day with this kind of verifiability. So we need to realize the vision. One message for all of you: computing transitions take a long time. The cloud transition took about 30 years. The hypertext transition took 40 to 50, though that was a much earlier time. The Web2 world, with social networks and so on, started in the early 90s; you can even argue it was the late 80s. So these things can take time. It goes slowly for a while, and then at some point you get enough of the infrastructure in place that you hit an inflection point, you scale really fast, and then the transition feels really, really quick. There's kind of a pre-iPhone and a post-iPhone moment, but a ton of devices and a ton of R&D work went into building the components that were eventually pulled together into something like the iPhone, to then create that transition moment. And so we've been going in that direction, along that exponential curve, but there's a lot of important work ahead to complete it. Cool. So with that, let's talk a little bit about Lab Week, and then we'll be done. Lab Week is an old tradition that started a long time ago. We've had a lot of Lab Weeks around the world. An enormous amount of fun has been had, and an enormous number of great relationships have been built. So I hope that all of you participating in this week, whether here with us or remotely, get to do that. Last year, we had Lab Week 22 in Lisbon, and that was an amazing event and experience. Now I want to talk a little bit about Lab Week 23. First off, a huge thank you to all our collaborators and sponsors who enabled all of these events to happen.
There's a huge number of events throughout the rest of the week, and I hope those enable you to collaborate and coordinate as groups really well. We hosted Lab Week 23 in connection with DevConnect this year, and this is a map of roughly where the various venues are. There are a lot of people in these communities super interested in the future of computing, so this Lab Week is oriented towards lots of these subsets and sub-buckets, with lots of events dedicated to each one. We're here at PL Summit today, and PL Summit is almost done. A plug for Founders Day: it was a very useful thing for a lot of the founders in the network. And I want to refer you to the schedule on the site. You can see tons of events. Plan your week well. Of course, you can't go to everything; you have to choose. That's okay, that's life at these kinds of events. Just find the things that you really care about and want to spend time on, and go to those. And remember, all of this, or most of it, is recorded, so you can catch up with it later. Of course, being in person enables lots of relationship-building moments that are hard to have online. One plug, though: there are a lot of people following Lab Week remotely, so we have some virtual programming for folks. Huge kudos to Mona, who built out the whole remote track. There's going to be a virtual opening party, the Monaverse tour, fireside chats, and panels, so check those out. And yeah, there are really awesome rooms, and we'll be using some of the rooms that were built out over the last few years. I also want to plug some Telegram groups. There's Lab Week 23: there's a QR code there that you can grab to join the Lab Week 23 Telegram group, so use it. And there's a PL social hub for folks in the network. This is one of our experiments in creating that social connectivity, so it's meant for folks who have been in the network for a while. 
Now I want to end with a huge thank you to all of the people who came together to pull this off. This is a gargantuan effort; it takes a whole network to put on a Lab Week. So I want a huge round of applause for all of the groups that supported us. We're going to clap twice: one huge round of applause for all these groups. First, thank you to the people of the city of Istanbul for hosting us. It's great to be here with you. I've already gotten to meet a lot of folks from communities here, and it's been really exciting to hear about the things they care about and want to learn, and so on. A huge thank you to all of the Lab Week 23 organizers and staff; it's a huge undertaking to put this on, so really, really, thank you. A thank you to the organizers and staff of all the sub-events. Again, a huge thanks to all of the sponsors, collaborators, and contributors, and really, thank you to all of you participants. And if you're watching this in the future, you also get to benefit from Lab Week 23, so even though we're not synced in time, I hope you're having a great moment wherever and whenever you are. With that, we're concluding the PL Summit. What we're going to do next is invite all of the speakers up and take Q&A. We'll do the same open Q&A format that we did last year. All the speakers: if you don't want to participate, that's fine, but we're going to invite everybody up. We'll have four mics, two up here and two down there. We'll pass them around and take questions. And after PL Summit ends, a huge plug for the opening party. It was a massive hit last year, so I think you don't want to miss it. Cool. We'll take a one-minute break and then start Q&A. Thank you very much. Has anyone... microphone... awesome. 
Has anyone thought of any questions for any of these phenomenal speakers, about the impact they've created, the infrastructure they've built, or the other fantastic things they're doing? We'll send a mic to you if you raise your hand. Is my mic on? Can I get my mic? There we go. And also, for speakers up here: you can ask questions too. Just because you spoke doesn't mean you don't get to ask questions. We're going to need someone to track hands. There's one. All right. I have a question, and I'd like to ask this of everyone. I'm Danny O'Brien, from the foundation. I'm really excited by all of these social systems connecting everybody together. I'm really excited about many of the tools and stacks people are building, from libp2p to, I'm going to mispronounce this, homomorphic encryption, which I think was the most exciting thing I saw. I'm really excited that people are building on things like Fission, UCAN, and IPVM. The question I have is: this is an amazing stack of all of these cool toys, but how are we going to coordinate to use them? Which bit of what is being built here is going to let us connect all of those together, to make tools that grow on this stack? And who should I talk to about helping make that interoperability work? Who wants to take that? Who's going to volunteer? I have some answer, but I would love to hear other people's. You're taking it? And then I'll take it after. So I think the challenge of protocol-first building is that there's a tendency to smooth the edges of just the protocols and not pop back up to the application layer. We've seen this happen in the Ethereum network and the EVM, and in some ways it's getting worse, where core devs go down to a certain layer. I do a lot of work with accounts and wallets, and that wallet layer has ended up being the one that has to understand both the lower-level stuff and user and developer needs. 
So I think what we have to do is start getting opinionated about having stacks that work together, and have everything from a continuum of builder teams to protocol teams working on getting feedback up and down. That's more of a prescription; I'd say the call for now is getting really critical about building tiny little loops. And my bet for that is accounts. We need to have systems where everybody who launches something can lean into the user base of everyone across the network, and that is the starting point. Can I get a login for the PL network that I can build things on? People log into the PL network, and then they get a little UCAN thing that I can connect to other things. That's a cool idea. You should talk to Anuj about that for the directory. Excellent, thank you. I was going to give a slightly different answer, which is a plug for things that I'd love to talk to people about. Colin and I have been running the ecosystem working group and the Andres working group in the PL network for the past year and a half or so, and we're looking to potentially evolve them next year into better cross-coordination working groups. A thing I've been thinking about in that space is having different sub-working groups focused on cross-cutting domains, where teams can get together and say: cool, we're the storage and retrieval working group, working across IPFS and Filecoin and storage platforms and Lighthouse and all of the FVM builders trying to build applications on top of storage, plus Saturn and retrieval incentives. That can be a cross-coordination space for many of those different teams to talk about what needs to happen collectively so that we're powering applications even more efficiently. So that's one idea. None of that replaces lots of peer-to-peer work creating solid interfaces, or finding out about and incorporating UCANs directly into your product or protocol. 
But it might be an augmentation space for us to identify gaps and coordinate to patch them. I think it's a mix of things, from the perspective of the builder. We have a number of approaches that we're following; there's kind of a top-down one and a bottom-up one. We have been very much leaning into the bottom-up one, where we basically run these green-field hackathons and say: look, here's the technology, play around with it. And that has spawned, to Marshall's point, a Cambrian explosion of new protocols and builds that we have seen, et cetera. Where we could lean in more, and for a while this has been done internally to a large extent, but I think we can and have to lean more on the community to do so, is going out there and shaping really refined requests for startups: things that we, as we coordinate, see need to be built. We scope these things out a little bit and put them to the community and say, hey, this is something that would help the entire PL network. PL infra has a perspective here; as Molly says, the people working on IPFS and the people working on Filecoin come together and put out these, I wouldn't really call them top-down, but these more scoped efforts that builders can then execute against. One more thing I'd add is tightening the feedback loop between application development and protocol design. I have a lot of admiration for what the Bluesky team and the AT Protocol are doing, where they designed the protocol, built the pilot, got a bunch of users, almost dogfooded their own network, and then had to refine things, and are still doing so in real time, in terms of weeks and months rather than years. I think that running a huge decentralized CDN has made IPFS much better than it would have been otherwise. 
And so the more we can do these tighter feedback loops between application development and protocol design, and I think that means each side needs to spend more time on the other side of the wall, the better the protocols and the better the technology we'll end up with overall. Yeah, I just want to add to that. I think many of these technologies and components are going to market individually, trying to make the case for why they need to exist and why developers should adopt them, in a very scoped way. What's missing is something from the early Web2 development days. If people were here in the Java days, the Spring days, there was the Spring framework, right, which unified Hibernate behind the scenes, and data access, and databases, and caches. It was this one framework that delivered a unified model to the developer, for them to wire things together and very quickly make them come to life, such that developers didn't even need to know the technology running under the hood. They just needed to know: well, maybe I want to perform compute on things that are encrypted. First, I'm going to need to think about how I encrypt this. Maybe that's simply an annotation on code, and that piece of code automatically deals with encrypted data, and now any compute or function that uses those inputs or outputs is automatically running with fully homomorphic encryption. Things like that, and the scalability part as well. We do have common building blocks that many of these stacks are using, like CIDs and content addressing, that can serve as that conductor. I do think we have the thin waist that can make this unification happen. Somebody just needs to go and do it. 
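That "thin waist" point is worth making concrete: content addressing means a piece of data is named by a hash of the data itself, so independently built stacks agree on names without coordinating. Here is a minimal sketch of the idea, using only the Python standard library; `toy_cid` is a deliberately simplified, made-up stand-in, not the real multiformats/CIDv1 encoding (which adds multibase, version, codec, and multihash prefixes):

```python
import hashlib
import base64

def toy_cid(content: bytes) -> str:
    """Derive a toy content identifier: a hash of the bytes themselves.

    NOTE: a simplified illustration of the idea behind CIDs, not the
    actual CIDv1 wire format.
    """
    digest = hashlib.sha256(content).digest()
    # Lower-case, unpadded base32, roughly in the spirit of CIDv1 text form.
    return "toy-" + base64.b32encode(digest).decode("ascii").lower().rstrip("=")

# The key property: the identifier depends only on the content, so any
# two systems naming the same bytes agree on the name.
a = toy_cid(b"hello web3")
b = toy_cid(b"hello web3")
c = toy_cid(b"hello web2")
assert a == b  # same content, same identifier, whoever computes it
assert a != c  # different content, different identifier
```

Because the name is derived from the data rather than from a location or an owner, any component in any stack can verify that the bytes it received match the identifier it asked for, which is what lets these pieces interoperate without a central registry.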
Honestly, there needs to be somebody who is paying attention to all the innovation and also honoring everything that's coming out of research. To Boris's point as well, I think it's not just about things that have reached a productized stage; this could also be a really good vessel to give developers access to early tech, by making it not about the tech itself but about what qualities and properties it delivers to an application stack, so that developers reason about the attributes and properties, and the technology just gets wired under the hood to deliver those. Alright, some more questions. Hello, I have a question for Juan. Protocol Labs has been very successful over the past crypto years. My question is about the protocol in the AI era. You know, remember the article from Vitalik: the automation is at the center and the human is at the edge. Can you repeat that? The human is at the edge. So my question is, how do you think about the protocol in the era of AI, for autonomous agents, for example? Thank you. Yeah, it's a great question, and I've been thinking about this for quite a while. Let me know if I got this right. So, the human layer: you're describing that when we develop protocols and systems and so on, the humans building all of these things are a huge component in realizing them. How is this going to evolve once we have better and better AI models that are much more capable of doing lots of the work involved in this long-term R&D? Is that kind of your question? Great. So I've thought a lot about this for a long time. The biggest factor here is that we live in a very volatile time where the timelines are compressing really fast. I remember when I used to show this diagram with AI and so on, even last year, and certainly three or five years ago, I didn't quite have that slide, but I had other ways of showing it. 
The response used to be incredulousness: yeah, sure, AGI, never in a million years, or AGI, sure, maybe a hundred years from now. I remember the chair of the AI department at Stanford when I was there saying, yeah, AGI is never going to happen, AI is in a huge winter, and so on. What we've seen over the last 12 years is this incredible run of breakthroughs that keeps accelerating, and the stuff people are playing with right now, this year already, has an enormous range of applications. A lot of teams aren't even focused on productizing those, because they're building the next generation, and the next generation, and the next generation after that. A lot of the groups that try to productize the initial stuff are going to be passed by later generations. We have some of the people in the best labs right now talking about getting AGI within two to five years. That's an extreme compression of timelines. We were at like 15-year predictions last year, and two or three years ago we were at like 15-to-30-year predictions, where the bulk of the thinking was. Now, this could turn out to be very optimistic; there's a lot that might extend the timeline again, and so on. But what this means is that as we hit these discrete levels of improvement, at each one you'll get capability sets similar to what any human can do in some domain, and that set of domains is going to keep expanding. So there's already a lot of what we do day to day that could totally be replaced by GPTs, like, today. It's just that those GPTs aren't well integrated with the rest of the systems, so it's hard to use them. There's a really cool paper that appeared recently, I think it was called DevGP or GameDevGPT or something like that. There was a prior paper that combined a set of GPTs and enabled them to play an RPG. 
And they developed social structures, they hung out, they worked together, and so on. It's a fascinating paper. The follow-up took those GPTs and gave them roles in a game developer studio: a product manager role, a designer, an engineer, a QA tester, and so on. And that was able to clean up a lot of the problems with GPTs, which is that they hallucinate a lot and generate all kinds of things. This little game of GPTs working with each other to produce software made a game, full end to end. Then they started getting customer feedback asking for different features and so on, and they improved the game, and improved it further. And this is just GPTs working with each other, and the whole thing took like seven minutes and cost a dollar in compute. So that gives you a perspective on what's about to happen to the entire world. And that means, of course, that everything we do day to day is going to change radically as these new models and systems advance. It's time that we bring a lot of the AI focus into the PL network. I tried hard on this two or three years ago, and the network wasn't ready. We tried again last year, and it also wasn't quite ready. Now a lot of us are already experimenting with all of this day to day, because of GPT-4. GPT-4 and ChatGPT gave us the tools to really use this stuff. I expect that AI is going to be a much larger part of the network in the coming year or two, and I expect that a huge fraction of our work is going to start getting accelerated by GPTs. Really use these as leverage points where you can level up what you do day to day. Stay abreast of the developments that are happening, and don't get left behind, because you'll have people who are able to do 10,000 times more than others by leveraging these tools. 
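The role-per-agent loop described above can be sketched abstractly. This is a hypothetical illustration of the pattern, not the paper's actual code: `run_studio` and `stub_model` are made-up names, and in a real system the `model` callable would wrap an actual LLM API call.

```python
from typing import Callable

# (role_prompt, message) -> reply; a stand-in for a language-model call.
Model = Callable[[str, str], str]

def run_studio(model: Model, spec: str, rounds: int = 3) -> str:
    """Pass an artifact through fixed roles for several rounds, mimicking
    the GPT game-studio setup: each role critiques or revises the artifact,
    which helps catch the hallucinations any single agent would produce."""
    roles = [
        ("product manager", "Refine the requirements:"),
        ("engineer", "Implement or revise the code for:"),
        ("QA tester", "Find problems and suggest fixes for:"),
    ]
    artifact = spec
    for _ in range(rounds):
        for role, instruction in roles:
            # Each role sees the current artifact and returns a revision.
            artifact = model(f"You are the {role}.", f"{instruction}\n{artifact}")
    return artifact

# A trivial stub model, so the control flow can be exercised without an API:
def stub_model(role_prompt: str, message: str) -> str:
    return message + " [revised]"

result = run_studio(stub_model, "a small 2D puzzle game", rounds=3)
assert result.count("[revised]") == 9  # 3 rounds x 3 roles
```

The interesting design choice is that the roles only communicate through the shared artifact, so each agent's output is checked by the next role rather than trusted directly.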
One other side note: the area where we've had tremendous impact, and here I want to give huge credit and kudos to David and Evan and a few other people in the network, is that we managed to incubate and support in our network some of the most important work in AI safety, which is something we should all be super proud of. That's going to become a bigger and bigger part of the conversation, and we're in an interesting position there: we know the benefits of open source, we know deeply how valuable open source is, and at the same time we know the concerns around AI risk. So we as a group are going to be very well poised to comment on how humanity should steer these technologies to make sure they don't get captured by centralized parties or a few governments, and that they really broadly benefit everybody, while at the same time avoiding massive-scale risks of extinction. That's where the problem space is, and I look forward to working on it all together over the next few years. We can do it; we'll figure it out. Any other questions? Thank you. Well, can you hear me alright? Let me introduce myself: I'm an AGI researcher, founder of Celestial Intellect Cybernetics; we're automating data science with human-level AI. I was very impressed by your presentation. I'm a big fan of Filecoin, really, and the zero-knowledge proof method; it's amazing what you're building here. I think the idea of a new YC-like community to accelerate Web3 is a fantastic idea, and I wasn't aware of a lot of things here; it's very impressive, and I love the vision for Filecoin. So my question is about this Filecoin vision of building new killer apps on this new Filecoin platform: do you have a specific accelerator for funding research-oriented teams, deep-tech teams, that are building that kind of application? That's my question. Okay, so the acoustics weren't great, but I think the question was: are there specific accelerators that help companies that come 
out of research and want to develop something, to accelerate into market? Is that the question? Something Filecoin-specific, for the Filecoin data science platform you mentioned. The answer is yes, and there are several. A number of the teams and partners I introduced earlier as part of this accelerator fund of funds, if you remember that part of my presentation, are running programs right now. There's one called the FVM Genesis Accelerator, running out of Singapore, that just closed applications, but it's kicking off in a number of weeks. If you want, I can put you in contact with that team; it might be too late to join the current cohort. Imagine those as programs where very technical people come in and are shaped into founders: they learn how to fundraise, how to create a legal entity, how to build a team, how to hire and fire, how to execute on a product vision, et cetera. So we do have these programs, dedicated partly to Filecoin, partly to IPFS, and others that are more neutral. Thank you. Can I request that if you have a question, you stand up and form a little line here? That way, when you speak, we can hear you better, and at least some of us can repeat the question. In addition to blockchain research, we also have a lot of programs working with universities. For example, Starling Lab, which is out of Stanford University, has an accelerator that is specifically looking at human rights use cases. So there are a ton of examples, depending on what you define as research, where we are working with different teams to make sure they can graduate with things that are sustainable and that they can continue to run. Out of USC, we're working to help them turn their library archive into something that universities all around the world can use as a means to store important academic research. So again, I just wanted to paint different examples to 
your question. To paint a third example: the PL Venture Studio. Many, not 100%, but many of the current companies and projects in the PL Venture Studio are specifically Filecoin-focused and are building on top of Filecoin decentralized storage, making it much easier for other application builders to use, and adding critical new capabilities to the Filecoin network and ecosystem: around retrievability, around these awesome Station Lambda functions, around compute, things along those lines. So PL Venture Studio is also a program that helps span research through to deployment. Hi, Patrick Woodhead. Looking at the diagram that's actually on your t-shirt, Juan, we see that Y Combinator only covers a small part of that whole pipeline from research to market, and I was wondering: is it a novel idea to have a company spread so thin across the whole pipeline? Have other companies done that before, and why has PL identified that as a good way forward? Are there any risks in spreading oneself so thin across the whole thing? It's a great question. All of the groups that have inhabited the pipeline don't do it in a super well-defined order; think of it as a probability distribution, a probability mass that is spread out. I think YC is definitely differentiated around the moment when people have their early ideas or an early product, or they have product-market fit and are now scaling, and they use YC to accelerate a specific part of their journey. The program itself, the YC program, which is one part of the network, is very focused in that area. Now, of course, the broader YC community extends very deeply, and a lot of activity happens that is not a structured program. For example, the YC alum network, which a lot of us here are part of, helps a lot of the community further and further out. There's a great pay-it-forward culture in the YC network that has enabled a lot of the community to 
coordinate and help each other and so on. There are some groups that have formed to support companies later on; there has not been as much earlier. There was, of course, all the Startup School type learning material, which was extremely useful in helping people start companies and targeted the earlier parts, and I think there was some amount of work, I don't know how successful, on supporting nonprofits as well. I don't know how much of this has grown into very successful programs, but a lot happened at the fringes. However, the network is definitely oriented towards one phase, and that probably comes from how the rest of YC works, and just how entrenched the perspective around startups and equity and so on is, relative to research projects. The chasm that we have identified, that's kind of, in Peter Thiel's terms, the secret: we understand that there's a huge chasm in between, that you can build systems and programs to bridge it, and that it doesn't have to be there; it's just an accident of history that we ended up here. I don't think many other groups know that, so that's a huge problem that we have to solve, and an advantage in the meantime. But fundamentally, the mission is to cross the chasm, so we should endeavor to enable lots of other groups to support this as well. The other thing I'll mention is that maybe Silicon Valley is the network that comes to mind that most effectively covered the entire pipeline, and that was not an innovation network that was directed for very long. It was initially directed, in a sense, by Fred Terman, a provost at Stanford who had worked in the war effort at the Rad Lab in World War II. He brought a lot of the professors and people who worked with the Rad Lab into Stanford, and they developed the first iteration of 
Silicon Valley, where they did a lot of contract work for governments. They pulled a huge amount of resources into Silicon Valley to develop a lot of technology and do a lot of translation from the labs into companies, supported by the university programs and by the companies around there. That led to one or two generations of companies operating that way, and for a while Stanford played this really special role stewarding that coordination. But over time that shifted out and became a much more emergent phenomenon, totally undirected, happening because of the critical mass of people there, the success, and the ways of thinking. This was a period before the internet, when that kind of thinking was very rare and you kind of had to go to Silicon Valley to really experience it and get it, and that's what gave birth to the personal computer and hypertext systems and tons of amazing work. Now, of course, that connected with all of the R&D happening in universities around the world, and the universities played this other crucial role in surfacing lots of innovations. But in terms of the translation part, Silicon Valley had this next-level ability compared to other groups, much stronger even than, and eventually superseding, Bell Labs, for a bunch of reasons. So that's, I think, the other example of doing it end to end, an emergent example. Bell Labs is maybe another scoped example in a closed environment, because there was a lot of open-ended innovation in the early part. Bell Labs tended to do research in the early phases in many, many directions, because a lot of them could help the phone network. But then, because of government requirements, it had to super-focus and have just one business, which was the phone business. So Bell Labs shelved computing: they had, in the 1930s, one of the first digital computer papers, and they were like, well, we 
can't really monetize it, so they shelved it and never looked back, and it took another ten years for another group to figure it out. That was an artifact of the structure of the organization. So, to your question about whether we're spreading too thin: I think this is where you have to think of it as vertically integrating the entire thing, building programs and systems along the way, but building those in an open, permissionless environment. Think of the vertical integration that Apple or Tesla or SpaceX would do, but done in an open-source-oriented way with crypto-native systems. That could be extremely, extremely powerful. By creating an environment that people can gravitate to for the things they want to accelerate, and just supporting them, this could grow super fast. Hey guys, Andreas here from Legacy. I would like to come back to the AI question. Since last year, the big thing that happened in the world was generative AI; it took the world by storm. So I was wondering, what are the opportunities for decentralized storage? The training data itself is probably the biggest amount of data out there, and it's not just the pure amount of data, it's also a governance question: who owns the data, culturally as well as from an intellectual property and rights perspective? Is that an opportunity you have looked at, and if so, what are you going to do about it? Yeah, a number of those training data sets are already stored on Filecoin. For example, there's a very, very popular biomedical training data set that used to be hosted by Facebook and that a ton of researchers were utilizing. When Facebook laid off the team that was working on it, the data set was at risk, and it got stored through the DeStor team on Filecoin. Now researchers are able to fetch that data from Filecoin, and in the future they'll also be able to run AI compute jobs on top of 
decentralized compute networks, so that instead of having to send all of your data to Amazon and run all of your jobs on a centralized cloud, you can harness decentralized compute resources, many of which are already doing proofs and other activities in the Filecoin storage market, because that involved amassing edge hardware, which contributes edge storage but also edge compute resources. So yes, Filecoin is actually a great platform for decentralized storage connected to generative AI or generative compute modules, and I think that absolutely is going to exist. I loved the work that WaterLily did, which launched right around the same time as the FVM, one of the earliest smart contracts there: taking a collection of data and recognizing who had originally created the training data set. So you could have the actual creators, or you could have curators or curator programs, something like a DAO or a community that collected the training data set together, and whenever they ran compute jobs over that data set, they rewarded the original creators of the data, in addition to rewarding the folks who ran the compute on Bacalhau directly. I think that model could lead to a much more economically aligned and regenerative system for creating AI, one that actually rewards all of humanity's data that it's powering itself off of, instead of extracting that value and rewarding just a very small company or creator while not recognizing all the public goods it's built on. So I think there's a lot of opportunity. Totally agree with Molly, and I think the storage providers in the Filecoin network are really well set up and already have a lot of the hardware to help with that. One other property to think about is verifiability within these data sets. All these AI models run on a specific data set; they'll create and interpret specific things and come up with outcomes, but it's really hard to know 
which pieces of actual data those outcomes are based on. I'll give one example: Starling has developed a pretty cool demo of crime scene software. You put on a VR helmet, you upload a whole bunch of pictures from a crime scene, and the program will fill in the gaps and give you a 360-degree view of what's going on. But the courts don't really know which parts of that are interpreted versus actual data. In this application, you can actually click on a specific piece of evidence and find the link on Filecoin to the data set that evidence is based on, via the content identifier, with a verifiable trace back to the source, which is super, super cool. So AI is really cool, but there are a lot of problems like that, and I think verifiability is going to be a key part of the solution; I think the Filecoin and IPFS solution is the right one. I want to jump in with one last example, which is around content authenticity. I spent my career in trust and safety before Filecoin, and one of the big challenges we're seeing today is a lot of photos being shared online in times of war. As you know, right in the backyard of Istanbul there's a lot going on in neighboring countries, and a lot of people want to make sure they can verify the source of information. So Starling, along with Numbers Protocol, which was presented earlier today, has built an application that allows you to verify that photos came from the point of upload, and we used that same technology in Ukraine to verify that photos highlighting the Ukraine war could one day be used in the International Criminal Court. There are so many examples of this. Another way of thinking about it is also the large amount of data that is generated with AI. I was talking to a friend who ran a pretty successful 3D printing company, and he said, in around a year I'll have 10 terabytes of storage I need; I would love to use Filecoin and figure out how to store that 
somewhere. He's now trying to combine 3D printing with generative AI to explore different kinds of models, but at the core all of that requires data to be stored somewhere, including a lot of archival data. We definitely think we win that market, on cost and other dimensions, for AI use cases. We'll actually be giving a talk about AI on Filecoin tomorrow during Filecoin Day, which runs from 9 a.m. to 5:30 p.m., so definitely make sure to check that out.

We have two more questions and then we're going to wrap. Decentralized systems depend on a strong user base; is anything being done to systematically educate users in the Web3 space, not just developers but everyday internet users?

Thank you. There's a certain amount of education you owe a user: for someone who's using an application, you need an interface that teaches while helping people get their work done very efficiently. One of the mistakes people have made in the Web3 space is wanting every user of an application to become a zero-knowledge expert and see every step of the journey all the way through. It becomes inefficient, confusing, and alienating for folks who just want to get their thing done; understand where they're coming from as users. Where I've seen this work best is progressive discovery, progressively looking under the layers of the system to get more and more value out of it over time. For example, in web3.storage you can very easily upload your first image, your first NFT, or your first file; then it reveals your Filecoin storage deal; then you can click on that, view your content on an IPFS gateway, and go understand which storage providers in the network are storing your content. I think there are more of those kinds of informational moments, small ways that we condense a lot of information
to make it accessible, pull out the jargon, and instead give people the context they need as users. But there's a progression there, right? We want more and more people to be able to enter these amazing projects and open source communities where they want to contribute, but we also understand that we need to reach many, many users who are not going to be part of that base layer, and the best way we can do that is by building fantastic applications and tools that solve their problems without requiring them to be part of that community. So my default is: we should inform the community about how everything works; we should have learning conferences, good forums and documentation, and good opportunities for teaching and mentoring. And for users: build a really great application, then offer ways to learn more, like an informational pop-up, a click into the docs, or a visual of how things work under the hood that isn't required to get the work done.

All right, last question, from a few people in the group: some "I wish" statements, especially applied to the social structures of PL, the public-goods-funding structures of PL, or the coordination and collaboration parts. We'd love five or six "I wish PL blank" or "I wish it was possible to blank" statements; I'd love to hear your thoughts.

I wish there was a way to join the PL network as a member, expert, and leader in these ecosystems without necessarily being part of a company or research group. You might be a fantastic individual who can give a lot back to this network, and I think we should build for that.

I wish we could get office hours adopted by every team in the network, and by many, many teams outside the network as well, because they're just so powerful.

I wish that PL companies would adopt the very ancient practice of tithing, so that a small percentage of the money they make can go back
into the pool and we can build better public goods.

I wish that instead of outsourcing to development shops without skin in the game, we looked within the network and figured out how to share engineering resources outside of the PL monolith.

I wish we had a place where every team in the ecosystem could post about the opportunities they see as currently uncaptured: potential new projects, new products, new friction points that need to be addressed, so that other teams could capture them and potentially create new ventures around the opportunities being discovered by the crowd within the Protocol Labs network. There is a place for that: it's the forum, ProtoSphere. Cool.

I wish that people who care about similar movements and have similar goals really understood how they can work together, so that we're not all working in our individual silos.

I love the async and remote culture, but I wish instead of Lab Week we had Lab Month.

There's such a huge body of open source software that makes networks like this possible, not just here but in other crypto communities as well, and I wish to see PL join those other crypto communities in collaboratively funding the public goods that we all depend on. I wish to see us create integrations for multiple different public-goods funding mechanisms, like conviction voting among others, so people can bring those mechanisms to bear and fund the public goods.

I wish PL were more active on Twitter and had more aggressive marketing. Cool, with that, let's all go tweet. Thank you very much; let's go hang out at the party.
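The content-identifier trace-back the panel describes (click a piece of evidence, recompute its identifier, confirm it matches the record on Filecoin/IPFS) rests on the fact that a CID is derived purely from the content bytes. Below is a minimal stdlib-only Python sketch of that idea for a CIDv1 with the raw codec and sha2-256 multihash; the helper names are illustrative, and a real application would use an IPFS or multiformats library rather than hand-rolling the encoding.

```python
import base64
import hashlib

def cid_v1_raw(data: bytes) -> str:
    """Compute a CIDv1 (raw codec, sha2-256) for a block of bytes."""
    # multihash = <0x12 sha2-256><0x20 digest length><32-byte digest>
    multihash = bytes([0x12, 0x20]) + hashlib.sha256(data).digest()
    # CIDv1 = <0x01 version><0x55 raw codec><multihash>
    cid_bytes = bytes([0x01, 0x55]) + multihash
    # multibase base32: lowercase, no padding, 'b' prefix
    return "b" + base64.b32encode(cid_bytes).decode("ascii").lower().rstrip("=")

def verify(data: bytes, expected_cid: str) -> bool:
    # Anyone holding the bytes can recompute the CID; any tampering changes it.
    return cid_v1_raw(data) == expected_cid

evidence = b"photo bytes captured at the source"
cid = cid_v1_raw(evidence)
print(cid, verify(evidence, cid), verify(b"altered bytes", cid))
```

Because the identifier is self-certifying, a court (or anyone else) never has to trust the party presenting the file, only the bytes plus the recorded CID.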