So, we're going to start with our first talk today this morning by Raymond Chen, and the title of his talk is Accelerating Academic Research with Impact Certificates. Raymond, take it away. All right. Thank you very much. It's lovely to be here. Good to see people strolling back in from the break. So my name is Raymond Chen, and today I'm here to represent the Hypercerts Foundation. The mission of the Hypercerts Foundation is to find new forms of funding for public goods like science, and today I'm going to talk a little bit about some of our latest thinking around rewarding positive scientific impact with impact markets and impact certificates. As a former academic researcher, I was at UC Berkeley and was a faculty member for a short while at the University of San Francisco, and I came to the Hypercerts Foundation with a pretty simple question: how are we going to get an order of magnitude more funding towards basic science and research? Because if we're going to get to the future that we all want, there are all sorts of fundamental questions that we need to answer, whether it's renewable energy, AI safety, secure infrastructure, or curing diseases. What's core to all of these problems is that there are fundamental questions we need to answer about how the universe works and how we're going to get there. Capital markets are pretty good at funding problems when there's a clear path to revenue ahead of you, but they're not fantastic at solving these long-range problems. For things like this, we're going to need some form of public-private coordination. There's all sorts of evidence showing that we are just not sufficiently investing in our future. This is a graph of funding over time broken out by different sources, and what you see here is that US GDP growth has been super-linear in the last few decades.
Our investments in basic science and research, by contrast, are sub-linear. And while industry's share of investment in science and research is growing, it is not growing in proportion to the growth we see in the economy today. So, okay, what are we going to do about this? How can we create markets that might incentivize private money to invest in longer-term science and research? It is true that we need better civic engagement and political efforts to drive more funding into government agencies. It is also the case that we should think about how to better allocate capital resources in philanthropy. But today I'm going to focus on a particular hypothesis that we have at the Hypercerts Foundation around building markets around impact. That starts with measuring, more methodically, what I call scientific impact indicators. It includes creating a transparent, open-source, collaborative database of verified impact. It includes driving experiments in new funding mechanisms. And ultimately what we want to arrive at is a rich, heterogeneous ecosystem of impact markets, where we can fund things for the future. This might look a little familiar to you; it's pretty similar to how carbon markets work today, but we're imagining it in the space of more general-purpose scientific impact. I'm not going to have enough time to go into detail on exactly how every single one of these mechanisms works, but I'm here today because I'm excited to talk to all of you, and if you want to go into more detail, please come find me after this talk. The bedrock of all of this starts with measurement. The last panel did a fantastic job of explaining some of the limitations here, but measurement is the bedrock of creating an economy like this, and we need to move beyond citations towards a rich ecosystem of impact indicators.
Similar to how countries today are starting to move past GDP as a single indicator of success towards baskets of indicators that demonstrate the health of an economy, we need something similar in science. While citations are fantastic at measuring how ideas permeate the academic community, they are not a measure of how ideas in science impact government or industry. For many fields, patents are not a perfect indicator of that either, though they are one of many indicators you could use. And ultimately, citations are not a measure of how science impacts the broader society. Let me give a simple example from computer security, the space I come from. If I imagine what impact indicators look like in computer security, it would be something like: what vulnerabilities are being disclosed at these conferences? How long does it take to fix them? How severe are they? We have standardized measurements for classifying the severity of a vulnerability. When it comes to cryptosystems or distributed systems, we can measure the time it takes to bring them to market, and their reach once they get there. And we can also potentially measure how different this is from the counterfactual: how much of a disruptive change have we made in people's thinking? Imagine if we actually had these impact indicators spanning decades. How would that fundamentally change how we do scientific funding policy today? Conferences and journals hold outsized influence on how we think about reputation, and I think there's a lot of opportunity to collaborate with conferences and journals to do more impact measurement.
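To make the computer-security example above concrete, here is a minimal sketch of what a "basket" of impact indicators might look like in code, as opposed to a single citation-like count. The field names, weights, and aggregation choices are illustrative assumptions, not a real measurement scheme; the only standardized piece is CVSS-style severity scoring, which the talk alludes to.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch: a basket of security-impact indicators for work
# disclosed at a conference. Field names and aggregation are illustrative.

@dataclass
class Disclosure:
    cvss_score: float      # standardized severity (CVSS-style, 0.0-10.0)
    days_to_fix: int       # time from disclosure to vendor patch
    deployments: int       # rough reach once mitigations ship

def indicator_basket(disclosures: list[Disclosure]) -> dict[str, float]:
    """Summarize several indicators instead of one citation-like number."""
    return {
        "mean_severity": mean(d.cvss_score for d in disclosures),
        "mean_days_to_fix": mean(d.days_to_fix for d in disclosures),
        "total_reach": float(sum(d.deployments for d in disclosures)),
    }

basket = indicator_basket([
    Disclosure(cvss_score=9.8, days_to_fix=14, deployments=50_000),
    Disclosure(cvss_score=5.4, days_to_fix=90, deployments=2_000),
])
```

A real basket would track these indicators longitudinally, per field, which is where the "spanning decades" point above comes in.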
But ultimately, I think we're going to need more support from the funding agencies themselves to measure this impact and attract more capital in. I also want to quickly highlight another project we're working on at the Hypercerts Foundation, Open Source Observer. Open source software is another public good that is heavily underfunded, and we're trying to create automated mechanisms in a space where measurement is significantly easier, because you can see the entire software supply chain in an open way, to automate some of the impact indicator measurement. Once we have better impact measurements, we're also working at the Hypercerts Foundation on an open-source data model where we can transparently, traceably, and transferably record the information that comes from these impact measurements. We need to work together to build an open-source database of impact. Ultimately, we want to use this to better communicate impact to funders and potentially attract more funding into industry and into science, to better communicate to policymakers what's working and signal what may not be, and, perhaps even more importantly, to show society the impact that science is having on the broader world. What we're hoping is that by capturing impact in standardized certificates, we can create a powerful new feedback loop that doesn't exist today as well as it could. We have contributors generating work that generates impact. We have evaluators evaluating that work and making impact evaluations against these impact claims. That, hopefully, can fuel the feedback loop between funders and projects, so that funders can better steer their money towards things that are going to generate impact. Potentially, these could also serve as vehicles for international collaboration, where countries can coordinate around shared goals and shared outcomes.
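The claim/evaluation feedback loop described above can be sketched as a simple data model: contributors make an impact claim, independent evaluators attach assessments to it, and funders read off an aggregate. The class names, fields, and the plain-average aggregation are assumptions for exposition, not the Hypercerts Foundation's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an impact-certificate record with attached
# evaluations. Names and the averaging rule are hypothetical.

@dataclass
class ImpactClaim:
    work_id: str             # identifier for the underlying work
    contributors: list[str]  # who did the work
    scope: str               # what impact is being claimed

@dataclass
class Evaluation:
    evaluator: str
    score: float             # evaluator's assessment of the claim, 0.0-1.0

@dataclass
class ImpactCertificate:
    claim: ImpactClaim
    evaluations: list[Evaluation] = field(default_factory=list)

    def consensus(self) -> float:
        """An aggregate funders might use to steer money."""
        if not self.evaluations:
            return 0.0
        return sum(e.score for e in self.evaluations) / len(self.evaluations)

cert = ImpactCertificate(
    claim=ImpactClaim("doi:10.0000/example", ["alice", "bob"],
                      "disclosed and patched TLS flaw"),
)
cert.evaluations.append(Evaluation("lab-a", 0.9))
cert.evaluations.append(Evaluation("lab-b", 0.7))
```

The transferability mentioned above would mean such records can move between databases and funders without losing their provenance.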
What we're hoping is that once we have this bedrock of impact measurement and impact certificates, we have a fertile testing ground for all sorts of new market mechanisms we can test on top of them. We have plenty of grants today, but imagine a world where we could have better bounties, potentially more fine-grained bounties, potentially recurring bounties. Then we start to enter the world of retrospective funding. We can also experiment with novel mechanisms like advance market commitments or dominant assurance contracts, all sorts of different mechanisms that can help ensure we move towards the forms of impact we care about. We can learn from adjacent fields as well. In social services, we have social impact bonds, where we're starting to test whether we can create financial incentives for social goals. Similarly, in health care, we have capitated payment models. We can learn from these adjacent fields and see if we can create markets that incentivize positive social outcomes. Ultimately, by experimenting with markets for impact, we want to see if we can get more money into public research, generate better alignment between the scientific community and industry, and potentially have better measures of ROI: not necessarily fiscal measures, but what is the return that we're getting on our investment? The goal here is to create incentive-compatible mechanisms where institutions and researchers get rewarded for generating work with more impact, in ways that are compatible with the existing incentive mechanisms we see in science and research. So I'm going to list a number of open problems. We won't have time to go into detail on all of them, but these are the types of things I would love to talk to you about during the conference.
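Of the mechanisms listed above, retrospective funding is the simplest to sketch: a fixed pool is committed up front and, after impact has been evaluated, is split among projects in proportion to their verified impact scores. The proportional-split rule and all names here are illustrative assumptions, not a specific program's payout formula.

```python
# Minimal sketch of retrospective funding: split a fixed pool across
# projects in proportion to their evaluated impact. Illustrative only.

def retro_fund(pool: float, impact_scores: dict[str, float]) -> dict[str, float]:
    """Return each project's payout, proportional to its impact score."""
    total = sum(impact_scores.values())
    if total == 0:
        # No verified impact: nothing is paid out.
        return {project: 0.0 for project in impact_scores}
    return {project: pool * score / total
            for project, score in impact_scores.items()}

payouts = retro_fund(100_000.0, {"proj-a": 3.0, "proj-b": 1.0})
```

The appeal of this shape is that funders pay only for impact that actually materialized, while earlier-stage investors who bought the impact certificates carry the risk.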
First of all, top of mind for me: I do not want to create a mechanism that crowds out non-market values. Scientists, myself included, and I'm sure many of you can relate, do not go into this space to get rich, and I want to make sure we can maintain a culture of public good and open access. I also don't think that impact markets are going to be a cure-all for all forms of science. I do not want to create a system that skews overly towards fields that have clearly measurable impact; I think there's a way we can design this to be more balanced. There are also going to be limitations in our ability to foresee what constitutes a desirable outcome or measurable impact, so I also don't want to be short-sighted in a very different way. And from an organizational and logistical perspective, this is going to be quite difficult. This is going to be a big lift for our community, which is to say that impact measurement, as they mentioned during the panel, is expensive at scale. And there are going to be fundamental problems around how we attribute the right impact, especially if we're measuring over decades. But if you're interested in these types of problems, please come find me. I think there's a lot of opportunity here to innovate. Thank you. Great, so we've got about two minutes for questions. Anyone have questions? Jordy Goodman, Boston University School of Law. I have a question about bias, and how you're going to incorporate bias, especially gender bias or race bias, into the impact factors that you're going to use for this. Yeah, that's a great question. One of the things that I want to embody in the system is, like I said earlier, heterogeneous baskets of different value systems encoded in different impact evaluators and different impact indicators.
I don't see a world where every funder agrees on exactly how to weight every scientific impact indicator. I think there will be communities and funders that weight things differently, and one of the things we should absolutely be measuring is goals around DEI, et cetera. So I think it's going to be up to the funder ultimately, but my hope is that we can create indicators that broadly measure all of our value systems. The goal of an impact indicator is not to strictly measure financial outcomes; the goal is to embody social outcomes, or other types of outcomes we care about as a society, that are not strictly measured in terms of cash flow, which is what markets are designed for today. So I totally agree with you, and I think there's a rich space for innovating on different types of indicators. We've got time for just one more quick question and maybe a very short answer. OK. Thank you very much for this talk. This question is not to derail your idea, but just to wonder how this example would fit in. In 1919, we sailed ships to India, looked at the sun during the solar eclipse, and tested general relativity. That was never intended to enable GPS 100 years later, which now saves millions of lives. So how do you see these very unconnected things, which have very long-term implications that we don't realize now, working in your system? Yeah, I completely agree. I don't have a clear answer to this. I think this is going to be a longer conversation, so I'm happy to take this offline if that's easier. But ultimately, I think that speaks to what I was talking about earlier with respect to attribution being challenging. And I think there are impact markets for impact markets, where you can create secondary markets as well, depending on how the flow of influence spreads through science. But let's talk about that after. Thanks. Thank you.