Welcome, and thank you for joining us. My name is Karola Kirsanow, and I'm a research program manager at Protocol Labs Research. As Jason mentioned, this panel is about the role of tools in accelerating scientific discovery. There's an ongoing debate as to whether the pace of scientific discovery is slowing. The data on this is inconclusive; it may be that the increasing complexity of research at the scientific frontier creates a perception of slowing progress. In this panel, we suggest that our tools and incentive systems must be re-engineered to match the complexity of the scientific questions we face. As new generations of scientists build upon the work of their predecessors, the scientific frontier becomes denser, more complex, and more deeply entangled. We propose that tools for discovering and navigating complex thickets of scientific evidence, and tools for transferring knowledge between individuals, teams, and disciplines, are especially important. As scientists cope with the increasing complexity of the knowledge frontier by studying longer, specializing more, and working in teams, tools that assist in knowledge discovery and in knowledge transfer between individuals and generations can accelerate scientific innovation. These include tools for discovering and mapping the knowledge frontier, tools for sharing the current position of the frontier in a discipline or a suite of disciplines, and tools for supporting, managing, and incentivizing research at the frontier. We think that one of the roles of science outside of academia is to develop and experiment with these tools and funding models.

So this panel brings together researchers from academic institutions, not-for-profit research organizations, and industry R&D labs, and we'll talk today about some current implementations of these technologies. We'll start with Ben Reinhardt of the Astera Institute, with a discussion of institutional and legal structures as tools for accelerating innovation and discovery. Next, Joel Chan of the University of Maryland's College of Information Studies will present tools for creating, navigating, and sharing composable modules of provenanced scientific evidence. Then I'll present some lessons we've learned at Protocol Labs Research about technological roadmapping as a tool for incentive alignment and managing information flows. Next, Jungwon Byun of the research lab Ought will discuss applying advanced language models to automate research workflows. And David Lang of the Experiment Foundation will round out the session with a discussion of prototyping new science funding models to accelerate research. We'll keep the talks short, about five minutes each, and we've left plenty of time for discussion afterward. We look forward to chatting with you soon. And now, on to Ben.

All right, thanks. Let me just share a quick presentation. There we go. I'll go through this very quickly. What I want to suggest is that we should expand what we think of as tools: we usually think of tools as software, or as things you use with your hands. I want to suggest that institutional and legal structures are also tools for accelerating invention and discovery, especially in the metascience context. Right now, as far as I can tell, most metascientists could be described as either naturalists or theorists: they go out and see what's going on in the world.
They watch what's going on, they make these amazing observations, and then they suggest theories around them. But if you look at how many other fields of science are conducted, there's a piece missing here, and that's the experimental piece. So I want to raise the question of how we do experimental metascience. To some extent there are experiments going on: you can look at natural experiments, or you can convince some agency somewhere to implement a new policy based on some theory of metascience. But I would suggest that this is fairly limited. If we step back, metascience experiments are institutional experiments; the two are very tightly coupled. Throughout the history of science we can see a progression in the experiments people can do based on new tools — this is why some ridiculous number of Nobel prizes have been given out for new microscopy techniques — because you can't do new experiments without new tooling to either conduct the experiment or measure its results. So what I want to suggest is that creating new institutions is a way to actually do these metascientific experiments.

How do we think about these experiments? When you boil it down, institutions are incentives wrapped up in a coordination mechanism. There are many different ways you can take this, and perhaps we'll touch on some of them later in the panel. But digging one level deeper, I want to suggest that legal structures are a tool to tune incentives. It's this boring thing that people often don't like to think about, but there's a shocking amount of flexibility in the legal contracts you can write, and at the end of the day those are coordination tools. So not only are institutions a way to do experiments; legal structures are a way to do that as well.

One example I'm working on in particular is an institutional experiment where we're building a hybrid nonprofit and for-profit, with different funding structures and different legal structures, to test the hypothesis that there is a range of research that isn't being pursued right now and that it might be possible to do better. We're going into this with a ton of hypotheses. That gets to the question of how to actually do institutional experiments, and at the end of the day it's the same way you do any scientific experiment: you go in with hypotheses, you use a lab notebook, and you document it really well. So what I broadly want to suggest is the idea that new institutions are a tool for doing metascience experiments, particularly the legal structures of those institutions. For more information about what I'm working on in particular, you can go to that link. There are also many other institutional experiments going on that perhaps need some metascientists to come in and study them — maybe someone in the audience — so I'd point you toward others making these same arguments, and also toward the things David is about to talk about. This was meant to be a little teaser; if you're interested, please reach out to me.

Awesome, thanks Ben. And now on to Joel. Let me make this window visible, and let's jump in.
So, the work I'm going to talk about today is aimed at the goal of removing barriers to effective synthesis, so that any scientist can ask better questions, faster — with emphasis on the former. An example of what I mean by synthesis is something like this: Esther Duflo, the recent Nobel Prize winner in economics, credited a masterful survey of the literature, from a book chapter she happened to receive, that laid out the open problems and the path forward through a space of questions in development economics, and that was instrumental in guiding her path to applying her innovations in experiments. I want more of this.

The problem, I'm going to argue, is that we have the wrong unit of analysis in our common scholarly communication infrastructure, and that puts up a barrier to synthesis. What do I mean by that? If you think about the information tasks involved in synthesis — when you're looking for ideas, when you're looking through the literature trying to make sense of a field of study — what you care about ultimately are ideas: claims, arguments, theories, and findings, and the discourse relations between them. What supports what, what opposes it, replications, contradictions between these ideas. Instead, we get documents, metadata, and article types as the main units of analysis that we can search for and manage in our information software. That's not great, and you see the symptoms everywhere. What I'm showing here is a graph of the breakdown of the different tasks required to complete a systematic review. It takes a long time, and pay attention to the specific fact that about half the activities on there are essentially working around the broken communication infrastructure. You spend a lot of time and a lot of resources searching for the thing you actually care about — papers with juicy titles don't always have the thing that you want — and then you have to download them, read them, screen them, extract the data, and only then do you get to do your synthesis. A lot of overhead that is arguably unnecessary. At a wonderful workshop on the automation of systematic reviews over the summer, I made a spicy meme in response to a very good discussion of the fact that a lot of the innovation in systematic review tooling is essentially trying to compensate for the broken scholarly communication infrastructure. It's not too controversial, but we'd like this panel to be spicy.

Fortunately, it's not that we don't know what could be better. There's been a lot of work on this idea of — I'm going to call them discourse graphs — where the units are not documents but statements, something like a claim or a piece of evidence, and the relationships between them are not coarse citations but discourse relations like support and opposition. It can look roughly like this. It's not always graphical, but you can think of it as an underlying graph where the units are discourse units, like statements, and the relationships are not necessarily causal or ontological but discursive. There's been a lot of work unpacking the potential of this for supporting the kinds of information queries we would actually like to run, and for getting straight to the task of synthesis that we care about. And there's been a lot of wonderful technical and infrastructure work to build the warehouses we can use to house these things and store nanopublications, micropublications, and so on.
The problem we're facing right now, in my view, is that they're still mostly empty. There's an evocative phrase here: at the moment there's no more than a puddle, even though what we want is an ocean. We would like a lot more of this so that we can support the synthesis we want. So I think of this as an authorship bottleneck, as opposed to a fundamental technical or standards problem. We have a pretty good idea of what we want to have; we just haven't figured out how to fill it. People have tried lots of specialized curator models — crowdsourced efforts, or trained specialists — and these tend to be very accurate, but they're expensive and slow, they're hard to sustain, and they require a lot of funding to keep going. On the flip side — I do some applied machine learning on the side as well — text mining has potential down the line, but for tasks that require a lot of accuracy I don't think we're quite there yet, and I don't think it's safe to run it autonomously by itself; humans have to be involved somehow. So we still have to figure out this incentive problem.

The concept I want to plant in your heads is this idea of scholar-powered contributions, where we integrate contributions to a different kind of infrastructure into what scholars are already doing in their usual practices. You can think of this as a really fun UX design problem: how do we build tools that integrate into what people are doing anyway, so that we don't have to use the stick? One basic implementation that we're betting on right now is a three-part idea. First, we build tools that hook into your annotation and note-taking tools so that you can build a personal discourse graph — not for some unknown other, but for yourself — and the tool improves your own thinking with a discourse graph. Then we make it seamless for you to share or federate some parts of it with others you know — again, not some distant other. And then over time, as these grow and we build momentum, we can start to aggregate them into a decentralized commons of discourse graphs. That's the rough game plan.

Today I'm going to quickly show you a proof of concept of what this could look like right now. I think there's a slightly better chance that in this crowd you may have heard of the niche tool Roam Research, but we'll see. It's essentially an outliner crossed with a wiki, and you can use it somewhat like a Google Doc, where you can just write things. So let me quickly show you what it might look like. The idea is to take notes close to how you normally would. You can imagine saying: I have this question about the susceptibility of young children to COVID-19, and I have a bunch of sources I want to make sense of — how do we synthesize our understanding, given that there's no RCT you can run to really answer this question? So you make your way through the papers. You can take notes on a particular paper, for this question, the way you normally would: what are the aims of this paper, what did they do — you can take notes on the data collection and aggregation techniques, the setting, and so on. And then you get to the contribution, which is the thing we actually care about: what's the result?
The main result here is that this meta-analysis estimates roughly a two-times difference in susceptibility. It's not definitive on its own, but it does contribute to our thinking. So we're going to make this a building block. With this extension, I say: let me mark this as a piece of evidence. Now it's a formal discourse graph node. Okay, what can we do with it? Let's say we are synthesizing. Instead of just a bunch of sources, we can now start to reference pieces of evidence and claims. Each of these things here is not a citation or a paper but an actual piece of evidence. You can add that to this emerging synthesis and say, okay, there's also a meta-analysis that estimates lower susceptibility, and then reference it. As you draw connections between all these elements, you're building out your formal discourse graph.

What do we mean by a formal discourse graph? In the background we're actually building a labeled property graph, which you can leverage. You can go into a playground and run — actually, let me just do it in front of you. If you've seen anything like Cypher, which works on labeled property graphs, this is essentially a graph query: find Evidence where the Evidence informs this Question. You can query this. All of these come back, and you can say: now, if we're interested in this question, we can find the evidence instead of the papers. If you want, you can pin them, draw them on the map, and start to use them. And if you like, you can then export this into a Neo4j-compatible CSV, Markdown, JSON, and so on, so you can share it with someone else interested in the same question, or with a student you want to onboard instead of dumping a pile of papers on them. Okay, so that's the proof of concept, super quick.

There are some key intuitions to highlight. First, we want to integrate the formal into the informal. Notice that nobody is writing complex XML or directly coding ontologies or graphs; they're writing notes much as they normally would — in software that is still unfamiliar to many, but within which this is normal. You can use indentation to indicate relationships, and by referencing formal units you translate immediately useful notes with implicit discourse structure into a reusable, shareable, explicit discourse structure. The other key intuition is that we're not asking people to do this for no immediate benefit: the queries actually help people track all the evidence they're trying to use and piece it together to help their own thinking, and they can share it with someone they know rather than publishing it to the open web for unknown others. Those are the key intuitions that we think will help this gain momentum. Beyond these queries you can also imagine, because it's a formal graph, doing computations on it: you can use it to find evidence that supports or opposes a claim, compare support and opposition for a claim across the graph, run graph queries, and things like that. So we think this is a promising building block for moving beyond the PDF as the unit of scholarly communication. We can start by just facilitating collaborative synthesis, and as we gain momentum and scale up, we can do decentralized, federated publication to data streams where people can subscribe to graph queries — for example, I want to know any time Jungwon has added a piece of evidence that opposes some claim that I believe. We'd be able to share things that way, in decentralized, peer-to-peer ways that respect the contextual and contentious nature of all of this. (A minimal sketch of what the underlying graph and query could look like follows below.)
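To make the labeled-property-graph idea concrete, here is a minimal sketch in Python, using networkx as a stand-in for the graph database the demo exports to (the extension produces Neo4j-compatible files). The node contents, the citation placeholders, and the two query functions are illustrative assumptions, not the actual Roam extension or its data.

```python
# Minimal sketch of a discourse graph as a labeled property graph, using
# networkx as a stand-in for a graph database such as Neo4j. Node contents
# and source references are placeholders, not data from the actual demo.
import networkx as nx

g = nx.MultiDiGraph()

# Discourse units (statements) are the nodes; source papers ride along as metadata.
que = "QUE: How susceptible are young children to COVID-19?"
evd1 = "EVD: Meta-analysis estimates roughly two-fold lower susceptibility in young children"
evd2 = "EVD: Contact-tracing study finds comparable infection rates across age groups"
clm = "CLM: Young children are less susceptible to infection than adults"

g.add_node(que, label="Question")
g.add_node(evd1, label="Evidence", source="<paper A reference>")
g.add_node(evd2, label="Evidence", source="<paper B reference>")
g.add_node(clm, label="Claim")

# Edges carry discourse relations, not bare citations.
g.add_edge(evd1, que, relation="Informs")
g.add_edge(evd2, que, relation="Informs")
g.add_edge(evd1, clm, relation="Supports")
g.add_edge(evd2, clm, relation="Opposes")

def evidence_informing(graph, question):
    """Analogue of the demo's query: Evidence nodes that Inform a Question."""
    return [u for u, _, d in graph.in_edges(question, data=True)
            if d.get("relation") == "Informs" and graph.nodes[u].get("label") == "Evidence"]

def support_vs_opposition(graph, claim):
    """Compare support and opposition for a claim, as mentioned in the talk."""
    edges = graph.in_edges(claim, data=True)
    return {"Supports": [u for u, _, d in edges if d.get("relation") == "Supports"],
            "Opposes":  [u for u, _, d in edges if d.get("relation") == "Opposes"]}

print(evidence_informing(g, que))
print(support_vs_opposition(g, clm))
```

Loaded into Neo4j, the equivalent lookup would be a short Cypher pattern match over the same Informs/Supports/Opposes relations; the point is only that once notes become typed nodes and edges, these questions become queries rather than re-reading.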
Okay, so that's it — super fast. But hopefully the key points to keep in mind are that we need a different infrastructure if we want to accelerate synthesis, that discourse graphs can help, that it's really hard to populate them in a sustainable way, and that we now have some pieces that we think will help. That's it.

That's really cool. I'm going to share my screen now. Excellent, I hope that works. I'd like to talk about technological roadmapping as a tool for accelerating innovation. The problems that face us at the scientific frontier are complex, and solving these complex problems will require collaboration among cognitively diverse stakeholders and across scientific disciplines, institutions, and indeed entire economic sectors. It's going to require clearly defined milestones, defined using appropriate figures of merit, that orient these stakeholders to the problem space and align them toward building appropriate solutions. And it's going to require the transfer of knowledge across disciplinary boundaries, institutional boundaries, and incentive systems as science becomes technology. This is a coordination problem, and also a communication problem.

A technological roadmap is a tool for solving scientific coordination problems. A roadmap is a comprehensive, goal-centered model that describes when significant R&D milestones in an area will be reached and outlines the relationships between those milestones. It's centered around a potential breakthrough innovation: a paradigm-shifting technology that influences multiple aspects of human experience, like agriculture, writing, radio, the internet — things of that scale. Roadmaps concentrate expert attention on a scientific problem. A roadmap empowers researchers in fields relevant to the goal state to make informed decisions about how to shape their area of interest to reach that goal. So I see an effective roadmap as outlining the problem space and highlighting promising routes toward a solution. It should enable distributed path-finding, where participants in the technology system can balance pursuit of their own local incentives against a broader view of the system and its global incentives and optima. The roadmap should be a living document, a process that runs on people, and a large part of its usefulness comes from the drafting process: from identifying and assembling stakeholders, from airing goals and concerns, matching interests and capabilities, sharing latent knowledge, and building consensus around key problems, milestones, and metrics. Structured planning and identifying dependencies can highlight useful connections, and also possibilities to re-architect or recombine existing capabilities.

So how do we build a roadmap? Broadly speaking, first, we start with a broad goal that can be tracked using at least one figure of merit — measurable benchmarks like resolution or clock rate. Second, we build our community: we see what we can do and are doing now in the area, and find relevant stakeholders to invite to the process. Third, the roadmap architects identify milestones — pivotal developments in the evolution of the technology that can be identified using the state of our chosen benchmarks. Fourth, the roadmappers reassess the current state of the art according to the chosen metrics and criteria, and then we loop and layer steps two through four. (One way to picture the underlying model as a simple data structure is sketched below.)
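As a way to picture what that goal-centered model could look like in software, here is a minimal sketch of a roadmap data structure: a goal, figures of merit, and milestones linked by dependencies, with a dependency-respecting ordering. The field names and the toy numbers are my own illustrative assumptions, not an existing Protocol Labs tool; a plain-file representation like this is also easy to keep under version control, which connects to the point about distributed participation below.

```python
# Minimal sketch of a roadmap as a goal-centered model: figures of merit,
# milestones with dependencies, and a dependency-respecting ordering.
from dataclasses import dataclass, field
from graphlib import TopologicalSorter

@dataclass
class FigureOfMerit:
    name: str                 # e.g. "resolution" or "clock rate"
    unit: str
    current: float
    target: float

@dataclass
class Milestone:
    name: str
    figure_of_merit: str      # which benchmark this milestone moves
    threshold: float          # benchmark value that marks the milestone as reached
    depends_on: list[str] = field(default_factory=list)
    stakeholders: list[str] = field(default_factory=list)

@dataclass
class Roadmap:
    goal: str
    figures_of_merit: dict[str, FigureOfMerit]
    milestones: dict[str, Milestone]

    def ordering(self) -> list[str]:
        """Milestones in an order that respects dependencies (an aid to distributed path-finding)."""
        deps = {name: m.depends_on for name, m in self.milestones.items()}
        return list(TopologicalSorter(deps).static_order())

# Toy example with made-up numbers, purely for illustration.
roadmap = Roadmap(
    goal="Low-cost benchtop instrument at 10x today's resolution",
    figures_of_merit={"resolution": FigureOfMerit("resolution", "nm", current=50, target=5)},
    milestones={
        "new-optics": Milestone("New optics design", "resolution", 25),
        "integration": Milestone("Integrated prototype", "resolution", 10,
                                 depends_on=["new-optics"]),
        "field-trial": Milestone("Field trial with partner labs", "resolution", 5,
                                 depends_on=["integration"]),
    },
)
print(roadmap.ordering())  # ['new-optics', 'integration', 'field-trial']
```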
As we do that, the roadmap community grows with each iteration. Because roadmapping is a community exercise, it's important to build a user interface and a user experience that allow users to easily grow and contract the set of technological milestones. The interface should allow stakeholders to populate the model with current data, to break the problem space down into discrete milestones, to identify leverage points with particularly high impact in the problem system, and to drive distributed coordination around those problems. Incorporating version control can enable distributed participation in roadmap creation and support a dynamic maintenance and update schedule. We want to create reusable infrastructure for roadmap creation to enable decentralized, peer-to-peer incentive alignment between stakeholders. The goal is to make coordination easier, not just between researchers, but between researchers, funders, industry teams, ethicists, technologists, industry R&D labs — anybody who could potentially be in the roadmap community. And since Joel brought up the topic of spicy questions: would it be possible to self-organize a grand scientific endeavor on the scale of the Apollo program or the Human Genome Project — entirely self-organized? What would the technology to support that kind of project look like? I think part of the technological kit would be a roadmap, and I look forward to discussing this idea with you. Thank you.

All right, I think I'm next. Hi everyone, I'm Jungwon, a co-founder of Ought. We're a machine learning research lab, and we're building Elicit. I'm mostly going to give a demo of how Elicit works — it's a research assistant that uses language models to automate parts of the research workflow. I'll show you what it's like, and then I'm happy to answer any questions about Elicit or about working with language models. This is the homepage of Elicit. There's a task dropdown menu where you can choose from a bunch of different research tasks; I'm going to focus on the literature ones we're experimenting with now, but there are others you can use, like finding experts, labeling datasets, brainstorming, and so on. You can go to elicit.org now and sign up for an account — if you mention metascience, I'll give you access. I'll choose this literature review task, where the goal is to help someone quickly ramp up on a domain they don't know very much about. I'll try a question first: the researcher can just ask a question that they're interested in, so I'll do, "How can we improve diversity in scientific disciplines?" — I think this was a topic of various talks at this conference. If you want, you can also give a few examples to guide the model a little more. Then, basically, we use language models to generate additional questions that might help you answer the overall question — to help you flesh out exactly what it is you're looking for — and then retrieve papers that answer those questions. Right now you get the results in a CSV; I have a formatted version of this in Google Sheets so you can see it.
So here are the questions the models have generated. My overall question was how we can improve diversity in scientific disciplines, and GPT-3 suggests thinking about how the proportion of women in scientific disciplines is changing over time, which factors affect diversity, what the major barriers are to women and minorities entering scientific disciplines — there's a long list. We also give you what the language model thinks is the relevance of each question to your overall question. And then we pull in different publications that address each topic. These are the titles of those publications — "collaboration experiences across scientific disciplines and cohorts" — and this one, you can see, is completely irrelevant: "what factors affect diversity and species composition of dry forests." This is where it's really important, as Joel and I think others have said, to think carefully about how researchers interact with these models and to build good systems for giving them feedback and iterating. And then we have publication links, so you can go through and read the papers, and some snippets with the relevant parts of the content. You can sort by citation count or by year — it's up to you.

Overall, the status quo for doing a literature review is fairly sequential: you go to Google Scholar, type in a query, sort the results, open a few PDFs, skim through the abstracts, refine your query — and as a person you can really only do one thing at a time. With these language models you can imagine deploying a bunch of research assistants to explore many different research directions, so that you as the researcher can maintain the high-level view and think at the level of: what exactly are the questions I really care about? If I were writing a one-pager on this, what would I want my sub-headers to be? And then you review all the publications in a more batched way. We're taking a process that people do one by one, manually, today, and trying to batch it so that you can evaluate across the whole set of relevant topics, research directions, and publications. (A rough sketch of that batched workflow is included below.)
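To make the batching idea concrete, here is a rough sketch of the workflow Jungwon describes: decompose a broad question into sub-questions, retrieve candidate papers for each, and review everything as one table. The helper functions are deliberately stubbed placeholders — they stand in for a language-model call and a scholarly search index — and none of this is Elicit's actual code or API.

```python
# Rough sketch of a batched literature-review workflow in the style described above.
# The two helpers are placeholders, not Elicit's real implementation.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    citations: int
    snippet: str

def generate_subquestions(question: str, n: int = 5) -> list[str]:
    """Placeholder for a language-model call that decomposes the overall question."""
    return [f"Sub-question {i + 1} of: {question}" for i in range(n)]

def search_papers(subquestion: str) -> list[Paper]:
    """Placeholder for retrieval against a scholarly search index."""
    return [Paper(title=f"Candidate paper for '{subquestion}'", year=2020,
                  citations=0, snippet="(relevant passage would go here)")]

def literature_review(question: str, sort_by: str = "citations") -> dict[str, list[Paper]]:
    """Batch the whole process so a researcher reviews one table, not one query at a time."""
    results = {}
    for sub in generate_subquestions(question):
        papers = search_papers(sub)
        results[sub] = sorted(papers, key=lambda p: getattr(p, sort_by), reverse=True)
    return results

table = literature_review("How can we improve diversity in scientific disciplines?")
for sub, papers in table.items():
    print(sub)
    for p in papers:
        print("  -", p.title, f"({p.year}, {p.citations} citations)")
```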
From here, there's a lot of work left to be done — this is still early. Ultimately we want to get to a point where we minimize the amount of content someone has to absorb and maximize the insight per unit of reading time. There are a few different directions we're thinking about. One is getting better at generating these questions: right now we generate about 50 for you by default, and that's really a lot. Ideally we'd give you back a much smaller number of questions that carry maximal insight, so that if you were able to get answers to each of them, your overall question would be answered, or you'd have gotten about as much as you could toward it — trying to create a more formal decomposition is one of our goals. Another thing we want to work on is pulling relevant papers while also helping researchers evaluate how trustworthy those papers are, given that right now we're targeting people who are more generalist and are trying to learn about a new domain. A lot of the feedback we've gotten is: "I'm in a new domain; I don't know how to evaluate the reliability of these papers." We can show things like citation counts or other metadata to help them make that decision, but ideally we'd have some richer way of saying, here's how an expert in this domain would evaluate this paper — maybe that expert knows the top journals, or has other heuristics they would use. And lastly, we want to eventually get to a place where we can more directly answer the researcher's original question. There, I think what we want to work on is returning more intelligent snippets than what we're currently getting, which is a fairly naive approach. This is probably where, eventually, we'll just query all of Joel's annotated evidence and claims that directly answer the question — but until that happens, or in parallel, we'll try to actually read through the text of the paper and return the most relevant parts. We really want to minimize the experience of reading an entire paper and getting to the end thinking, that had nothing to do with what I was looking for.

So that's Elicit. Our overall approach is to iterate quickly with researchers, figure out the core building blocks of the work they're doing, take processes that are manual and ad hoc and batch them, and, along the way, explore whether tools like these can actually change how we think about research. Can we get to a point where the researcher is organizing their own team of research assistants — assistants that are maybe language models — coordinating the work, thinking at a high level about the structure, and then executing the work through some automated means?

Well, that is an incredibly tough group to follow — that was really inspiring; thanks for sharing, everyone. My name is David Lang, and I'm going to share my presentation right now. Can you see it? Okay, cool. The title of my talk is "Experimenting with new science funding methods." I took a rather unorthodox path into science: I'm not an academic scientist, I kind of came in through the side door, and I want to tell you that story so that's clear. My friend and I wanted an underwater robot so we could explore a cave, and we didn't have much money. At the time, the tools — remotely operated vehicles, ROVs — cost upwards of $50,000, and we just did not have that kind of money; this was 10 years ago. So we started this website, OpenROV, and began sharing our designs for what we thought could be a lower-cost underwater robot. Our design barely worked. But eventually we started building this community of folks who were excited to do this with us, and it got better and better, and we got incredibly lucky: a small grant from an ocean foundation who liked what we were doing, was enthusiastic, and gave us $7,000 to build 15 more prototypes. They said, we're just going to write you a check — just bring us the receipts. It was a really fast, small grant that got us going. And then we launched the project on Kickstarter, and it grew and became wildly popular.
And we started a company called OpenROV, and we sold thousands of these underwater drones all over the world. Now there's a host of new tools like this, many of them for less than $1,000 — you can go on Amazon and search for underwater drones; it's now a thing. As part of this adventure over the past 10 years, I got invited to a number of ocean conferences with the leading ocean scientists and technologists, who would have us come explain our history, and some of the leading scientists and marine technologists in the world were amazed at what we had done. This was so surprising to me, because we were just two guys in a garage, incredibly constrained, doing what to us seemed obvious: just asking the internet for help. Those interactions with scientists over the years brought me to the realization that scientists are actually incredibly constrained too, in the sense that they all have to publish papers constantly, and their whole careers are really dedicated to what fits into this really limiting unit of a publishable paper. It creates blind spots around tools, blind spots around science communication, and a host of other things.

So for the past 12 months I've been thinking about those experiences, and I set out to answer this question: is there a way, as an outsider, to help make science better? One of the things I've really hit on is this idea of improving funding mechanisms and dynamics, because it's the thing that so many scientists I encountered — senior scientists, early-career folks — mentioned to me as a really big problem. So I teamed up with my friends Cindy and Denny, who started the website Experiment 10 years ago. It's a crowdfunding website for scientists, and they've raised more than $10 million for more than 1,000 projects. And I created the Experiment Foundation, whose goal for the past 12 months has been to actually experiment with new ways that foundations, companies, and federal agencies can fund science. I really encourage you, if you've never been to Experiment, to go to experiment.com and see the variety of projects there — it's really an explosion of creativity in small-scale science; most of the projects on the site are less than $10,000.

I've tested a bunch of things: quadratic funding, some playing around with NFTs for science. But the one that's really stuck, and has been really interesting, is this idea of science angels — science angel investors, kind of micro-DARPA program officers. I got the idea from watching a metascience talk two years ago by Paula Stephan, the leading economist who studies the science of science funding. Carl Bergstrom was in the audience and asked: hey, we're here at Stanford, we're in Silicon Valley — what can science funders learn from Silicon Valley? That's actually a really interesting question, and so I mapped it out: the black is all the financial investors and mechanisms that exist for supporting ideas, and the green is the science funders, and you'll see that there's nobody who's really doing a good job in the really-high-risk-appetite, high-autonomy, small-amount space — really small, like $10,000 and below.
The NSF tries to do this, but there's some good research on where the bureaucracy still steps in the way. I think this kind of funding actually does happen, but most of it is what Paula Stephan calls "the back burner": PIs or professors who have budgets they tuck away from other grants, which they can use to support grad students in different directions. It's not ubiquitous — it does happen a lot, but it's not something that happens all the time — and so that was my goal: can we formalize this, can we create a program to create science angel investors? So I wrote an essay about it — you can read it at scibetter.com/angels — and I'm really excited to say that we've now gotten funding for up to nine science angels that we're going to launch this year. The way this is going to work is that each of those science angels will have a budget of $50,000, and they'll be able to put up to $5,000, really quickly, into any project they think is interesting. One of them is launching this week, and like I said, I think we'll have up to nine launching by the end of the year, so we're going to run this small, fast grants test in real time on Experiment. You're welcome to follow along if you're interested, and if you're potentially interested in participating, send me a note. It's going to be really fun, it's going to be interesting, and I think we're going to generate a lot of new insight and, hopefully, new data on alternative funding mechanisms — because I really do agree with what Ben said earlier: there's an incredible amount of interesting insight happening in the metascience community, but we need more experiments, we need more data, especially experiments that color outside the lines we've traditionally been playing within. So anyway, happy to answer any questions about this or any of the other projects we're doing, and I'm looking forward to the discussion.

This is the blind handoff to me, right? Exactly — the baton is passed. We've already had a lively discussion in the Q&A. I do have a question for the attendees: I don't remember whether we set the Q&A settings to display answered questions in addition to open ones. Can somebody confirm in the chat whether you can see both open and answered questions, or just open ones? Excellent. Okay, good. There's some good stuff in the Q&A that I don't want to go away — we've put some answers in there, but I don't want them to disappear. So I'll start by highlighting a fun question from the Q&A that I think cuts across a few of us — Karola, myself, and Jungwon — and that also touches on a potential point of disagreement between Ben and me, since it's been requested that we keep this spicy. Jennifer Byrne has asked a good question about how to deal with evidence that's been withdrawn, such as retractions — and I'll add that beyond formal retraction, we now have the ability to know when experiments haven't been replicated, or when questions have been raised about the usefulness of a study. I think it connects to how the three of us are thinking about mapping the moving frontier of areas of research, and how we deal with that.
And also: is it worthwhile to try to map the ideas on the frontier at all, since it can change so quickly, or is it a better use of our time to lean more on connecting people to people to get a sense of the frontier? Since I'm the moderator and not an answerer, I'll lob it to the panel to discuss.

I would argue very strongly that you should be mapping the people on the frontier. This is just what I would argue: the way we should be thinking about this graph — Joel, you're arguing that we should think about it not with papers as nodes but with pieces of evidence as nodes. I would argue that the pieces of evidence should in fact be edges that point us toward people. Because, again, there's so much tacit knowledge in the world of science that I think it's a fool's errand to try to make all of it legible. That would be my argument.

So, Ben, following up on that: we saw a handful of really interesting demos linking ideas, and of course there are others too, like Scite — there's a bunch of really interesting work happening there. What are some early examples of linking people, of connecting people in the way that everybody actually learns about a field?

I know what you mean, but — right, not in terms of tools. The tools are basically: go on Google Scholar, type in a subject, see who has the most citations, and go talk to them. I haven't actually seen any people-centric tool.

I think the problem is that people don't scale, right? Presumably everyone who wants to learn about computer science should go talk to the top computer science researcher, but they just can't — that person doesn't have any time. So how do you overcome that limitation?

Yeah, well, I would also argue that the top computer science researcher is probably not the person you necessarily want to be talking to — there's probably someone who knows the specific thing. So really what I would say is that we need more tools to point people at the actual right person, as opposed to the most prominent person. You're absolutely right that people don't scale, but there are enough computer science researchers with adjacent interests that you can hunt them down and find them. For example, I hunted someone down one day and just cold-emailed them and said, your research is really interesting to me, let's talk about it. And I got way more out of that — I read through all their papers, but a 30-minute conversation was way more valuable.

There are a couple of different directions to take this, and I can riff, since there aren't additional questions in the Q&A — but please, if you do have questions, put them in the — oh, great, we have some. Excellent. Okay, these are good follow-ups. Richard says people talk about ORCID and ResearchGate — do we have a service focused on people? ResearchGate is focused on people. And Michael Carl asks: can we just use a wiki to map the knowledge frontier? These are excellent questions.

Can I respond to Richard? I think, Richard, you're probably pushing back against me, which is great.
I would argue that researchers are not the fundamental unit in ResearchGate — papers are. And ResearchGate is this gated service: it doesn't even tell you people's emails unless you pay money or sign up. So I think that's a fundamental failure of being focused on people.

Yeah, but I wonder — so I don't disagree: a lot of the time, talking to a person for 60 minutes is going to be way more efficient than spending the equivalent time reading, if you can get access to that person, and I also agree with organizing knowledge by people. But I wonder if the two are really so distinct. To figure out who the relevant person is, you probably still need to parse through what they've written and what they've thought, so if you want to build a tool that's people-centric, a lot of the path to building it isn't going to look too different from parsing the documents and things they've already written.

What would you say is the fastest way to get an understanding of somebody's research? Would it be looking at the titles of their papers on Google Scholar, or looking at a list of their co-authors and trying to build out the social graph of the people they write and work with? And is that even a useful distinction? I'll pose that question to Ben, or to anyone who wants to take it.

I mean, again, this is where I think I go against a lot of people, but what I do — I think the best thing is to find someone. Find any start node: find one paper that's interesting, talk to the person who wrote it, and ask, okay, who are the people you pay attention to?

I think my answer would be: ideally, get — as easily as possible — their top publications, some web snippets about them, other information like co-authors and affiliations, and be able to browse those super quickly and compare across many people, to make the decision about who is most relevant.

I personally find workshops to be super valuable for gaining access to people who are, one, super knowledgeable, and two, not over-leveraged, because we get lots of people who are.

My concern about leaning on the existing infrastructure for finding people — and that includes social networks — is that I worry about the false negatives: about privileging people who are well connected in the social network, who are high in the prestige hierarchy, over people who have different perspectives to bring that may not come to light because they're not as well connected to the core. That's the thing in the back of my head, and why I like the idea — one of my secret goals is to lower the barrier to putting useful things out there that people can use to find you. For example, there's nice documented bias against novelty and interdisciplinarity; it's difficult to get those things published, so those kinds of perspectives have a longer lag, and if you're using papers to find these people — or relying on other people to find them — you may be a year or two behind. So I think there's a lot of room for innovation there, and that's why I like mixing different strategies, if that makes sense. (One toy version of a mixed strategy is sketched below.)
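Picking up on Jungwon's point that a people-centric tool would still be built out of what people have written, here is a small illustrative sketch of one possible mixed strategy: start from a single interesting paper, walk outward through co-authorship, and rank nearby researchers by topical overlap with your question rather than by prominence. The data and the scoring below are entirely made up for demonstration — this is not an existing tool.

```python
# Illustrative sketch: rank people near a seed paper by topical adjacency,
# not by overall prominence. All data below is made up for demonstration.
from collections import defaultdict

# Toy corpus: paper -> (authors, keywords)
papers = {
    "seed-paper":   ({"A. Author", "B. Author"}, {"discourse graphs", "synthesis"}),
    "paper-2":      ({"B. Author", "C. Author"}, {"synthesis", "annotation tools"}),
    "paper-3":      ({"C. Author", "D. Author"}, {"graph databases"}),
    "famous-paper": ({"E. Bigname"},             {"deep learning"}),
}

def adjacent_people(seed: str, interests: set[str]) -> list[tuple[str, float]]:
    """Score people near the seed paper by keyword overlap with `interests`,
    discounted by co-authorship distance."""
    seed_authors = papers[seed][0]
    scores = defaultdict(float)
    for title, (authors, keywords) in papers.items():
        if title == seed:
            continue
        # One hop away if the paper shares an author with the seed paper, else two.
        hop = 1 if authors & seed_authors else 2
        overlap = len(keywords & interests)
        for person in authors - seed_authors:
            scores[person] += overlap / hop  # nearer and more topically relevant scores higher
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(adjacent_people("seed-paper", interests={"synthesis", "annotation tools"}))
# C. Author (a co-author's co-author working on overlapping topics) outranks
# E. Bigname, whose paper is prominent but topically unrelated.
```

The point of the toy scoring is only that topical adjacency plus graph distance can surface a "right person" who would never top a citation-count ranking.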
I think we should take a stab at Michael's question. Karola, you seem well situated — I was going to point that at Karola. I'd be interested in your take on it.

Okay, thank you. It is a great question, and I've been chewing on it in a side process while listening. Can we just use a wiki to map the knowledge frontier? That's interesting. Part of my thinking here connects back to Ben's idea that perhaps people are the useful encapsulations of knowledge. A wiki is a nice way of bridging the difference between knowledge that's ossified or static in text and a dynamic system of updates that's largely driven by a social system. When you read about the community that runs Wikipedia, there's a very intense social dynamic behind the update process — behind the talk pages, behind the discussions of what goes into Wikipedia and what doesn't. Would it be possible to port that same dynamic to a very fast-moving frontier, where you can't have the same kinds of appeals to authority? I'm not sure. What the wiki lacks is a sense of the goal. At least the way I'm thinking of roadmapping, it's a goal-centric exercise: you identify what you want the positive future state to be. You're not just recording all of the information in a discipline; you're encoding a subset that is relevant to progress against that goal. And because this is so goal-driven and incentive-driven, I think something more like a causal influence diagram is perhaps the better format for having this type of conversation: you're actually seeing the relationships between the milestones, seeing how things influence other things. You're not just putting knowledge into a graph where every node has equal relevance; there's a weighting and a directionality there that may not be present in a wiki. That's my long-winded response.

Yeah — Michael follows up in the chat, and I think it's a good follow-up. One way to interpret the question is: can we just use Wikipedia as our map of the knowledge frontier? Another way is: can we use wiki technology as a medium for implementing a roadmap? I think what Michael is saying in the comment is the latter — that a wiki is a suitable strategy. Actually, he didn't quite say that, but I'm open to it. If you'd like to follow up with your voice — actually, I don't think I have the power to ask people to speak, so never mind. But if you'd like to follow up in the chat, please do. Okay, another question for Karola, and then at some point I want to link back to the funding and institutional piece. Andreas asks: where does the information in the roadmapping exercise come from — people reading and thinking?

Good question. I think that a lot of the initial stages, the initial iterations, would be based on literature search, and on the knowledge encapsulated in a small core of initial roadmapping architects and their breadth-first search over their social graph of experts in the relevant discipline.
I think that as you loop through the process, the balance shifts between information derived from textual sources — from the published literature — and information coming from latent knowledge: things people know from their own lab practice, or from a story they heard from somebody in the next lab over. I think the latter begins to grow and become more important, especially at a fast-moving frontier, where there's a lot that has not yet been written down.

I think there's a question from Myra about the science angels. Sure, yeah, happy to answer it. I'll put the link in the chat to the one we're going to launch this week, just so you have a real, live example of what we're going to do. The way this is going to work is that we have a $50,000 budget, and anyone can submit a project. The way we're going to fund it is: if it's relevant to ocean solutions — which is a loose term we've created — we're going to fund the first 50% of the project, up to $5,000. That's functionally how it's going to work. We see this as a really good way for projects that just need $5,000 to $15,000 or $20,000 to get going — to get initial data, to get off the ground. To receive funding, the projects do have to launch a campaign on Experiment. The reason we're doing that is that it makes people more serious about actually doing the work: this process of asking for help in public is really an emotional journey — I don't know if anyone here has crowdfunded anything, but asking the internet for help is a sobering thing — and we find that process to be a good filter for who is serious about these research questions. And then I'll put the link to the explanation too.

Are you planning on doing any thinking about which projects end up being most successful or impactful?

I think there are different ways to measure that. With any science-of-science-funding question this always comes up: how do we measure what's actually effective? And it kind of stops the conversation, and I don't want that question to stop the conversation, or the tests. In the near term, how we're going to measure it is the additional funding that comes in — additional funding raised on Experiment, or people going on to get NSF grants or NIH grants or something like that. We think that's a good signal, and in fact that's how most VCs measure their returns in the short to near term: how much additional VC funding comes in. Longer term, what we'd like to do is use a mechanism like — I have a link to the study by Wagner and Alexander; it was an assessment they put together of 15 years of the NSF SGER grants, pronounced "sugar" grants, which were supposed to be NSF's smaller, high-risk, high-reward grant program. They did a whole analysis, and in five years we'd like to do a similar analysis. That would be ideal.

I have more questions if there aren't more from the audience. I'm kind of curious, David — I remember reading the write-up from Fast Grants, which Tyler Cowen ran out of George Mason.
What I think they found surprising was that even researchers from top institutions ended up applying for Fast Grants. You might believe that researchers at very well-known institutions would have an easy time applying for traditional grants, but that was maybe not their experience. I'm curious whether you've seen something similar so far — whether researchers you might assume could get more traditional grants are also applying to this, or whether it's mostly encouraging nontraditional or early-stage researchers to participate in research.

Yes. So, historically, who's been successful on Experiment — and there's published research on this — is early-career researchers, and women do better on the platform. We see a lot of grad students and postdocs, people who just need a little bit of money to finish their studies. But we do see senior researchers who have some kind of crazy side project. I was talking to Dan Jaffe, who's here at the University of Washington: he's an established professor, but he wanted to measure air quality around railroads, and it was just a better fit for an Experiment project, to get a community involved with it. So we see folks doing that too. And we also see things like DIY bio labs — the Open Insulin project raised its initial funding on Experiment — so the amateur science scene also does okay there. But if I had to name a majority, it's the grad student and postdoc level. I would add, though, that with the science angel work — and this is something Ben and I have talked about too — Ben, I think you call them reputational cascades, or what did you call it? Something like that.

You can call them reputational cascades, or legitimacy bootstrapping.

Legitimacy bootstrapping. So think about one of our science angels: we're going to pick people who are admired scientists, and for them to be the first backer of one of these projects and say, look, I believe in you — that can mean a lot for an early-career researcher. I've heard from a lot of people that the defining moments of their careers weren't just the biggest grants they got; they were the first ones, and the ones where somebody senior said, hey, this is interesting, keep going. So it's not just about a new funding mechanism; what we're really trying to do is create a different funding dynamic, so that people feel like they're getting a boost not just of funding but also of "hey, keep going in that direction — that might be interesting."

Yeah, it's really interesting. I think there was an analysis of the Emergent Ventures process along some of those lines as well. I don't think they used the term legitimacy bootstrapping, but it matched the shape of it: picking promising people and keeping them out of mediocre routes — changing their trajectory. I think Tyler Cowen refers to encouraging someone as one of the highest-leverage activities you can do.
So, you know, I think Emergent Ventures is very inspirational for what we're doing with the science angels. I think the question becomes: how do we get an n of two, three, four, five... ten of what Tyler Cowen is doing? Does the Emergent Ventures model scale? Can we do it for science? And can we use a platform like Experiment to weed out potential fraud and make it better? So, open question.

Oh, I think Vanessa leaves soon, but there's a question for all the panelists from Andreas: what are some ideas for metascience that feel a bit too crazy to share publicly?

I — I cannot answer that question. Next panelist.

What a provocative question.

I think something that is slightly below that level, though, is the question of: what if we gave funding agencies less money? What happens, right? The forbidden question of what happens when you give out less money.

Do we have some data? Sorry — okay, Karola.

I mean, if we reason from the opposite direction, we have a little bit of data, in that the NIH doubling occurred, and I think the agency actually became much more conservative and put a lot more money into building infrastructure — buildings, new labs, more senior researchers — at a time when the NIH budget for the biological and medical sciences was increased significantly. For the opposite case — major reductions in an agency's budget — we have some data, but I don't know whether it has been watched as closely.

Yeah, I mean, it has been happening, right — NSF and NIH budgets have been getting squeezed. I think there is a real taboo around funder efficiency. Everyone's scared to bite the hand, but you could try to tie the bets that program officers have made back to them and create a kind of system around funder reputations. I think no one would go near that, for lots of reasons. And to be honest with you, that is a little bit crazy to share publicly. I think that's an interesting part of the science angel thing: we're actually saying, hey, you are the one who makes the bets, you are the individual making the choices here, and you're going to be responsible for those choices. That kind of accountability doesn't really happen in science: the peer review process dominates, and there aren't a lot of people who stick their necks out — DARPA funding is a little bit different in that respect. And I think it might be interesting, because what we see in the financial world is that investors have reputations, and it's really helpful for the whole system that there's a class of investors who are making decisions and doing things. So I think that's an interesting question, but I don't think anyone will go near it, for a lot of different reasons.

Another topic that could be controversial is automating grant proposals and grant applications. We think about this, unsurprisingly. On the one hand, it's just such a huge drain on researchers — I think a lot of professors have seen stats like: they spend 40% of their time writing grant proposals.
I think people have different takes on it. Some people feel like they're just regurgitating boilerplate text, and there could be a version of it that actually forces you to clarify your project, but a lot of people feel like it's just adhering to the process. On the other hand, I think people will probably react pretty negatively to the idea. Maybe if you feel like proposals could be automatically generated, you start asking why they exist at all. You know, I've seen that firsthand, because James Weis at MIT recently published a paper on DELPHI, where he was using machine learning to try to predict where breakthrough research might happen, and there was pretty strong blowback from the research community: we humans need to decide where we go, it can't be AI. And it was interesting, because I don't think it has to be either/or. Yeah, so it's right in there. Along those lines, I've been thinking about what would happen if we stopped measuring quality altogether, stopped trying to predict impact. There have been ideas thrown around about literally doing a lottery for grant funding, because the noise is so high, and the cost of trying to measure quality is high as well, that you might as well just run a lottery. And this is not totally crazy, because the prediction problem is so hard. I think it's easier to tell how risky something is, in the sense of how different it is from everything else, but optimizing for that also seems like a bad idea. So a crazy idea is just to do away with evaluation altogether, but I don't know how I feel about that. I feel like this is the type of institutional experiment that Ben should run. I would love to compare performance across one research fund that chooses randomly and one that uses a status quo approach. I think this is underway; I think Carl Bergstrom has a paper on this, and a few others may as well. I mean, this has been coming up in the meta science community for the past few years: should we just do a lottery instead? And the thing that's unfortunate about that is that the conversation has kind of stopped there, where it's either let's just do peer review or let's just do a lottery, when there are actually hundreds of other things we could try. This is why I created that map of the financial investors versus the scientific investors. I think the interesting part about the angel investor idea is that the dynamic is just so different on that side. If you're a private equity firm or a banker, you look at risk in a specific way. If you're an NSF peer review committee allocating hundreds of millions of dollars, you look at risk in a specific way. With new startups we have the same kind of prediction problem, where we don't know who's going to be really valuable, and the right thing to do is to be Y Combinator, where you just launch thousands and thousands of startups into the world.
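To make the noise argument concrete, here is a minimal simulation sketch: if review scores are true quality plus a lot of noise, funding the top-scored proposals only modestly beats a pure lottery on the average quality of the funded set. The quality and noise distributions below are assumptions chosen for illustration, not estimates from any real funder or study.

```python
# Sketch: how much does noisy top-k selection beat a lottery?
# All distributions here are illustrative assumptions, not calibrated to real grant data.
import random

def simulate(n_proposals=1000, n_funded=100, noise_sd=2.0, n_trials=200, seed=0):
    rng = random.Random(seed)
    mean_select, mean_lottery = 0.0, 0.0
    for _ in range(n_trials):
        quality = [rng.gauss(0, 1) for _ in range(n_proposals)]        # latent "true" quality
        scores = [q + rng.gauss(0, noise_sd) for q in quality]          # noisy review scores
        by_score = sorted(range(n_proposals), key=lambda i: scores[i], reverse=True)[:n_funded]
        by_chance = rng.sample(range(n_proposals), n_funded)
        mean_select += sum(quality[i] for i in by_score) / n_funded
        mean_lottery += sum(quality[i] for i in by_chance) / n_funded
    return mean_select / n_trials, mean_lottery / n_trials

for noise in (0.5, 2.0, 5.0):
    sel, lot = simulate(noise_sd=noise)
    print(f"noise_sd={noise}: mean funded quality, selection={sel:.2f} vs lottery={lot:.2f}")
```

As the noise grows, the selection result converges toward the lottery baseline, which is the intuition behind "might as well randomize"; the sketch says nothing about the cost of running review or about tail outcomes, which is where much of the real debate sits.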
And you actually push that kind of risk-seeking behavior out to the edges, at smaller amounts, and try to get more people onto the playing field to try all these different directions. So I think the peer review process makes sense for the huge NSF and NIH budgets, but it doesn't make sense to have that same dynamic for the smaller amounts out at the edges, and I think that's where the highest-margin opportunities are: getting more people in, especially early-career researchers, who are struggling in so many other ways. There's so much other pressure on folks in that position that if we're going to start looking for ways to improve things, that's a good place to start. So, almost like a universal basic income, but for researchers, for early-stage grants. Hey, that's interesting. That's like the Canadian model, or some of these other countries have different models like that, which could be compared; I don't know that literature off the top of my head. But here's another potential point of disagreement. Let me highlight the response in the Q&A from Richard Ryan: stop measuring impact and start measuring rigor. So that's another good question, in the sense that it's not peer review versus no peer review, but what is the thing you're focusing on. There's some precedent for that: a lot of publications are doing results-blind review, so it's not about trying to predict how interesting the result is going to be; you just focus on the methods. What do you all think? So by rigor, do we mean how well you adhered to the process you set out in, say, your preregistration report, or basically how well the study was conducted, how replicable the methods are? What is the definition of rigor we're adhering to here? Because I like the idea if it means we're essentially incentivizing something like a journal of negative results, where you're rewarded for running very well-designed studies, putting your data out there, and publishing results even when they come out null, so that we know there's a null result in an area. But I have to understand what the rigor criterion is. I'm sorry for bringing up the journal angle; I don't want to conflate things, because we've been talking about funding, which is potentially different from deciding what gets published. And we've been dancing around the goals a bit: if the goal is that, on the margin, we need more risky research to push out the novelty frontier, then that points one way; whereas if the goal is a better balance of replicable to non-replicable results, then you should stop measuring impact and start measuring rigor. So just to highlight, there are potentially two, or really three, things being discussed: big funding, small funding, and now journals. I don't know; maybe Richard can clarify in the chat what he means by rigor. If it means favoring the methods section over the impact section of a grant application, that's a very interesting suggestion, because I think the downstream effect of funding that way is that you will at least be raising the replicability of those studies.
And if you're trying to accumulate data toward a survey of the field, that is a direction worth pursuing. This question made me think of a different, possibly controversial question, which is: do you think researchers and academics should have mechanisms by which they could become millionaires or billionaires, succeed with outsized financial returns as a result of their research? I do. Yeah, and I think there are, if they can commercialize it. Right. Yeah. Okay, I guess it wasn't that controversial. Can you mentally simulate why someone might find it controversial? Well, I think back when Jim Simons left math to start RenTech, he got a lot of criticism that he was basically selling his soul, that academia is pure and you shouldn't care about money. I don't know how much of that exists today, or in which disciplines. But maybe it's related to impact: presumably you need some way of measuring impact, and some people would say research shouldn't be so metrics-oriented or so commercially oriented; it should be more open-ended and long-term, and a lot of it will fail, and impact is just a very hard thing to measure. If you create that kind of pressure-cooker environment, it will lead to worse research, or it will just distort research. I think it's bad for science that things have become so citation-oriented, so metric-oriented, and I think it's actually bad for researchers to focus their careers on that. There's a wider lens: having the bigger picture in mind is maybe not good for short-term bibliometrics, but for the long-term, broader impact and opportunity of a scientific career, I think it's better. I'm incredibly biased, because I've only ever had this outsider perspective, so I've never cared about citations and metrics. This is a weird crowd for science, even the academics here, so I don't know how controversial it's going to be. I have a question for you from Michael: is the tool publicly usable, and can you put up a link? Yeah, you can go to elicit.org. So I want to tie it back. We talked about mapping the frontier first, and then we talked about funding, and I want to tie the two together. David and Karola, you're both on my screen side by side, so let me see if I can caricature a little. I think I'm hearing David say: do less top-down selection and empower smaller angels to take their own bets. I didn't hear much in the way of guidelines or constraints on them, other than that you talked about the ocean solicitation and that it goes out through Experiment. I'm curious about the relationship between that selection model, if you can frame it that way, and a technological roadmapping model, which you could caricature as trying to get the big picture and make a thoughtful selection of the pieces. So I'm curious whether the two of you think about selection differently or similarly: how do you place your bets? Maybe there's actually no conflict, but I'm curious about your thoughts.
So the concrete question is for David to start with. Well, I think sometimes you want to be divergent in your thinking, where you want lots of new ideas and you want to be going in lots of different directions, and sometimes you want to be convergent. A lot of people said to us, oh, it's kind of like XPRIZE, and I said, actually, it's nothing like XPRIZE, because XPRIZE has a really specific outcome and a specific goal, and they try to bring all these different people in to solve that specific goal. Whereas when we put up a challenge grant, we have a loose idea and we're trying to send people off in lots of different directions. XPRIZE is great when you want to get to answers; what we're trying to do is generate lots of new questions. So I think there's a time and place for both; I don't think it's either/or at all. Sometimes you want to be doing one and sometimes the other. And to my mind, with where all the new systems and infrastructure are going, the PARPAs, the focused research organizations, there's a lot of really interesting work happening around solving specific problems and going in a specific direction. What I wanted to see more innovation around is how we're going to send more people in weirder directions. That's why I'm focusing there, because I didn't think there was enough attention being placed there, but I think both are really important. Very diplomatic. That's a very good point, and I do think these are really parallel processes, or processes that have to be looped constantly and continually. A lot of the funding David was talking about is about exploring, even expanding, the research space, seeing what the adjacent possible really is, and what lies beyond it. The idea of a roadmap takes into account what we believe to be within reach, but the goal is meant to be extremely broad; it's beyond the scope even of an XPRIZE; it's something on the order of fire. It's something that would perhaps only emerge by aligning a huge number of incentives, and the goal might not emerge until we've had a lot of conversations and extracted a lot of latent knowledge from experts we didn't even know were related to it. So it's something that becomes progressively clearer as we iterate this discussion process. And knowing the map, having a sense of the territory, is just necessary when you're defining a path toward the goal. That was less spicy than I hoped. I think this is good. Another connection back to the earlier thread on people versus ideas: I'm wondering about any connections back to that thread. It seems like, if your goal is to increase, on the margin, the number of weird ideas going out and the risky bets that should be happening but are not, I don't see a strong case for having detailed maps of ideas. But if you do want to specify things, what's the term you used, Karola, causal influence diagrams? There seems to be potential value there.
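For readers wondering what a causal influence diagram might look like in practice, here is a minimal sketch of one as a weighted directed graph. The nodes, edges, and influence weights are invented for illustration and are not drawn from Protocol Labs' actual roadmaps; networkx is just one convenient way to represent and walk such a graph.

```python
# Sketch: a tiny causal influence diagram for a broad roadmap goal.
# Nodes and edge weights are illustrative assumptions, not a real roadmap.
import networkx as nx

g = nx.DiGraph()
# Edge weight = assumed strength of influence of the upstream result on the downstream one.
edges = [
    ("new measurement tool",      "larger shared datasets",    0.7),
    ("open challenge grants",     "larger shared datasets",    0.4),
    ("larger shared datasets",    "better predictive models",  0.6),
    ("better predictive models",  "broad roadmap goal",        0.5),
]
for cause, effect, weight in edges:
    g.add_edge(cause, effect, influence=weight)

goal = "broad roadmap goal"
# Enumerate every chain of intermediate results that could plausibly lead to the goal.
for source in [n for n in g.nodes if g.in_degree(n) == 0]:
    for path in nx.all_simple_paths(g, source, goal):
        print(" -> ".join(path))
```

Enumerating the paths into the goal is one cheap way to surface which upstream bets a roadmap implicitly depends on, which connects back to the question of where the experts' latent knowledge enters the picture.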
I guess the question is: when you draw on these experts, in your experience, are they mostly pulling from their own heads, or are you getting information from elsewhere? And this touches on a question that came up before, in terms of the framing of people versus ideas. I just noticed that a bunch of people have their hands up; I don't know why they do. Oh, wow. I don't know how to handle that; that's my fault. Yeah. No, it's all right. Thank you. I'm sorry, I didn't mean to interrupt. No, that's great. Please ignore my question, because I want the attendees to go first. Oh, those are folks for the next session. Okay, so they're raising their hands for that, not for us. Got it. Got it. Okay. All right, they joined early by accident. Okay, cool. I didn't mean to interrupt you; let's go ahead. I think I finished the question, but I don't know if I want to hear the answer. I'm very sorry, I had my train of thought interrupted there. I think I'm going to punt it to Jungwon, because I'm curious. I know you didn't show it, but Elicit has that fantastic workflow for finding people as well. So I'm super curious, for people mapping new areas and getting up to speed in them, did you see people combine the workflows in interesting ways, and what are the relative strengths and weaknesses of the different workflows, starting from generating questions and finding papers versus finding people and reading their papers? Yeah, one of the reasons we launched the find-experts workflow, which, like Joel said, works pretty similarly to the one I demonstrated, except each row is a person, and you start with a kind of seed list of people you're interested in and then we pull in their Google Scholar profiles and so on, was that some people, when they're learning about a new domain, want to start with the top authors, to start in a more people-focused way. So there were definitely people who used that workflow to do something similar to the literature review task. And similarly, in literature review, part of how you evaluate whether a publication is legitimate is by looking at the authors, so I do think there's a lot of interplay there. All right, Karola, if you've caught your train of thought, you can jump back in; if not, we can actually wrap up soon. Two minutes from the end. If we only have two minutes, there's one question I've been dying to ask you, so I'm going to go ahead and ask it. Two-minute warning. What is the incentive for somebody to create a nanopub or participate in a discourse graph? I brought up the iTunes-for-papers example earlier, and I remember the famous ad: a thousand songs in your pocket. The idea of ownership was really important there. I know academics sometimes cling to their ideas; they like to exert ownership over them until they're ready to put them out into the world in the format they choose. So why would they choose a nanopub? Yeah. There's a very important difference in the framing between why someone would choose to publish a nanopub, and why someone would choose to create something for themselves and their students.
So I've observed very clear incentives for the latter, for doing it for yourself: once you see how it improves your own thinking, it helps you feel better about your ideas, and it helps you not freak out when you see people come out with papers that seem similar to yours. Those are all intrinsic motivations for doing it for yourself and your students. And the students love it when they don't get dumped with 50 papers; instead they get a set of things they can actually consume. The incentives for publishing nanopubs are so thorny that I don't want to touch them, because there are lots of different things in there that make it very difficult to gain traction. I don't think it's unsolvable; I just don't know how to solve it, if that makes any sense. So the answer is intrinsic incentives: you get better ideas, you get better papers. And I think that works for most of the people in the meta science crowd. All right, we are at time. I don't know if we need to do any more wrap-up, but we don't have any open questions. Thank you, everybody, for the really robust discussion. Thank you for facilitating; it was fun. Yeah, thank you all. If anybody wants to, go to the Slack and maybe continue the chat there. I certainly learned a lot, and I've already signed up for Elicit, and maybe I'll get some funding from an angel for my next grant. But yeah, thank you, thank you very much. Great session. Excellent. Thank you. Bye.