Oh, can you hear me okay? Yes. Sounds good. I am Dr. Michael Zargham. I did my PhD work at the University of Pennsylvania in a group that spanned robotics and Wharton. So I worked on dynamic decentralized resource allocation problems, which turned out to be really convenient when blockchain and Web3 technologies started to take hold, because these are, at their core, social and economic dynamic resource allocation problems, but with a technological back end that allows us to start to actually implement essentially algorithmic policy. I'm going to talk a little bit from the perspective of an engineer, because I'm a complex systems scientist and research engineer, so my research is scientific work on how to engineer things. So I always like to point out that in these systems, we have an adaptive feedback loop between the things that we engineer and the way they change our behavior. And in particular, when it comes to adjusting the ways information flows through a system, we see pretty heavy feedback in human behavior. And so when we're talking about, say, funding research or other collaborative systems, we're always going to end up having to deal with this feedback loop. And inevitably, it means that the best laid plans turn into something entirely unintended. So for that, I'm going to zoom out and talk for a moment about this as an institutional or social system. So we've got a bunch of people doing research, with their own interests, their own motives, some financial, some less financial, enthusiasm about the things they're working on. But at the base level, this is an interconnected group of people leveraging technologies to make decisions about where to put money, where to put effort, where to put attention, and other forms of resource beyond those that we can directly measure with computational systems. Nonetheless, we have computational tools to help us build these things out.
And so what we start with here is this network of networks. And I'm going to use myself a little bit as an example. So I'm a member of this multi-scale network. And I care personally about impactful research. But I also have a family. And I run a firm. And I have some technical participation in some blockchain and Web3 networks. I have some economic participation in the world. I spend money on things. I get paid for things. I have a professional community, largely the engineering research community, and more recently some of the fringe economic research community focused on cryptoeconomics, incentive design, behavioral economics, et cetera. And I'm a member of a society. I'm a US citizen. I pay taxes. I generally participate in the system that is the existing civil infrastructure. And ultimately, I'm a participant in this shared planet, and obviously I care about the growing concerns with environmental issues, and what I or we can do about them. But the point here is actually that this network is not easily flattened into one view, and that depending on context, we have to think about what our goals are and what we're collaborating on. Interestingly, here's a bit of a web of projects that I pulled from a DGov group. And I'm on here, along with my firm BlockScience, the Vienna Institute for Cryptoeconomics that I collaborate with, and a project that I've helped design called the Commons Stack, which is actually a set of primitives for designing collaborative systems using some of the same tools that Paul mentioned earlier. There's some collaboration going on about how best to understand these experiments in incentive design and collaborative labor. There'll be a little bit more on that later in the talk. But actually, I want to pause and highlight the fact that this isn't the first time we've ever talked about researching or designing or analyzing socio-technical hybrid systems.
We have the cybernetics literature, which started in the 40s and had a resurgence in the 70s in particular, where the discussion around second-order cybernetics focused on the intervention as part of the system, or the policy or control design as part of the system. We have the socio-technical systems literature out of the 50s. There's also quite a bit of operations research (OR) literature from the 80s that addresses management processes as part of an overall system. And most recently, we have cyber-physical systems, which is the main thrust of my current work, where physical systems touch social and economic systems, particularly in higher-order combinations of IoT systems: some interesting projects related to community-owned infrastructure, local power grids, and a collaborative project targeted at putting solar arrays with battery systems in place in rural communities in Africa. All sorts of cool stuff. But for science, I want to quickly review this tech stack and bring it down to attribution networks. The tech stack that we're dealing with in Web3 takes us all the way from the actual data that we're going to trust, and that we're going to assume is correct in some sense, and I will get into more about what we mean by that later. But critically, we have a distinction between storing data and trusted computation. Trusted computation is when we use cryptographically secured virtual machines or secure multi-party computation to assert that something was done, not necessarily whether it was the right thing to do, or all sorts of other questions. But we get assertions that certain actions occurred. And those things enforce interaction patterns, like, hey, when you do this, I have to do that.
Or if we both do this and this, then this other thing happens: action, action, consequence of action. That rolls up to a higher level of actual agent behavior through acting on these interaction patterns, which leads to global emergent patterns. And basically, scientific progress is a global emergent pattern. So somehow or another, we need to incentivize the right local agent behavior in accordance with some potentially well-designed interaction patterns, which are enforced with some kind of trusted computation that is backed up by some data. And I'd like to focus on the data aspect of this. But before I go further into that, I want to remind everyone that whenever we define a supposedly objective measure for a system, it's inherently subjective, and we kind of have to pause and ask if we're pushing the right metric forward. So in the scientific literature, we're dealing with people heavily incentivized by their impact factors, which are in turn affecting their behaviors, which are affecting our actual scientific progress. And what we want to do is remember to ask what we're optimizing against, why we're optimizing against it, and whether and how we can change those things. So I would argue that the current set of dominant metrics in the field are not particularly well aligned with what we would intuitively think of as scientific progress. But that doesn't mean we have a de facto alternative. So we have to be thinking about what these objectives are and how to embed them in our systems. When it comes to practically implementing things, I'm going to go back to the same idea that we have this globally emergent behavior as a result of many actors in a network, which comes from local interaction patterns and agent behavior with respect to each other. And our peer-to-peer networks are the tools that we're using to enforce certain interaction patterns.
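To make the "assertions that certain actions occurred" idea concrete, here is a minimal, purely illustrative sketch, not a real trusted-computation system: a hash-chained append-only log, where each entry commits to the hash of the previous one, so any later tampering with the recorded sequence of actions is detectable. All names and structure here are hypothetical; real systems add signatures, consensus, and so on.

```python
import hashlib
import json

def record_action(log, action):
    """Append an action to a hash-chained log.

    Each entry commits to the previous entry's hash, so rewriting
    history breaks the chain. Toy sketch only: no signatures, no
    distribution, no consensus.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    log.append({"action": action, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})
    return log

def verify(log):
    """Recompute the chain and check every link."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"action": entry["action"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
record_action(log, "alice:submit-data")
record_action(log, "bob:review-data")
assert verify(log)
log[0]["action"] = "mallory:submit-data"  # tamper with history
assert not verify(log)
```

The point of the sketch is only the assertion property: the log tells you that these actions were recorded in this order, not whether they were the right actions.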
But we can make a strong distinction between using peer-to-peer networks to enforce computation and actually storing, say, all of our data in a blockchain. As was discussed earlier, most people who hear "blockchain" think it's a database, but it doesn't have to be a database of all of the data. It can just be a database containing rights, access controls, records of transactions, exchanges of value, et cetera. And we can actually push things lower into content-addressable DHTs, which are essentially hashed data stores that allow us to identify pieces of data or records of any kind in terms of what they are, meaning a data structure, maybe a header, and a hash. The name of the thing that you are recovering is just its hash, which is unique to it. And this results in us being able to reasonably deal with some of the data governance issues that arise in research, meaning that the blockchain layer need only deal with who has the right to access or mutate what, under what conditions, which is actually quite different from assuming that a blockchain controls all of the data. And given the details of this audience and the focus on science, I won't go too much further into this, but I'm going to take this as a prior and move forward to talk about these systems as attribution and attestation networks. Just remember that blockchain can enforce the rules, the data governance, the model governance, basically anything that you can think of as business process automation or a real-time audit of a set of transaction rules, but the data itself can actually live a level below. And that can be anything from lab notebooks and computational experiments and data sets all the way up to actual papers and reviews and feedback on that work.
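As a toy illustration of content addressing, here is a sketch where the key of every record is the hash of its bytes, and records reference "parents" by hash, forming a DAG. The class and field names are my own invention for illustration, not the API of any particular DHT.

```python
import hashlib
import json

class ContentStore:
    """Toy content-addressable store: a record's name IS the hash
    of its serialized bytes, so retrieval is self-verifying."""

    def __init__(self):
        self._blobs = {}

    def put(self, payload: dict) -> str:
        data = json.dumps(payload, sort_keys=True).encode()
        cid = hashlib.sha256(data).hexdigest()  # deterministic name
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> dict:
        data = self._blobs[cid]
        # anyone can check they got what they asked for: re-hash it
        assert hashlib.sha256(data).hexdigest() == cid
        return json.loads(data)

store = ContentStore()
# records can attribute 'parents' by their hashes, forming a DAG
dataset = store.put({"kind": "dataset", "parents": []})
paper = store.put({"kind": "paper", "parents": [dataset]})
assert store.get(paper)["parents"] == [dataset]
```

Because identical content always hashes to the same name, the blockchain layer only needs to record who may read or mutate which hashes, while the data itself lives in the store below.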
So I'm going to jump to this idea of the attribution network, and I'm going to, for the moment, use this Twitter graph that was pulled from a conference I spoke at last year, where during the conference people were snapping photos of each other's talks and tweeting things about them, and so we got this kind of snapshot of a mini attribution network at the Complex Systems Conference, which I thought was pretty cool. So I grabbed it, and shown here is the Twitter activity from during the event. But I'm actually going to take this idea beyond the toy example of Twitter and start to think about it in terms of actual academic research. Interestingly, if you go back and look at the literature, academics have been talking about the academic process for, well, as long as we care to look back. And so here we have a paper from, I think, '84, with citations as far back as the 70s, referring to bibliometrics and who's citing whom under what conditions, in some ways a precursor to the impact factor that we have now. One thing that I find really interesting and relevant today is that there are actually a lot of comparisons between these academic networks and firms, even in these papers, which calls to mind some of the discussions arising in the blockchain and crypto network community, where we compare decentralized networks and open-source software development to firms, we compare research and development to centralized firms, and even at that time there were comparisons of how labor in a research setting matched up with labor in a top-down, orchestrated firm setting. Then we can move to our modern view in terms of open-source development, which is also very similar to the point that Paul brought across. So these are some networks and some representations of workflows from open-source software that were constructed during work on SourceCred.
SourceCred is an open-source protocol for basically building reputation based on contribution networks in open-source projects. There's a workflow on the right defining the relationship between different kinds of contribution in an open-source project. Interestingly, there's an explicit first-level form of contribution as a code reviewer. I think this is an important note for the academic industry. Reviewers tend to be hidden behind the scenes, and it becomes very adversarial, whereas in this system a code reviewer is actually a first-class contributor: you are helping make this outcome better. It changes the incentives from coming in and attacking it and saying it's wrong because you're my competitor, to how do I make this work better so it runs. And on the left here we see, from a hackathon project, five teams working on five different related projects, and we built a contribution graph to see how all that work was interrelated. People worked on multiple projects, the projects were wired together, so we start to get to this network formation game. And I bring up this network formation game because in fact we play a version of it whenever we participate in scientific research. On the right here we have a sample graph of contributions from SourceCred research from about half a year ago. On the left I have a map of strategic behavior in a system. I won't overdo the explanation of the syntax, but you can see we have some random external processes driving the system, and we have two classes of strategic action. On one side are the participants deciding what to contribute, so where to put their effort and what to contribute to the project, and on the other side we have maintainers deciding what to value.
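The contributor/maintainer dynamic here can be sketched, under heavy simplification, as a personalized PageRank over a contribution graph, where the seed vector is the observer's (or maintainer's) weighting of what matters. This is not the actual SourceCred algorithm, and all names are illustrative; it just shows credit percolating along edges and the same graph yielding different scores for different observers.

```python
def pagerank(edges, nodes, seed, damping=0.85, iters=100):
    """Personalized PageRank over a contribution graph.

    edges: list of (src, dst) pairs along which credit flows.
    seed: the observer's own weighting of the nodes, so two
    observers can compute different reputations from one graph.
    Toy sketch via power iteration, not the SourceCred algorithm.
    """
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = dict(seed)
    for _ in range(iters):
        nxt = {n: (1 - damping) * seed[n] for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling node: spread to all
            share = damping * rank[n] / len(targets)
            for t in targets:
                nxt[t] += share
        rank = nxt
    return rank

nodes = ["review", "patch", "release"]
# a release credits the patch it shipped; the patch credits its review
edges = [("release", "patch"), ("patch", "review")]
uniform = {n: 1 / 3 for n in nodes}
release_focused = {"review": 0.0, "patch": 0.0, "release": 1.0}
# same graph, different observers, different reputation scores
assert pagerank(edges, nodes, uniform) != pagerank(edges, nodes, release_focused)
```

Under the uniform seed, credit flows backward from the release through the patch to the review, so upstream work like reviewing accumulates credit; changing the seed is exactly the maintainer's "deciding what to value" lever.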
So this system uses a PageRank-based algorithm for attributing credit, which allows credit to percolate through when you, say, contribute something that other people build upon. But there's always going to be some degree of gaming against the metrics. So you have a second-order dynamical system here, where people contribute the things that they view as giving them the most credit, but the system itself, through some maintainer or editor or management role, is deciding what's important. So we have a steering system and a steered system. But in our actual scientific communities, it's a question of who really takes on that maintainer role. It could be the board that runs a journal; it could be the people who are considered the scientific experts. We still have the risk of a single point of failure if those who are defining what constitutes contribution don't align that with our broader goals. So, building on this framing of attribution networks, I did some work with a behavioral economics PhD student from Columbia on attribution networks, and we defined some basic structs for multigraphs that contain attribution networks. These multigraphs are essentially data, pretty much whatever it is; it's listed as a struct here. Parents represent other data points in the system that you attribute to. They could represent authors, they could represent reviewers, they could represent data sets, they could represent code; it doesn't really matter. This is a content-address scheme, so you put something in, you give it a header, you hash it, you reference it. So you can arbitrarily reference whatever you need to, as long as that stuff is part of this broader system. There are a variety of projects exploring different angles on this kind of thing. We were working on the computer science and economics foundations, and largely we were enthusiastic about the fact that such a data structure would be completely valuation-agnostic, so you
wouldn't need to impose an a priori metric of value on the contributions and say this is what matters. You could actually keep the data structures and decide after the fact: hey, this discovery was really important, who contributed to it? Or one could actually embed the funding early on as a data point and say that certain research came about in part because of this funding. We don't have to decide that in advance; just build the data structures. For a simple example, we mocked up stories and authors and represented a graph where purple represents books and blue represents authors, and you can have this sort of DAG which represents the historical contributions, references between books, multiple authors, et cetera. And this is the kind of thing that would come up at a human level if you were using this type of attribution network. So this kind of brings me around to: how does this apply in a more general setting? We've got this big messy network, which is who-knows-what contributed by whom over a long period of time, and we want to avoid a single God's-eye reputation system. There are a couple of reasons for this. One of them is that we don't want that reputation system to become so logically centralized that people game the system relative to it. We have concerns about privacy. We have concerns about path dependence: if someone's reputation is low, is there no case under which it can become higher again? There are a lot of questions. But from the most basic principle, the idea of using attribution and attestation networks is that we can have localized metrics, whether at the individual or community level, which observe this data. Maybe the algorithms have the right to access data that even the individuals don't have the right to access, and you can compute reputation scores that are relative to the observer, which means that they're not equal. And this is the key note here: the reputation of B by C is not equal to the reputation of B by A. And this is because
this has been factored out in a way that can preserve privacy in some cases, maintain uniqueness, represent local value systems or local goals, and essentially avoid a central view of what is good. And again, the main goal here is to make the system a bit more polycentric. Maybe not decentralized in the sense that every single individual has their own value system, but at least at the organizational or institutional, potentially research community, level, there can be one set of values that is aligned with their goals, avoiding over-engineering to a particular set of metrics. So I'm going to give a quick overview of a commons design that we're working on with the Commons Stack project, and we're going to use a walkthrough where this is treated as research as an output. So this system has an outer wall that represents funding. You can have investment-style funding here on a bonding curve, and you can have grant or general research funding coming in on the top. The system's job is essentially to use the funding to allocate decision-making power. Decision-making power is used to allocate grants. Grants go into buckets which are milestoned, and upon completion of a milestone there are value outflows. This essentially creates a machine that combines decision-making power and money to produce material output, but it literally only makes sense if the material output produces some form of end-user value. And this can mean that the research goes into another funnel; maybe it's a drug clinical trial pipeline and this is only basic research; maybe this is the output of software that other people fork and build upon. The key here is that this is actually meant to be abstracted away from the particular application, and just highlight the decoupling of the funding from a basic research perspective and the funding from a value-capture perspective, and to combine the decision-making with the funding without necessarily assuming that everyone is
both. In a traditional equity environment, you generally get a stake in the future cash flows, but you might not have any expertise to drive the entity; or you might have people with a lot of expertise, maybe scientific expertise, to help drive decision-making, who don't necessarily have the same role as funding providers. So there's a bit of an effort to view this as a regenerative value system, which pumps out the actual output, in this case scientific or development output, where it feeds forward on the funding loop and it feeds forward on the decision-making loop, without assuming that they're the same, and without assuming that this is going to become a massively centralized system, but rather potentially a web of localized versions of this mechanism that can collaborate without necessarily having exactly the same goal system. So this is ongoing research. The project that's doing the design and the simulation testing is called the Commons Stack, and its goal is to create basically commons-enabling infrastructure using the Web3 stack: a mixture of smart contracts and data structures, et cetera. And so this brings me to my last point, which is interdisciplinarity in research. As a, basically, computational social scientist slash cyber-physical systems engineer, I'm constantly jumping across these boundaries, and it's pretty challenging because there's a new set of jargon in every single discipline, there's a new set of tools, there's a new set of publication expectations. And so I like to pull this up and show that we have this semi-siloing, but interconnectedness nonetheless, and to invite everyone to take things like this conference and other cross-disciplinary conferences as opportunities to build what are essentially Rosetta stones, or language mappings, to help talk with people from other disciplines, whether it's medical sciences, law, economics, computer science, et cetera. I have done quite a bit of work with the Vienna
Institute for Cryptoeconomics at the Vienna University of Economics, and a forthcoming paper of ours is a research roadmap for cryptoeconomics. I'll bring up this massive table and maybe just highlight the top. Here we've identified the relationship between what people call cryptoeconomics at the macro, meso, and micro levels, and realized that even this term is being used differently at each of these scales. I'm personally very fond of the meso-level definition, which was coined by Jason Potts, who leads the blockchain economics initiatives at RMIT in Melbourne, and that basically amounts to institutional cryptoeconomics: how we make decisions, policy decisions. And it turns out that when it comes to incentivizing better research practices, we're actually working at the meso level of cryptoeconomics: what are we doing, what are we rewarded for, how are we rewarded for it? It embeds some objectives which come from the top level, but it actually has to realize them through lower-level incentives, along the lines of the work that Paul is doing. So I invite you all to read this paper when it comes out in two weeks. It's also going to be submitted to the journal that a later speaker is the managing director of. I'm done. Here are some references to a bunch of cool projects in the broader research and commons space. So thank you very much.