Thank you guys very much for joining us. Anyone who's participated in a DAO or any other sort of self-governed system knows that actually keeping track of contributions is really hard. So I'm going to give a little introductory story to help motivate why. Take a two-person model: we've got a kitchen, it's dirty, and I'm going to wash the dishes, and sometimes my wife is going to wash the dishes. The problem with this is actually observability. I notice every time I wash the dishes and how hard or easy it was; I get all of the observations about my contribution, and likewise she about hers. But I only sort of see when she washes the dishes. I notice the effects of it; I don't really know how hard it was or how much time she spent doing it. I have a model of that in my head, but ultimately, if you really think about it, I'm probably going to overestimate my contributions and underestimate hers, and we're probably going to have a little "I'm doing all the work, what are you doing?" discussion. And it's in good faith and friendly, because we're on good terms. But as a community grows, that gets harder and harder, because the relationships between the people involved get a little less personal and a little more based on the data, on processes, on things that get coded into these systems, either in the rules or in the tech. So I just want people to understand that this is a really hard problem that you cannot ever fully solve. But a lot of teams are trying to help people do it better and help them understand that difficulty. On this panel, we're going to explore some research about communities who have built these tools and used them.
We're going to explore some new tools that are being built and what motivates them. I'll start by inviting Ellie to tell us a little bit about her research in particular, because her work focuses on ethnography. It's more qualitative; it helps us understand how people feel about these systems, not just what the systems tell us about people. Sure, thanks, Zargham. And I should acknowledge that I've been doing this work with Zargham, and some of the insights here are his. So we've been looking at what was called the CredSperiment. This was SourceCred, who are still, to the best of my knowledge, building tools to map people's contributions and reward them. The tools are not used just by them, though they did dogfood their own product; they're also used in a number of other Web3 projects, some with a great deal of success, I should say. And I want to first acknowledge that when SourceCred decided to undertake the CredSperiment, i.e. paying themselves through this algorithm of their own making, a number of things occurred. Most importantly, look, people got paid, and people from marginalised backgrounds who might not otherwise have got paid did so. And there was a great deal of important documentation. But there were also some hard lessons that were learned along the way, and I think it's worth quickly outlining those to set up this panel. We know that legibility is important in these tools: you need to be able to see them, you need to be able to understand what's going on, and you need to be able to use them effectively for the policymaking of that community. And this, I think, is probably the crux of where SourceCred itself had some problems and struggled with their own tool. And the learnings that came out of that were as follows.
I think I would summarise it as: awareness of what was occurring became obscured, whether as turnover occurred in the organisation, as priorities shifted depending on who was there, or through the bottlenecks that occurred with conflict in the community. What we're seeing with these dynamics are influences that are not really easy to understand other than through observation, through attention, which is where ethnography comes in: trying to understand the affective presence, the actual experience of being involved in a project such as this. I would also add that the learnings they came up with, and which we had the privilege, I suppose, of witnessing, were around the need to put in place forms of governance or organisation that will enable a permissionless organisation working with tools such as this to use them effectively. As someone said, you need to institutionalise the feedback that is coming through your experience of working with the tool. The other key insight I want to put out there would be the importance of what people often refer to as on-boarding and off-boarding, but I think it's broader than that: documentation of those experiences. Really ensuring that things get collected and documented as you go, because if they don't, that knowledge is lost, and people come in and decide that they've got a great idea, or a particular feeling about something they want to contribute to the group, without realising that it's already been done before. You know, I once did an ethnography of an organisation where people had to retire at the age of 26, a youth organisation, and we saw similar things happening in this permissionless organisation: people re-learning the same mistakes, or putting things forward without realising that there was already a high degree of fatigue around that idea.
But as soon as they put in some documentation, for instance a simple thing like how to facilitate meetings, anyone could actually come in and do that. So those are some initial ideas of the challenges of these systems, where that particular community, which I think is probably the longest-running community to have attempted to use these tools, landed, and their knowledge of them. Thank you so much, Ellie. And I encourage the audience to check out Ellie's research; there's a stream of papers on these subjects, and it's a really valuable contribution to that memory, to that documentation. Going along this line of documenting the specific experiences of communities, we also have this public good of the researchers themselves gathering this information, then synthesising it and publishing it, to try to facilitate some of that transfer of learning between communities. In terms of other communities that have experimented with measurement tools, Jeff has been involved with the Commons Stack and the Token Engineering Commons and some other communities that have done a variety of participatory policymaking, and in particular used some Discord-based tooling called the PraiseBot to give each other credit, props, in various ways. It's a subset of some of the functionality that emerged in SourceCred, but it is significantly more legible, in that it's very clear who praised whom and when; on the other hand, there are very few constraints, so ultimately the process was determined largely by human behaviour, and the community went through a learning process of what that meant for them. I'd really love it if you could tell us about that experience and those learnings, and maybe what you take away from it as you move into and collaborate with other communities. Yeah, definitely. It was really interesting, and the Praise story goes back even further than the Commons Stack, back to Giveth.
And for anyone who knows Griff Green, he's one of the biggest-hearted people in the space, and there is this massive gratitude culture of just thanking each other publicly in Discord: thank you for doing this thing, and that would get logged. So that was the role of the bot, essentially: to take these statements of gratitude, which is a nice interpersonal touch, record them, and then of course later quantify them, which is where the conflicts eventually come in to all of these systems, I think. That quantification would turn into either a distribution of CSTK tokens, which is the reputation token of the Commons Stack and the Trusted Seed, or Impact Hours, which was at once a governance tool and a remuneration or reward tool for the Token Engineering Commons. So it was really interesting to see it evolve, because there were multiple parts to it: the interpersonal dynamics and the gratitude culture it espoused were really beneficial socially, but quantifying that always led to differences of opinion. Most people were praising each other, and then a few people were quantifying that praise. It was an open process, but it was also a very time-intensive one. And this is where SourceCred or Coordinape or other tools lower the admin overhead, as opposed to going through a sheet with thousands of different praises and trying to say, well, what's this one worth compared to that one? You can kind of sort things into buckets: okay, a tweet is worth a 20th of a paper, which is worth a 100th of a white paper. But it's very subjective, very contextual. So these were really amazing experiments, but ultimately they sometimes lead to conflict in communities whose members place different subjective value on what was completed. You may think that this type of work is worth more.
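The bucket approach Jeff describes, weighting each praise entry by contribution type and converting the resulting scores into a token distribution, can be sketched in a few lines. This is a hypothetical illustration, not the actual Praise quantification code: the weight table loosely follows the tweet/paper/white-paper example above, but the numbers, function names, and data shapes are invented.

```python
# Hypothetical sketch of bucket-based praise quantification.
# The weights loosely follow the tweet/paper/white-paper example from
# the discussion; real communities set (and argue about) these numbers
# through a human quantification process.
PRAISE_WEIGHTS = {
    "tweet": 1,
    "paper": 20,          # a tweet is worth a 20th of a paper
    "white_paper": 2000,  # a paper is worth a 100th of a white paper
}

def quantify_praise(praise_log, weights=PRAISE_WEIGHTS):
    """Turn a log of (recipient, contribution_type) praise entries
    into per-recipient scores; unknown types default to weight 1."""
    scores = {}
    for recipient, kind in praise_log:
        scores[recipient] = scores.get(recipient, 0) + weights.get(kind, 1)
    return scores

def to_token_distribution(scores, total_tokens):
    """Allocate a fixed token budget pro rata to praise scores,
    e.g. a reputation-token or Impact Hours distribution round."""
    total = sum(scores.values())
    return {who: total_tokens * s / total for who, s in scores.items()}
```

The subjectivity Jeff points at lives entirely in the weight table: change it, and the same gratitude log yields a different payout, which is exactly where the differences of opinion come in.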
Someone else may have a different idea, and it's only the people really in that role, in those shoes, who can understand the depth of experience of what goes into their work. So really interesting experiments, but ultimately what we need to do at the end of the day is apply a concept called Wittgenstein's ruler: if I take a ruler and I'm measuring a table, am I using the ruler to measure the table, or am I using the table to measure the ruler? A lot of the time with these experimental tools, I think we get a little carried away; there's tool worship across the space in general. But we need to test the outputs of these systems against our subjective rubric: did that measure what we wanted it to measure? Because as soon as you incentivize anything, you create that behavior. This is Goodhart's law, well studied in this space: wherever the incentive is, you get the behavior. So I think it's just really important to make sure that we keep revisiting the core values we are trying to measure, because if we don't, we get lost in a loop of quantification for quantification's sake, and I think that can lead down a lot of different pathways. So we just have to be very conscious and intentional with the use of these tools. For my part, the thing I'd like to emphasize here is that the measurement apparatus is not something that has an objective ground truth. It's something that, at best, our communities can participate in: continuously, even contentiously, discovering what the right measurement process is for that community at that time in its life. And with that in mind, I'd like to pass the baton to Aaron to talk a little bit about what they're building at Govrn, and why. Yeah, thank you, and I'm so excited. Shout out to all the projects that have already been mentioned.
We've taken so much inspiration from all of them, and it'll actually be really interesting. So at Govrn, we're building tools to help DAO contributors track and manage their contributions. If you were here for the intro, you heard that Govrn was helping make more democratic societies, helping people self-govern in politics, and it's actually quite interesting: that is where we started. We learned about DAOs, we learned about the power of online collective movements, and we built these really complicated mechanisms for how people can self-govern their physical, IRL communities, how you can take the power of online and apply it to IRL governance to power cities. And as we started to dive down that rabbit hole, we saw the problem that Michael exemplified with his first story: before you even get to these really complicated mechanisms, there's a base layer of understanding what each other is doing that actually needs to be built. Until we can understand what each individual person has been doing to collectively contribute towards a goal, none of these other complicated things matter so much. That's what we've transitioned to working on, and we're experimenting with a lot of specific DAOs, going down to the actual DAO level and allowing them to build a contribution primitive, a contribution language. And one of the things that's really interesting, which Michael just pointed out: I remember when we started working down this path, we transitioned to say, okay, now we're just focused on managing contributions, building an objective primitive layer of what people have done and making it very trustworthy. And we spun up a research group; I want to give a huge shout out to Christine, who's in the audience and has been leading this research, along with Zargham, Ellie who's on the call, Livia from the TEC, and Nick Vincent, who's not here right now.
And I went to them and said, okay, let's create an objective language that everyone can use. And the first thing, it was the craziest thing, the first thing every scientist, every academic said to me was: there's no such thing as objective truth, Aaron. It's all going to be subjective. And, and this was the crazier part to me, I said, well, why don't we try? And it might even have been Zargham who said: we actually don't even want to. We don't want to force everyone to use the same language. What is so beautiful about this contribution primitive, about people contributing towards a common goal, is the flexibility, the nuance that people bring to it. And what we've been experimenting with now is the ability for communities, for individual groups, to come up with their micro language, if you will, their contribution primitive, and how we can then take all these micro contribution languages, these primitives people are coming up with, and create a macro-level one as well that everybody can use. And the ability to break that down or roll it up, as you move the slider, is really something unique and beautiful. Yeah, so I would like to highlight, then, that in order to get to something that is sufficient to be useful, we need something a bit like natural language, where you adapt your dialect. There might be a shared language.
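One way to picture the micro/macro idea Aaron describes is a per-community mapping from local contribution types into a shared vocabulary, which can then be rolled up into one macro view. This is a toy sketch; the community names, dialect tables, and functions are all invented for illustration and are not Govrn's actual data model.

```python
# Toy sketch of per-community "dialects" rolling up into a shared
# macro contribution vocabulary. All names here are invented; this
# is not Govrn's actual schema.
from collections import Counter

# Each community maps its local contribution types (its "micro
# language") onto shared macro categories.
DIALECTS = {
    "research_dao": {"lit_review": "research", "memo": "writing"},
    "dev_dao": {"bugfix": "engineering", "docs_pr": "writing"},
}

def roll_up(community, contributions):
    """Translate one community's locally typed contributions into
    the shared macro categories, counting per category."""
    dialect = DIALECTS[community]
    return Counter(dialect.get(kind, "other") for kind in contributions)

def macro_view(per_community):
    """Aggregate every community's rolled-up counts into one
    macro-level view (the 'slider' fully zoomed out)."""
    total = Counter()
    for community, contributions in per_community.items():
        total += roll_up(community, contributions)
    return total
```

Moving the slider the other way just means reading an individual community's dialect table instead of the aggregate; the dialects and the shared vocabulary can both evolve independently.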
So we might imagine that the schemas being built into the Govrn software represent, let's say, English: a natural language that all of the communities share at some level. But rather than saying, why aren't you speaking the Queen's English, you're saying: look, the emergence of dialects that are specialized to the context they're used in is natural and appropriate. And furthermore, those things will evolve over time. Both the central shared language, the core canonical one, and the dialects themselves can be expected to evolve with context, in order to remain in alignment with what community members actually need to express on an ongoing basis. I'd like to use this as a pin to come back to: what is the purpose of this software? Well, it's to empower people to drive outcomes. We're going to come back to this meta-measurement problem, and I'm going to tell another little story, this one from my time in undergrad, in a robotics class, where I was trying to program a robot for the first time. It was supposed to get into a little white circle on the ground, and I'm looking at my computer, where I've done the programming, and the computer is telling me that the robot is in the white circle. According to the data, the robot has succeeded. Meanwhile, I'm looking at the robot banging into the wall, going, huh, how am I going to fix this? A lot of the time we're only ever looking at the data in the computer. We go, yeah, we did it, number go up. But if you actually ask people, that's a bit like looking at the robot, and they're telling you, yeah, it's not working right.
So I'm going to throw it back to Ellie to talk a little bit about the hybrid methods we've been developing with MetaGov and elsewhere, to try to bring some of this observability of the human dynamics into these digital spaces. Because, to be clear, it's very hard to look outside the system when you have a digital-first community. What is outside? What does it mean to look at the robot crashing into the wall? We need to ask people. But what does that look like? So, Ellie, can you tell us a little bit about ethnography in digital spaces? Yeah, sure. As everyone here knows, a lot of these communities happen mostly online, in applications like Discord, which can get unwieldy and confusing even if you're trying really hard to follow along. And for someone like me, who's trying to observe that and document it, the challenge is quite immense. So what we did, working with MetaGov, was create a Discord bot, which we called the telescope. The idea of this bot is that anyone within the server can put the telescope emoji on a comment that they think is important for the researcher to see. And, importantly, the bot then sends a message to the author of the comment and asks: do you agree to have this included in the data set, and do you want it anonymous or attributable to yourself? So there's a kind of ethics in there, a consent process, which is the other thing that can make doing research in these spaces incredibly difficult: how do people know that you're actually doing the research? You don't know when they've come in or out, and you can't constantly be telling them that you're a researcher observing them. So we built it mostly for those consent reasons, but it became clear very quickly that it also gave people within the community a very tangible way to participate in the research themselves.
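The consent flow Ellie describes, a telescope reaction, then a consent question to the comment's author, then recording only on agreement, can be sketched framework-agnostically. In the real bot this logic would hang off a Discord reaction event (e.g. discord.py's `on_raw_reaction_add`, with the consent question sent as a DM); the function names and data shapes below are invented for illustration.

```python
# Framework-agnostic sketch of the telescope bot's consent flow.
# Names and data shapes are invented; the real bot wires this logic
# to Discord reaction events and DMs.
from dataclasses import dataclass, field

TELESCOPE_EMOJI = "🔭"

@dataclass
class Dataset:
    entries: list = field(default_factory=list)

    def record(self, author, text, anonymous):
        # Store the comment, dropping the author if they asked
        # for anonymity.
        self.entries.append(
            {"author": None if anonymous else author, "text": text}
        )

def on_reaction(emoji, author, text, ask_consent, dataset):
    """Handle a reaction: if it's the telescope emoji, ask the
    comment's author for consent (in Discord, via DM) and record the
    comment only if they agree. Returns True iff it was recorded."""
    if emoji != TELESCOPE_EMOJI:
        return False
    consented, anonymous = ask_consent(author)
    if consented:
        dataset.record(author, text, anonymous)
    return consented
```

The design point is that the recording step is gated on the author's answer, not on the reactor's: reacting only nominates a comment, and consent plus the anonymity choice stay with the person who wrote it.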
And it created a channel inside the server where they could see the things that were being telescoped, and then, interestingly, within the channel displaying this feed, people started discussing events that had occurred in the past. So for me, that was a really interesting process. I should say, SourceCred is quite a small community; they're very active and pay a lot of attention to their own tools and bots, so maybe it worked well with them for that reason. But I do think that, even as qualitative researchers, it's important for us to be developing tools that correspond to how those communities work and enable them to engage in the research more, rather than being the equivalent of the white researcher going to live among the natives, which is otherwise how it often turns out. And the value there also is that you have something you can triangulate with other methods, with things that might come from the algorithm itself, in the case of SourceCred or whatever particular tool they're using, that give you information. So the intention going forward is to figure out how you take the qualitative information, how you take the kinds of things Aaron outlined in terms of what people are putting in and saying they're doing, and use that as a feedback mechanism itself, to make something like humanities research more active and valuable inside communities. Thank you very much. We have about seven minutes left, and one thing I really like to do with panels, when appropriate, is take questions from the group, because I like to keep things participatory. So I don't know, Eugene, if you want to facilitate, can we take a few questions from the audience? Thanks. I'm Will from Protocol Labs.
Aaron, I'd love it if you could hone in a little more specifically on how all this applies to Govrn's work, like focusing on campaign finance and how these concepts have evolved in your work. Yeah. So without taking up too much of the time: specifically, we started in trying to democratize campaign finance. That's where we started. To do that, we said, okay, we have to build decentralized lobbying organizations, decentralized political organizations. What we realized is that what makes a lot of these governmental political organizations so unique is that they bring together a lot of disparate actors and stakeholders that do a ton of different things: academics that do research, politicians and bureaucrats that understand the constraints of the system, people brought in to articulate the problems they're feeling, people who do studies, operators, government officials. It's a bunch of different stakeholders, and what the really good political organizations do to make these things work is coordinate all these people. That's what they do: they essentially coordinate. And one of our first principles was that we didn't want to be a lobbyist ourselves; we didn't want to become a centralized force ourselves. So we tried to develop a tool to allow all these disparate stakeholders to better coordinate amongst each other. And in building that tool, we realized that's the tool that needs to exist: a way to track and manage the contributions you are making to a network, ultimately trying to build a contribution graph, the same way SourceCred is trying to as well. How do you see how different participants actually contribute, so you can all work around them? So we've taken all that work we've done, and we're now really focusing on that contribution graph piece. And before we go to politics, before we go to government, we're starting with DAOs. We're starting with normal organizations.
I actually want to give a big shout out to a DAO called DreamDAO, who do what Ellie was talking about earlier, documentation, very, very well. It's made things really easy: not only do they document the way they contribute really well, their output is all documented too. And so we're building out this contribution primitive specifically with DAOs now, but we believe online organizations are going to be what eats the world, and as that spreads into more IRL governments and communities, we'll be able to take these tools with them. Next question. Hi, I'm Theo from DAO. Personally, a huge fan of user research, user centricity, and I really appreciate the work you all are doing. But some people say Web3 is just another technology, and if you had asked people what they wanted when the horse switched to the car, they would have said a faster horse. So what is your take, in the light of user-centric research: does the user really have an informed opinion about how these systems work and how they should work? I'll take that. I think it's important to understand that just because people have the opportunity to participate in the governance of something, it's not fair to obligate them to. My preferred analogy relates to something like the transportation system here. It's permissionless public infrastructure from a usage perspective, but it's not permissionlessly provisioned, and as a result we have a degree of, let's say, technocracy around who has the right to actually design and build a bridge, a road system, et cetera. By taking a more historical infrastructure perspective, we start to realize that, at least at an abstract level, the kinds of governance patterns we need are not new. But on the other hand, they could still be viewed as very new, in the sense that, let's say, some high-tensile steel makes it possible to build a kind of bridge, a span of bridge, that was not buildable before.
You might still need to develop the expertise to apply it, and go through a lot of cycles to determine who's going to fund it, whether it needs to be funded at all, or have a participatory process for determining where it gets built. So people can be involved in some critical decision-making that's relevant to, say, purpose or desired outcome, while at the same time not pressing all of the technical details into the hands of the end users. As we think about how to factor out expertise from the consequences of the application of that expertise, I think that's how we have to start to deal with any emerging technology that's applied in pursuit of some public interest. And one thing, if you'll let me add: I'd actually want to give a shout-out to Jeff and the Praise project, because something we learned from Praise was really interesting. It's a little bit different from Coordinape, if you're all familiar with Coordinape. The idea is that Praise recognizes work: you understand work being done through the eyes of someone else. You say, I'm giving you praise for this thing that you've done. And you can look at how other people view the work being done as a roadmap, almost, to what should be getting innovated on, right? It's a way of saying, not so much thanks for a faster horse, but thanks for getting me here faster, and then you go, oh, that's the thing people want to see. So I think Praise is kind of an unreal unlock for us, at least. Hi there. My name is Ursh. My question is: in this form of digital governance, what do you see as a major risk compared to traditional governance? And do you think it's susceptible to the same bureaucracy and echo chambers that exist in the traditional setting? I'll give a short answer and then pass it off, but I actually don't think it's very different.
I think that whether you render your rule sets and your processes for coming to collective decisions in legal prose, or whether you codify them into software, you're ultimately just making information processing systems, and those systems degenerate either because inputs are used to skew the system rather than to help it work as intended, or because certain assumptions get entrenched and come to be no longer a good fit for the system in place. And although those are very abstract statements, when you zoom out enough to think of these as information processing systems, I don't think the material they're rendered in really changes their failure modes very much. But I suspect Ellie has a more concrete opinion from her experiences. Look, I think a big risk, and this relates to the previous question, is that people are coming into these groups with history, with experiences of workplaces or of being in the public or whatever it might be, that are not necessarily positive. And when they come into these radically different ways of doing things, they may replicate some of those behaviors. That might be, for instance, not understanding that they can collectively change the system to suit their community. Or it might be, and I think we've seen this across a number of projects, that the tools themselves can easily become tools of social manipulation, whether intentional or not; often people don't even realize the power they have over others. So I think the main risk is that a community really needs to be able to hold an understanding of what its intention is, what it's trying to achieve. Alignment was a word that came up a lot in the SourceCred interviews I did, because the community ultimately became about things other than developing the product at times.
And the developers didn't all get rewarded, or their work wasn't credited, for want of a better word, as much as it should have been. So there are many, many risks, and the only way to avoid them is to have a kind of hyper-awareness and to build that into the processes. Well, thank you very much. We're out of time, so I'm going to give the floor back to Eugene.