What I'm going to be talking about today is the culmination of a lot of conversations between people in the Ethereum community and the Filecoin community. We've been working at the intersection of those two communities for a long while, but today I'm going to talk about how we can scale coordination using some of the tools that are starting to come out of the crypto ecosystem.

If you remember back in 2014, people were forking Bitcoin and creating different blockchains because they wanted to play around with different features. Maybe they wanted a naming system, or to experiment with some proof of work that does something useful, or with company shares and proto-DeFi as you had in BitShares, or just for the lulz, like Dogecoin. Then there was this insight from the Ethereum community: we don't actually have to make a new blockchain for everything, because that's very difficult and takes a lot of time. Can we just make the whole system programmable? That's essentially the history of how Ethereum was born.

This has brought us a lot of really cool stuff. We've been able to do things on chain, various forms of coordination and collaboration with capital, but we haven't really been able to massively scale collaboration, with really big online communities all working together to solve some big issue. I think part of the problem is that we haven't figured out how to do fair distribution of tokens. We have done some airdrops, but that's not the best mechanism. We have been able to build DAOs that have accrued large treasuries and have a lot of capital they can deploy, but the problem in these DAOs is that we don't have a good way to
distribute it. We have this bottleneck where we have to vote to reach consensus on everything, or have sub-DAOs, but that's still inefficient. And every contribution you make as a community member will always be siloed to one particular DAO, because you maybe applied to it for a grant or something like that.

You also have the problem of predictability. If you think about how Bitcoin and the whole crypto industry emerged, there was the Bitcoin block reward, and everyone coordinated around its predictability: I know how many bitcoins there are going to be in the future, so I can invest in my mining company or my exchange company or whatnot. You don't really have that if you're starting a DAO.

So what can we do? Well, there is a new movement of approaches trying to figure out how we can do larger-scale fair distributions, and I think of these as contribution graphs. Here are some examples. SourceCred was an early one that looked at contributions on GitHub and in online forums. Coordinape is a system where you can attribute a weighted score to all the people you work with in a community. Govrn is a tool that lets you say what you contributed and then attest to what other people contributed. Other examples: Praise by Giveth; Snapshot, which also builds this kind of graph over off-chain data and puts it on chain; GitHub-like tools; and Web2 communities are doing these sorts of things too. Just One Giant Lab is doing a community-based peer review system. But all of these are trying to build one particular feature on top of a contribution graph. I think what we're missing is that we can actually make the same move Ethereum did, but for a programmable contribution graph. That's what I'm going to be diving deeper into now. And so I think that
the first building block for this is something called an impact evaluator. I learned about this maybe three weeks ago, and it's a really cool concept that basically allows a community of contributors to have some shared objective, measure and reward progress toward it, and do that in an automatic fashion. The basic example of this is the Bitcoin block reward, but you can in theory run it over any sort of data, and I think the contribution graph is probably the most interesting data to run it over. So you measure the world, you evaluate it, and you distribute some reward. And if you do this with a fixed function, you can have the same predictability as you have in the Bitcoin block reward.

There are a few things to think about when building this sort of system. One is how we reward people who use it. One option is to distribute a new token with a distribution curve like Bitcoin's; you can think of that as a fair launch. We can do the same thing but with governance shares: if you're familiar with the Moloch DAO framework, it basically allows you to have shares in the DAO without being able to transfer them. Or you could disburse from some treasury, whether a DAO's, a personal one, or a company's.

The second piece we need to figure out is where to store this contribution graph, the data and the contributions. This is where Ceramic comes in, the protocol I've been working on. It's a decentralized protocol for composable data. Essentially, that means it allows developers to create data models and build applications based on those data models, and then anyone can compose and use these data models in their own application. So the data is open and not locked in. And there are a few
features that make Ceramic really good as a base data layer for these impact evaluators. The main one is that all data on Ceramic is verifiable. What this means is that it's timestamped, actually timestamped into a blockchain, so we have a tamper-proof timestamp that no one can alter. All actions taken by users form a hash-linked log specific to that user, so you can synchronize the entire data set, and it's stored in IPLD, so you can't modify it without that being noticeable. The most important aspect is that the data is attributable: you can see which user added which data, so you can see who did what. This is of course enforced by cryptographic public-key signatures, and these attributions are tied to your Ethereum address or your Filecoin address or whatever other blockchain address, an address that can actually receive rewards. And with this data layer, the data can also be replicated using peer-to-peer networking.

So what sort of applications are useful when we're thinking about these impact evaluators? I think any application you can imagine could feed into a mechanism like this. Some top-of-mind uses would be project management, planning and task tracking, opening and closing issues, maybe a personal contribution log, or evaluations of contributions, which is also interesting.
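Ceramic's real streams are signed IPLD commits controlled by DIDs, and none of that machinery is modeled here. Purely to illustrate the core idea of a hash-linked, attributable log (all names and structures below are invented for the example, not Ceramic's API, and signatures are omitted):

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash the canonical JSON encoding of an entry (a stand-in for an IPLD CID).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, author: str, data: dict) -> list:
    # Each entry commits to the hash of the previous entry, so history is
    # tamper-evident: changing any past entry breaks every later back-link.
    prev = entry_hash(log[-1]) if log else None
    log.append({"author": author, "data": data, "prev": prev})
    return log

def verify(log: list) -> bool:
    # Recompute the hash chain and check every back-link.
    for i in range(1, len(log)):
        if log[i]["prev"] != entry_hash(log[i - 1]):
            return False
    return True

log = []
append(log, "did:pkh:eip155:1:0xabc", {"action": "opened-issue", "id": 1})
append(log, "did:pkh:eip155:1:0xabc", {"action": "closed-issue", "id": 1})
assert verify(log)

# Tampering with history is detectable.
log[0]["data"]["action"] = "something-else"
assert not verify(log)
```

In the real system each entry would additionally be signed by the author's key, which is what makes the attribution trustworthy rather than just a claimed name in a field.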
I think one thing we probably want to move to verifiable data structures as soon as possible is DAO governance forums. Right now they're run by some guy who just has a server somewhere. Nominally they're run by the community, but the person who runs the server could change what people are saying and things like that, because the data there is not attributable and signed by the people in the community. Another thing I'm super excited about, especially if we can make these impact evaluators distribute rewards and payouts to people contributing, is knowledge graphs: representing different sorts of knowledge and scientific discourse in these attributable, verifiable data graphs.

Another thing that's important if we are to build these impact evaluators is that we need some sort of trust seed, a root of trust that we can base the evaluations upon. Otherwise someone could mount a Sybil attack on the system, creating a bunch of bogus data that might be hard to distinguish programmatically. So you probably want to start with some root of trust. The cool thing, though, is that if you're familiar with quadratic funding and what Gitcoin is doing, they have this Passport, and the Passport is actually already stored on Ceramic, so you can potentially pull that in as an additional Sybil-resistance mechanism here.

So let's start. If you remember, the impact evaluator function first looks at the world, so let's define what the world is. I think it's roughly this. We have data apps on Ceramic: a set of data models that is imported into the evaluation function, where the data sets define what the rewardable actions in the system are. It's then up to the function to define how to distribute the rewards based on what's in there.
We have the trust seed, the root of trust in the contribution graph that I just mentioned. Then we probably want to feed in what happened in the previous round, so we take the previous rewards as an input to our function; this provides a feedback signal that allows the system to understand what's going on. Finally, we probably want to import other data sources as well. Here it's important that it's not just some random data off the internet that we can't verify; we care more about things like the state of some smart contract on Ethereum or similar, where we can trustlessly verify that this was actually the state of the system.

So the impact evaluator function, based on what we've talked about, would basically look like this. You have an input, which is a DAG or a list of CIDs of Ceramic data (data on Ceramic, as I mentioned, is stored in IPLD). We have the trust seed, which is basically a list of accounts, and we have the previous rewards. The previous rewards are most likely something we want to store as a Merkle tree, so we can do Merkle drops on chain, which is just an efficient way of doing airdrops on chain. The return value of this function should probably be an updated Merkle tree of rewards, and potentially an updated version of the trust seed I mentioned.

Okay, cool. Now we have this function and we can evaluate it, but how do we actually bring the results on chain? Well, there are roughly three different approaches.
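Before going there, the function just described — contribution data in, trust seed and previous rewards in, updated Merkle root and trust seed out — can be sketched as toy Python. This is my own illustrative pseudo-implementation, not an existing API: the fixed per-round emission, the scoring rule (sum of scores from trusted evaluators), and the rule for growing the trust seed are all placeholder assumptions.

```python
import hashlib

ROUND_REWARD = 1000  # fixed per-round emission, in the spirit of a block reward

def leaf(account: str, amount: int) -> bytes:
    return hashlib.sha256(f"{account}:{amount}".encode()).digest()

def merkle_root(leaves: list) -> bytes:
    # Pairwise-hash up the tree, duplicating the last node on odd levels.
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = leaves
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def impact_evaluator(contributions, trust_seed, prev_rewards):
    """contributions: list of {"author": ..., "evaluations": {evaluator: score}}
    trust_seed: set of accounts whose evaluations count (Sybil resistance)
    prev_rewards: dict of account -> cumulative reward (previous round's output)"""
    # 1. Evaluate: only scores given by trust-seed members count.
    scores = {}
    for c in contributions:
        s = sum(v for evaluator, v in c["evaluations"].items()
                if evaluator in trust_seed)
        if s > 0:
            scores[c["author"]] = scores.get(c["author"], 0) + s
    # 2. Reward: split the fixed emission pro rata by score.
    total = sum(scores.values())
    rewards = dict(prev_rewards)
    for author, s in scores.items():
        rewards[author] = rewards.get(author, 0) + ROUND_REWARD * s // total
    # 3. Feedback: rewarded contributors join the trust seed for the next round.
    new_seed = trust_seed | set(scores)
    root = merkle_root([leaf(a, n) for a, n in sorted(rewards.items())])
    return rewards, root, new_seed
```

Running this once per round and feeding each round's `rewards` and `new_seed` back in gives both the feedback signal and the block-reward-style predictability: total emission after any number of rounds is known in advance.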
One is an oracle: either we trust someone completely, or we have some semi-trusted, tokenomics-based oracle system. This is the least secure approach, but it's the easiest to get started with, and there are cool possibilities here since the data on Ceramic is verifiable. If we have some sort of court system, like Kleros or other kinds of on-chain courts, oracle results can be disputed there, and the judges in the court can run the computation for themselves locally and be informed about what they should vote for.

Better yet is to have some sort of verification game, so fraud proofs essentially; that's what optimistic rollups use, and what Truebit uses. The problem is that these are very complicated to implement, so it would probably take a lot of effort.

I think the most ideal thing is probably a trustless proof, like a zero-knowledge proof, where we just compute a proof over this and put the proof on chain. Then we don't need to trust that the computation was done correctly; we can see it from the proof. However, doing this over large computations in an arbitrary language is not super straightforward. There are efforts like Lurk and zkWASM that look promising, but they're probably still a ways out.

Okay, so what would an MVP of a system like this actually look like? I think we can do it fairly straightforwardly. Let me just go through it. We have a set of contributors, and they make contributions which get written into Ceramic. Then we want to compute the impact evaluator function: essentially, a developer has written this function, they deploy it somewhere, and they run it over the Ceramic data.
This could run in something like Bacalhau, which is a compute-over-data system that operates on IPFS and Filecoin data. The output of the function could be a Merkle tree compatible with something called Astrodrop, which is essentially a way to have an updatable airdrop on Ethereum or any EVM chain. Then we want to put the results on chain. A lot of DAOs are already using Gnosis Safe, so a Safe is probably a good fit, and Gnosis Safe has this Reality Module, which is essentially an oracle that can, in the end, escalate to the Kleros on-chain court. Once the Merkle root is on chain, the community of contributors can choose to claim their rewards whenever they want; maybe they make a bunch of contributions and claim the rewards later, or claim them early. But yeah, this is essentially what an MVP could look like.

All right, so now that we've seen that, what are the next steps? Well, we're putting together a grant: Ceramic together with the Bacalhau team are going to have a shared grant up very soon, and the goal of this grant is to build out the MVP of impact evaluators. Once we have this MVP, the hope is that anyone can program any impact evaluator they like over any contribution graph. One of the first things we want to do is actually build an impact evaluator that evaluates contributions to the impact evaluator framework itself, and how useful the framework is to the rest of the community, so basically turning the feedback loop back on itself. We hope that can accelerate a lot of these efforts. And finally, of course, people contributing useful data sets to Ceramic in general will be useful not only to impact evaluators but to the Ceramic network in general.
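To make the claim step concrete: the contract only ever needs to store the 32-byte Merkle root (with, presumably, a way to post an updated root each round, which is what makes the airdrop updatable), and each contributor submits a proof that their (account, amount) leaf is in the tree. Here is a generic sketch, with an invented leaf encoding rather than Astrodrop's actual format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(account: str, amount: int) -> bytes:
    # Hypothetical leaf encoding; a real contract would use ABI-encoded fields.
    return h(f"{account}:{amount}".encode())

def build_tree(leaves: list) -> list:
    # Returns every level, bottom-up; the last level holds the root.
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]  # duplicate the last node on odd levels
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def proof_for(levels: list, index: int) -> list:
    # Collect the sibling on each level, noting whether our node is the left child.
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        proof.append((lvl[index ^ 1], index % 2 == 0))
        index //= 2
    return proof

def verify_claim(root: bytes, account: str, amount: int, proof: list) -> bool:
    # What the on-chain claim function would do: fold the proof up to the root.
    node = leaf(account, amount)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

rewards = {"alice": 750, "bob": 250, "carol": 100}
entries = sorted(rewards.items())
levels = build_tree([leaf(a, n) for a, n in entries])
root = levels[-1][0]  # only this 32-byte value needs to live on chain

proof = proof_for(levels, entries.index(("bob", 250)))
assert verify_claim(root, "bob", 250, proof)
assert not verify_claim(root, "bob", 9999, proof)  # can't claim a different amount
```

The proof is logarithmic in the number of contributors, which is why this pattern is the standard cheap way to settle a large off-chain computation on chain.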
So that's something we will likely support as well. If you're interested in this, please join our Discord; we have an impact evaluator channel there, so you can get involved. Yeah, thank you everyone. Do we have time for questions?

Hi, thank you for the presentation, it was really nice. I'm very curious whether you have any thoughts on use cases where this would be easier to implement. I assume the step of moving from a contribution in the real world to putting data on Ceramic would be hard for some use cases, like deforestation, because you need to have some trust in how good, how truthful, the records are. So I'm wondering if you have any thoughts on use cases where that is not an issue, and it's just piping that needs to be done.

Yeah, I think most of the use cases that are actually interesting are these subjective use cases, where someone makes a contribution that's basically a claim, "hey, I did this thing and it's useful for this cause," and then someone else evaluates it and says, "yeah, that was really good," or "that was not so good." That's the approach we have seen in some of these early, use-case-specific systems: it's based on the contribution graph, and it's rooted in a set of initial participants in the community, so they have the ability to, how do you say, direct the attention of the community based on what's actually helpful to it, if that makes sense.

Yeah, so you have some evaluators that say, "okay, this is good" or "this is bad."

Yeah, and you can think of the emergence of this system like this: there is an initial group that's evaluating other people's work, but then you can think of it as an emergent web of trust, where if the initial group sees that a bunch of people's contributions are very
useful, then these people who initially made a bunch of useful contributions might also become able to evaluate other people's work. So it's not like there's a fixed committee of evaluators; it's actually dynamic and grows over time.

Thank you.

Hey Joel, super interesting. I'm Andreas, a co-founder of Legacy. I'm actually going to talk right after this, so I won't spill the beans, but one of the things we look at is essentially having a DAO to govern our system. What we want to do is not have that driven by how much equity or how many tokens you have in the DAO. We also don't want one person, one vote, because there could be millions of people involved, and that becomes unmanageable. So the thinking is to basically give out voting tokens to people who qualify, and the way you qualify is by making meaningful contributions. I think what you just explained here would be exactly the approach for doing that. And I think the biggest challenge I see is in step one: how do you measure? If you have a semi-formal system, like GitHub, then you have a starting point, but what if it's different kinds of contributions? Well, maybe the question is: have you looked at different types of metrics for different types of contributions at this point in time, or is that still a research question for you?
Yeah. What I want to achieve here starts from the realization that a bunch of projects have already begun exploring this space, with different ways of evaluating contributions and distributing rewards. The goal of this effort is to create a general programmable framework, so that anyone can permissionlessly create a new way of evaluating contributions over an existing graph, or over a new graph of information. What we're trying to do is build the framework, and then allow the community to just say, "hey, I want to build an evaluator for X," and build that, but have it sit within a framework that anyone can plug into, if that makes sense.

Yeah. So the other question I have is this: one of the dangers I see is, for example, with Airbnb, where the host writes a review and the guest writes a review, and there seems to be a tendency that you don't say anything really bad and they don't say anything bad. So it's all one warm, fuzzy, happy family, and you don't really get real criticism; you don't really get a true evaluation. Wouldn't that be a danger here too, and how would you get around it? Is that something you have been thinking about?

Yeah, I think it's a good question. I think the way we get around it is that we can have multiple evaluator functions running over the same data, distributing different tokens. So you have this Airbnb-style review system, and then there's Airbnb-one and Airbnb-two doing different evaluations, competing networks over the same data graph. I think that's how we can get a more balanced view of what's going on.

All right, thanks so much. Thank you.