Thank you. Welcome, everyone. This is the weekly TSC call. I recognize every name on the call, so I assume you know the drill. This is a public meeting. Everybody's welcome to join on two conditions. The first one is to live by the antitrust policy, which keeps us out of trouble, and the code of conduct, which you all know and love and live by. With that taken care of, we can move to the agenda. In terms of announcements, there's not much this week, except to remind everybody of the weekly newsletter, which of course everybody should make an effort to take advantage of. Are there any other announcements anybody wants to make at this point? If not, we can move forward. Here's a whole bunch of quarterly reports that are due and now overdue. We have one that was actually submitted. Thank you for doing that. The Besu team posted a report. There was one specific question they raised, which I put on the agenda — we will talk about it separately — which has to do with DCO. Other than that, I didn't see any other issues. Grace or Daniel, is there anything you want to highlight besides the DCO issue? Nothing on my end, you know, just a typical quarter with progress being made. So all good things. But yeah, we can talk about DCO in a bit. All right. Yeah, just DCO. Okay. So again, for those involved in those projects — Quilt, Explorer, Avalon — all of those are due, and overdue for several of them. So please get to it, and get the word out if you know people who should have done it. All right. The next report due is Cactus, by the way, so be ready for that one; it's due next week. I didn't see anybody raising their hands, so I assume there's no question for the Besu team either, and we can move forward with the agenda and get to the crux of this meeting, which is a presentation by Silas on Vent. Silas, are you there? I saw you; I haven't heard you yet. Hello, I am. Hi, can you hear me okay? Yes, I can hear you. Thank you. Welcome.
So you have slides you want to share? Yeah, just a few. Let's go. So you should be able to share your screen and begin. There you go. Great. Okay, so the background on this is that Vent is formally a separate component that got merged into the Burrow code base. It runs as its own service, and it has had some new features added recently. So first of all, what is it? It is a SQL mapping layer. I think pretty much anyone I know who's worked with a blockchain has ended up building something that does something a bit like this. The basic idea is it takes data that are in your smart contracts and maps them into SQL tables, and those tables are treated as read-only. So it's a query view, essentially, over the data. And it's not all the data — what comes over is up to how you define your domain model. It supports Burrow — Burrow has a gRPC event stream — and it also now supports Ethereum. It's able to do that because there's a common definition of events shared between Burrow and Ethereum, because they're both EVM-based. Broadly, the idea is that once you're in a database — Postgres and SQLite are supported — you can query much more quickly than we could do natively querying the Merkle state of the chain. So the basic idea is that you emit some Solidity events. If you're programming in Solidity, events fill a few niches, but essentially they operate as things that you can fire; they're serialized as if they were function calls, and they are stored in the blockchain state. It's actually much cheaper in terms of gas cost to store events than to store in contract storage. They're also a lot nicer to use in some ways, because you can't actually get the return value of an externally called function in Solidity, but you can see the events that were generated in a call path. So it's quite a nice way of providing a coarse-grained execution trace. And that, coupled with the fact that it's a lot cheaper —
You'll find that some contracts, like the exchanges and so on, have most of their state in events. The slight caveat to events is that in principle the chain is allowed to clean up very old ones; in practice, I think a lot of chains don't do this. So Vent will sit there and listen to these events and create SQL tables for you, and at any point you could listen to all the events again and get the SQL tables back. But I'm jumping the gun here a bit. So you emit Solidity events and you define some SQL table projections. When Vent starts up — it runs as a service — it will read the projections and build tables that represent those projections, and as it runs, it will take the stream of events and map them into rows of the tables. Just to give you some flavor, since perhaps people here haven't written Solidity: this is what an event looks like. You've got the type, and the indexed keyword — certainly in Ethereum land, an indexed field is something that you can filter by topic. The low-level opcodes for events are these LOG0 to LOG4 opcodes, which are able to emit an event with a different number of what they call topics. There's typically a Bloom filter over these in, say, mainnet Ethereum, so if you put anything into an indexed field, you get quick access over a large range of blocks and can pull out those events. That's useful: obviously, if you're looking for a needle in a haystack amongst all the different contracts, you can filter on the contract address, but also on the field. So there are various design patterns using these indexed fields. So there's an update event there — you can call the events whatever you like, but it's indicative: this event will cause a table to be updated. And the one below is the deletion event, but I'll come on to the two modes that Vent can operate in. So a projection looks something like this.
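The kind of event declaration being described might look something like this — a minimal sketch with hypothetical contract and event names, where the indexed keyword marks the fields that become filterable log topics:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity >=0.8.0;

// Illustrative contract; names are invented for this sketch.
contract Registry {
    // Indexed fields become log "topics", so clients can filter on them
    // cheaply over large block ranges; non-indexed fields go in the log data.
    event UpdateThing(bytes32 indexed id, string name, uint256 amount);
    event DeleteThing(bytes32 indexed id);

    function setThing(bytes32 id, string calldata name, uint256 amount) external {
        // Emitting an event is much cheaper in gas than writing contract storage.
        emit UpdateThing(id, name, amount);
    }

    function removeThing(bytes32 id) external {
        emit DeleteThing(id);
    }
}
```

The two events mirror the update/delete pair mentioned in the talk: one drives row upserts, the other row deletions, once a projection maps them to a table.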
You have a table name — that table will be created. Now, you can have multiple projections going into the same table, and you might want to do that if you want to give them different filters. A projection can have a filter: there's a little PEG-grammar query language here that allows you to filter on events and other metadata coming through. So you can pick out particular events, you can aggregate multiple events into a single table, and you can use multiple projections to do some quite interesting things. There's kind of an implementation detail that leaks through here — something we could tidy up a bit, actually, now that we have some extra metadata — which is the delete marker field. What this says is: if I get a field whose name is the underscore delete marker, I'm going to treat that as the delete. So we basically have the CRUD operations here, because, when we get to the modes, you can operate in a mode where you update tables — the upsert mode — or the log mode. The primary keys determine what will happen here: if we get a match on primary keys, we'll do an upsert to an existing row. So this gives you the basic things you need to map stuff going on in an Ethereum-based chain to stuff that you can understand from — the way we use it — a more traditional JavaScript/TypeScript-based API that spends most of its time looking at the database. So that's projections. And then you get tables. This is a simplified view of the tables you might get out of the projection and events I just showed. The chain metadata here is quite abbreviated, and this is quite a powerful feature of Vent: the fact that this is based off a blockchain, which has this heartbeat and total ordering. You've got things like the block index, the TX index, the height, the transaction hash — a bunch of metadata, often bigger than the table itself, that describes the last update that happened to that row.
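A projection spec along the lines described might look roughly like this JSON sketch. The field names here are illustrative, reconstructed from memory of the Vent spec format — check the Burrow repository for the authoritative schema:

```json
[
  {
    "TableName": "things",
    "Filter": "Log1Text = 'UPDATE_THING'",
    "DeleteMarkerField": "__DELETE__",
    "FieldMappings": [
      {"Field": "id", "ColumnName": "id", "Type": "bytes32", "Primary": true},
      {"Field": "name", "ColumnName": "name", "Type": "string", "Notify": ["things_changed"]},
      {"Field": "amount", "ColumnName": "amount", "Type": "uint256"}
    ]
  }
]
```

Because this projection declares a primary key, Vent would run it in view (upsert) mode; dropping the Primary flag would give the append-only log mode discussed below, and the Notify array feeds the Postgres notification channels mentioned later.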
That can allow you to do some interesting things I'll come to — but perhaps I'll actually reorder this. So yeah, in terms of the modes: if you give it some primary keys, it operates in what we call view mode. The view is kind of a materialized view over that projection, and it will upsert those rows where there's a match on the primary keys. If you give it no primary keys, it operates in a different mode, which is an append-only mode, which is very nice for event-driven systems and creating a log. For example, we use this to build something that's a bit like — if you think about what Kafka does, it uses ZooKeeper to do consensus, basically, and that allows it to guarantee exactly-once delivery with crash tolerance. We get that by using the fact that the blockchain has done the hard work of creating the sequence number — it's a vector of height, TX index, event index. And that's what we often use the log mode for. Like I say, switching between these modes is implicit in whether you've got the primary key field in the projection; so, for example, this is a view-mode projection. Yeah, so just about the domain modeling: the most obvious analogy would be an object-relational mapping. Now, this isn't something where we just take every contract, take all of the data stored in it, and put it into tables. It's an explicit intermediate layer of modeling that we're doing. We're defining some events that we care about emitting — which we might listen to in other ways, not just for building tables — and they kind of define the core state machine that we're representing. And then we're mapping that stuff. So, I don't know — it's not exactly object-to-relational, it's event-to-relational mapping. But it's kind of interesting to work with that layer of indirection, I've found, just because it makes you structure your domain modeling in quite a nice event-driven way. So, features: yeah, we've got what I call block stamping.
I'm not sure that's a term, or the term, but that's just the idea of the metadata: you've got a very strong vector clock running alongside every row in your tables, and you can do useful stuff with that — that is kind of the benefit of having this driven by a blockchain. So you can update projections: if you want to redeploy, the source of truth is the chain. You don't have to back up your database — you can if you like — but Vent can pretty quickly chew through fairly long chains with lots of events and rebuild the tables. Now, if you want to change your table model but keep your event model the same, that's perfectly doable: you can update your tables, maybe split them into separate ones, whatever. So there's quite a lot of nice refactoring you can do using the projections as leverage, which is somewhat more automated than doing database migrations, for example. And you can rewind and replay state. One way we need to use this is for chain reorganizations on mainnet Ethereum: for short-lived forks, we need the ability to go back to a previous state, and that's all embedded — we can replay from the chain itself, and there's also a log structure in a system table that can allow you to do that more quickly within the database. Yeah, and you can do deletion — the CRUD stuff I've touched on. So we've got Postgres and SQLite support. SQLite support requires cgo, so you are pulling the C ABI into your builds, but if you have to do that, it's quite a nice embedded option — and Postgres is just great. We also have support for Postgres notifications. If you look here, there are notify arrays; the elements of those arrays are the channels, and that says: if there is an update to this field, emit a JSON-serialized version of the insert or upsert that happened, on these channels. This allows you to have listeners attached as Postgres clients, which can be useful for a bunch of things.
And yes, the new thing that kind of prompted me to try to bring this to a bigger audience is that, relatively painlessly, we've been able to add Ethereum support. The way we're trying to use this plays into a kind of oracle use case, where we're listening to some contracts on mainnet Ethereum and building tables. The nice thing is we've been able to blend our Burrow — our kind of proprietary API — and Ethereum into a single domain model, using projections as the intermediate point. It works very similarly; Ethereum doesn't have a gRPC stream, so it has to do some batching and so on, but it uses the standard Web3 JSON-RPC, and the call is eth_getLogs, which will pull down some filtered logs. And we've been able to define an interface for a chain that — I mean, it does have an Ethereum core to it, but it's relatively minimal — allows you to add support in Vent for other chains, if they can expose something sufficiently similar. Another note on synchronization: when we're making writes into the chain and reading from the database, often what we want to do is make sure that the last write we made has hit the database. It doesn't take long, but you can have synchronization issues if you don't check. So the JavaScript library that Vent ships with has a helper for this, which basically, once you attach it, will block until Vent reports that a high watermark in terms of block height has been reached, before it returns. And this gets organized into a contract here: with this contracts.do call, the lexical block function inside will only exit once Vent is synchronized to whatever the return value was from the last contract call. That's being captured kind of implicitly by this Vent listener, but basically the point is there are some helpers that make sure you can wait until the last write to the blockchain has hit the database.
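The eth_getLogs call mentioned is part of the standard Ethereum JSON-RPC interface; a request body for pulling down filtered logs looks roughly like this (the address and topic values are placeholders, not real deployments):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getLogs",
  "params": [
    {
      "fromBlock": "0x10d4f",
      "toBlock": "latest",
      "address": "0x1234567890123456789012345678901234567890",
      "topics": ["0x..."]
    }
  ]
}
```

The topics array is where indexed event fields come back into play: the first topic is the event signature hash, and subsequent entries filter on the indexed arguments, which is what makes batched polling over large block ranges practical.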
So yeah, future work. It was kind of pleasing to see how easily the chain interface dropped out — I was tempted to show it, but it was too big for the slides; you can see it in the vent folder of Burrow. It would be kind of interesting, particularly with other EVM-based chains — I mean, it should work out of the box now with anything that supports Web3. There might be more efficient RPCs available, but it would be interesting to see it work with other chains. It should work with Besu now, for example, without any more work. But yeah, seeing the chain interface implemented elsewhere would be interesting; you would need to have some comparable thing to an event, but that potentially could be mapped to a function call. Some other work yet: supporting things like JSON for complex Solidity types — arrays and objects and stuff like that — would be quite useful. Also things like being able to generate views that are based on multiple tables, or generated columns, and include them in the projections — there's a bunch of things. But so far it's been quite a practical tool, is what we've found. I just wanted to make people a bit aware of it, and I'll share these slides. But yeah, if you're using Web3 anywhere, please consider it if you have a need for SQL mapping. All right, thank you, Silas. So again, the reason we take the time to do this is not just, you know, for our own culture, although that is always good to do. Although this is a component that was developed as part of the project, the belief is that it might be of use to other projects. And so Silas was hoping that by socializing it a little bit more, letting people know what it's about, it would trigger interest and possibly collaboration with other projects that could pick it up and use it. So, are there any questions for Silas, or comments — any reactions, anybody saying, yeah, this looks cool, maybe we could use it?
Um, does using this mean we have to use the whole Burrow package, or is it something smaller that falls out of Burrow? Yeah, it's a good question. So there's no hard separation between it and the other Burrow code — there are some shared packages that Vent uses that live in Burrow, particularly things like the ABI, which is the Ethereum ABI that it uses to decode Burrow events. But it is a completely separately operated thing: you run burrow vent start. So, yeah, you've got the binary size of Burrow, but you shouldn't be exposed too much to bits of Burrow you don't use, and you don't have to run Burrow, for sure — it runs as its own process. Like I say, it was kind of more of a monorepo play to bring it in, because there tended to be quite a bit of dependency between the two. It's not impossible to split it out; it would just need to justify the added development effort. But yeah, it's a pure Go binary unless you build the SQLite support, so hopefully it would be relatively low-dependency outside of that — you know, a single static binary that doesn't require any other Burrow stuff. So what I'm asking is: I think there's probably space for Hyperledger to grow beyond just pure DLTs and build stuff that helps people who use DLTs do useful things, independent of the DLT. And I see you've mentioned other chains in future work. So I think this is interesting. I don't know if I'll have time to help contribute to it, but direction-wise I think it could help — you know, ease the day of blockchain developers who might need to integrate this sort of reporting functionality into traditional systems. I think there's a lot of space that we as Hyperledger could look into to help developers do their job without requiring they go straight to the DLT for their work. Yeah, that makes sense.
I mean, the area that's kind of interesting to me — it's quite early to be describing it in this way — but if you think of layer-two or state-channel-y type things, that's kind of how we're using Burrow here. If Ethereum is our mainnet, we are keeping a bunch of our state off on a side chain, and we've got, at the moment, one-way communication. But you could consider Burrow running here as a kind of state channel. So it's kind of nice for them, I think, to co-evolve, but they are quite distinct in their dependencies, at least. Yeah. Layer two is super hot in Ethereum right now, so I'm very familiar with a lot of those questions and concerns, to be honest. Yeah. We do have a question in chat from Greg, which says: it's Ethereum-centric, isn't it? It is Ethereum-centric insofar as the events go. It's hard to say that you couldn't codify the events and kind of abstract them — I don't think that would be desperately hard. It depends what the concepts are, but for other chains, I'd say that fairly core to the design is the idea that there are events. Right now, those events come from two different RPC calls, but they are decoded using the same encoding, which is the event ABI encoding from Ethereum. So there's a bit of extra work to do if you wanted to put this onto a chain that didn't serialize its events like that — and no other sane chain would serialize its events like Ethereum unless it had to. But I don't think it would be that hard, if you have an event notion, to map it to this. Oh, thank you for answering my chat question. Anyone else — any questions or comments? Looks like not. Grace? Yeah, just wondering if you will share the presentation and the link to the repo and stuff — I think we'd definitely like to explore it a little more. Yes, yes, I will.
I'll push the presentation up, and I will drop it in the TSC list and chat in a sec after this. Yeah. And then we'll put the links from the agenda on the record of the meeting as well, so it's easy for people to find. Yeah, if you have a PDF, email it to me, or just drop it in the TSC minutes — that'd be awesome. Yeah, I can generate a PDF of this; what you're looking at now is HTML, if you prefer. Okay, thank you. All right, thank you. Sounds like we're done with this. Thank you, Silas. All right, so with that done, we can move to the agenda section about discussion items. The first one is in relationship with the report we got from Besu, in which they brought up the issue with the DCO, which they see as a barrier to contribution. So, I don't know if Grace or Daniel wants to speak to this. I'll take a stab at it; Grace might have a bit more context. A lot of what we're seeing is there are some people who come in for their first contribution, usually to the docs repository. They'll come and they'll do a contribution. They won't fine-tooth-comb read CONTRIBUTING.md and read about the DCO and the flags on the CLI, because they're probably using something like Visual Studio Code or IntelliJ to just contribute their stuff easily — or even the editor inside GitHub, which doesn't support sign-off. So they'll do their contribution, they'll open their PR, and then we'll get a big red X next to it that says: you don't have a sign-off by-line in your contribution PR. And, you know, one of the docs maintainers will come in and say, hey, you can run this on your side, it'll update it — you know, with force push and all that other stuff — here, you just need to copy and paste this line into your CLI. And we never hear from the contributor again.
Altogether, you know, adding this little bit of a barrier for some of these first-time contributors is honestly too much. So it's really hard for us to grow beyond anybody that's not a committed enterprise developer who is very dedicated to getting their changes in and pushing them through to the end. It narrows the top of the funnel for growing new contributors, if their first experience is us saying: hey, you didn't do all the style points right, so we're going to ding you for it, because we have to. So one proposal that crosses my mind that might be useful: in Besu we basically don't do plain merges — we do squash merges, where we take all their PR commits and squash them into a single merge commit. The DCO requirement could be relaxed so that if they miss it within the PR, that's okay, as long as in the comments they say: yeah, this is my contribution and it meets the standard. Then we squash-merge it into one commit, and that commit has a DCO line. What lives in the final mainline will have full DCO entries for what is actually shipped and published, and for traceability you can go back into the PR and see the statement that yes, this is all covered — it's just not necessarily on every single Git commit along the way. Sorry, Gary. So, I still don't fully — well, one, I'd like to understand the solution: what if somebody only had a single commit and it wasn't actually signed off? But I guess I'd also like to understand — I'm not sure I have a problem with whatever the process is, if we have a clever solution that everybody else agrees to — but have you actually ever talked to these committers? Is that really why they disappear? Because I don't get it, to be honest with you. I mean, look, when I was a rookie —
Back in the day, right, it happened, and occasionally I still forget to do it sometimes. I mean, I contribute to Node.js and various other projects that all require sign-off, and when the check comes in, I get that, okay, I pull it back down, I sign off, and I submit it again. So I just struggle with that. We have — and yes, there are some of them that we have been able to get hold of who said: yes, this was a negative experience, so I don't want to contribute anymore. And I think we need to be sensitive that some people are very fluent in Git, and there are some people coming in, whom we're trying to onboard and bring into the system, for whom this is honestly their first experience and exposure to Git. They're using their own internal stuff — whether it's Perforce, or whatever CVS they're still using in their corporate repositories. So, to people who haven't run into this before, it's a very negative experience and really sours their taste — as opposed to people who have been doing it since the Linux Foundation introduced it back in 2004. So, in the case of a single commit: it still becomes a squash commit, and they can add the line there in the comment stream. In the PR they say: yes, I certify that this meets the DCO standards. And when we do the squash commit we make sure the sign-off line is in there as well. In that case I would probably put my own sign-off line in too, because I got the certification from them, and I'm the one also vouching for it under the DCO rules. Go ahead, Brian. Hey, Dan — does your proposal require changes to the DCO check tool in GitHub, or is it simply a proposal for the way we plan and manage that process? So, um, I don't know if it'd be a change in the tool or a change in the DCO integration for PRs — I mean, we still want the DCO failure to show up, just to give a flag and say: hey, we need to have them DCO it.
So if we could have a tool that could also check whether there was a comment where they did the DCO line, that would cover it. There's tooling — I don't know the exact tool — but I think we could do it today: this doesn't have to be something that blocks a PR. We can have the big red check, but there are configurations where you can say the DCO check can fail and the PR can still be merged, and still throw nice huge errors when it hits mainline. It's more like a warning than a failure. So partly this is maybe a small tool change, but your question about process also stands: is it okay if the maintainer accepting the pull request essentially vouches that, yeah, this is something the contributor is clearly signing off on — they're making a valid contribution — and it's small enough that we should accept it? It's kind of a judgment call on their part to some degree, because they're accepting a bit of the legal risk, I guess. But for a doc change that's clearly trivial — so this is like a mix of a tool-change proposal and a question around legal risk at the edge. Right, right — mostly it's on the docs changes where we see it, but I'm sure in the future we'll see larger ones come in, where they'll say: do you really want me to force-push all these 27 changes because I forgot on the first line? So there will be judgment calls, but it should be an option, would be my request. The legal risk is low, and so it's worth doing if it improves the first-time contributor experience. But anything that requires a judgment call opens the prospect of somebody getting it wrong in a big way, so we just need to somehow have some guardrails or some safeties there. I have to say I'm sympathetic to this idea; at the same time, it's like, well, they'd better learn to do DCO sign-off. At some point they're going to have to learn, or they're going to have a hard time everywhere anyway.
Well, in many places. But let's go to the queue. Thanks. So I posted a comment on this about, I guess, probably a year ago, something like that. I asked whether we could possibly tie the DCO to an LF ID. There was a bunch of discussion on that, and I was just curious if the LF has any updated rules or comments, or any sort of updated process, for using the LF ID for the DCO. It seems like the simplest way to do it would be to tie the GitHub account to the LF ID: you sign the DCO once and then you never have to worry about it again. Except now you're requiring people to get an LF ID, which we've talked about, and some will say this is another bar you're putting in the way. But I think this is an easier barrier — I don't think it's a big barrier. I would agree with you, but I honestly don't know; I'm just playing devil's advocate there. Totally fine, but yeah. That was to solve a different problem, the pseudonyms problem — related, but yeah, there has not been any forward movement on this, just because I think we're still at the collecting-requirements stage. One question I have: with a one-time DCO sign-up, you'd be saying that all the stuff you contribute from that point forward will be DCO-compliant. The one advantage I see of having a DCO on every contribution is that you're saying: at least for this one, I'm certain — and they have to consciously think about it, or, you know, have turned it on in their IDE for the project. That's one thing I think the lawyers might have an issue with: each DCO is a certification for each contribution. But if we can move to where it's a CLA-type sign-up, that would make a lot of people happy. Tracy is next in the queue.
So I have a concern with the first proposal that Dano made, which is: if you can't get them to come back and do the DCO the first time, what's going to make them come back and comment that they're okay with doing the DCO? I just feel like you're setting it up so that you're going to end up with legal concerns over the code — you know, who contributed what, and did they really sign off on this or not? It seems to me, if we were to take this approach, I would recommend going to the legal committee and having them actually sign off on this being the approach that is taken. It just seems a little bit like there could be issues in the future. There's also the problem of people who commit once and never come back — DCO or not, we have people who say, can you make this change, and we never see them again. But the barrier of getting someone to write a comment in GitHub to say: oh yeah, I forgot to DCO it, I certify that this meets the standards of the DCO — that's a lot lower than getting them to boot up a command line, run a bunch of commands they've never seen before, and then do a force push on their own repository. It's not going to solve all of it, but it's going to solve enough for the people that are willing to come back and interact with us but get frustrated because they're not Git CLI wizards. Troy, and then Sean — he has been trying to raise his hand and is not able to. So, Troy. Okay, yeah, I was writing comments into the chat, but I thought I'd verbalize them as well. My concern here is on the tooling side. If DCO is not a required check, and then somebody just merges into the repo without a DCO on the commit, does that mean the repo has to be fixed by the maintainer, or are the repo rules also changing? I was also commenting that when copyright headers get missed by tooling, that's already an annoying problem. So, as much as possible, I'd like these things to be caught by tooling.
And I think the DCO is even more annoying than copyright headers, because of the need to fix commits — if that's what we're saying — after the PR has been merged. All right, thank you, Troy. Sean. Hello, I just wanted to make the point that I've had a lot of discussions with developers about rebasing their commits and getting their sign-offs correct — even developers who are really competent and know how to write compilers. So this isn't just new developers; this is people who can be very competent and still struggle with Git. That's all I wanted to say. There are people smart enough to work on compilers who can't do git commit -s? Well, no — I've often asked contributors to rebase their commits, because what people tend to do is write some change, then they find a bug and add another commit, then there are more changes and another commit, and this makes for a pretty ugly Git history. So I'll ask users to squash their commits into some logical commits. And rebasing is something people struggle with. Okay, but so that's a case where the solution of putting it in a comment on the PR would address their problem, right? Yes, because then I would be able to do a squash merge. And they would definitely know how to comment and say: yeah, sure, I'll put that in a comment. Yeah, exactly. Yes. And putting people off is something you don't want to do; you want to attract contributors. All right, Gary. I'm just going to say it again, right — I mean, look, I don't know why we continue to think that we need to save everybody from some of the things that you have to do when you become a contributor to many open source projects.
I mean, okay, we moved from Gerrit, which was good news, right? We got rid of Gerrit because GitHub seemed to be the thing that's out there that most other projects are using, so we moved to that. And we have obviously always had this requirement that code is supposed to be under the Apache license and has to be cleared. So we do that; so we have the DCO. Like, look, if somebody finds this too hard, I don't see how this helps; maybe you get one commit from somebody. What's to say that when they come back they're going to follow it then? I just struggle with this whole thing, and, and I don't mean this as an attack, I think we've lost people in the past and we just didn't know about it. But what is so special about the people contributing to Besu that they don't want to do this?

I apologize, I can't raise my hand, so I have to jump in here. This is causing me heartburn right now in Fabric documentation. We had a guy come in who was full of fire and energy to start working on the Italian documentation. I tried to help him through getting this process working; he grew frustrated, and I haven't heard from him in over a week. So it's not just Besu.

But again, my point still stands: I don't get it. Look, it's out there. I didn't know how to use it when I first did it; I contributed to a project and I just struggled through it. There are certain things you have to do, and they don't seem that hard to me. I get the first-commit thing, that's the only one that passes muster for me, but it just seems like we're trying to baby and coddle people over things they simply have to learn to do, and I don't find these are barriers, when you have an entire vibrant ecosystem of people who contribute to stuff and they seem to get past this.
All right. So meeting the people further down the learning curve and helping and guiding them along the way is what's being proposed, I mean, based on the recent comments.

Okay, so we have to acknowledge there is an issue for some people; it creates a barrier. Fundamentally I agree with you, Gary; I feel the same. The bar seems very low to me, and I don't understand how people get stopped by that. But if they do, they do, and if there is an easy solution along the lines of what Danno was talking about, maybe it's low-hanging fruit: we can just do it, and we can all continue to do our sign-offs the way we do them, and people who know better do the same. And then, for the few people who don't, there is a way around it that achieves the same result, which is the legal protection we're trying to get. I don't really care how we get there, right? So I would suggest, Danno, that you write down your proposed solution, and if there's a way to make that automatic, if we achieve the same goal of getting the legal protection, I don't know why we would reject it. So I'd like to move on, but I'll finish the queue first. Arun.

Hey, so I just liked the comment Tracy brought up. She's suggesting the solution could be to remind people when there is a PR, as a template, when we use GitHub PR templates, right? And another question I had: I was trying to search the Hyperledger charter, I could not find it on the website, so I'm probably looking through the wiki as well. Do we mention that the DCO sign-off is a must for PRs, or do we only mention that the DCO sign-off is a must for commits in the repository? That's something open, and that's why this question was brought up, because what I understand from Danno is that they're okay with adding the DCO sign-off, just that they're not okay with requiring it at the time the PR is raised.
What I understand, if I'm not misunderstanding, is that Danno is saying maintainers would take responsibility for making sure the DCO sign-off is there before it gets merged. Did I understand that correctly?

Yeah, I think you understood it correctly, but with one tweak: we can't just put it in the PR template, because the check isn't checking the PR text, the template text. That's a separate piece of text that comes in separately from the stream of commits that are pushed up. They've made their commits before they even see GitHub to post the pull request. Now, if a sign-off in the pull request were sufficient to cover the stream of commits, that's basically what I'm proposing: that a sign-off in the PR description or in the comment stream is accepted to cover the commits coming in from Git itself. Right now the tooling only checks Git itself. So I'm not saying I don't want them to do DCOs; I just want more flexibility as to where they can do the DCO.

So probably I'll go with Brian's answer, which Tracy also brought up: we need the legal team's recommendation to understand what the better approach should be.

Yeah, and I'm happy to take a proposal and get internal LF legal guidance on it first, to see if it's something that really needs the Hyperledger legal committee to understand and approve. The LF internal legal team is really familiar with contribution and all those mechanics, so I'm happy to carry that forward with them, Danno, if we get to a concrete proposal around this.

Okay, so something on the Hyperledger wiki, and I'll forward you the proposal page for it. Exactly.

All right, let's move on for today. But I'm glad we had this discussion, an action item, and a way forward. So let's switch topics and go back to repo linter, which we're still experimenting with. There are more projects that I believe are trying it out.
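The proposal above is that a sign-off posted in the PR comment stream could retroactively cover commits that were pushed without one, whereas today's tooling only inspects the commit messages in Git itself. A minimal sketch of that relaxed check, with hypothetical function and variable names (the real DCO bot's internals are not shown here):

```python
import re

# The standard DCO trailer line, e.g. "Signed-off-by: Jane Dev <jane@example.com>"
SIGNOFF = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def dco_satisfied(commit_messages, pr_comments):
    """Return True if the DCO is satisfied under the proposed relaxed rule.

    Today's rule: every commit message must carry its own sign-off trailer.
    Proposed fallback: a sign-off posted anywhere in the PR's comment
    stream is accepted as covering the whole stream of commits.
    """
    if all(SIGNOFF.search(msg) for msg in commit_messages):
        return True  # current behavior: Git commits alone satisfy the check
    return any(SIGNOFF.search(comment) for comment in pr_comments)
```

This is only an illustration of the policy being discussed, not how GitHub's DCO check is actually implemented; the point is that the fallback spares contributors the rebase-and-force-push workflow.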
And meanwhile, Ry has been trying to push it. After a first attempt at going to every repo and adding it to the CI, he eventually gave up on that and created his own report, which encompasses all the repositories for both the labs and the projects. So, Brian, can you tell us more?

Sure. Here's the repo; it's RL report. The code is in here; they're GitHub Actions. I ran it on all the repos in both labs and main. I can point out that there's an easy summary page. Here it is. This is a very simple grep; there's nothing tricky here. It's a GitHub Action; I just ran it. I'm not running it every day or anything, but it's pretty easy to do.

In the meantime, I saw Troy talking about repo linter and some of its deficiencies the way it is today, especially with regard to languages it doesn't necessarily handle well, like Go. Troy, do you want to speak to this?

Yeah, sure. It doesn't break the basic rules, but there are a couple of rules in that rules file that are clearly only targeting JavaScript right now. You can see it in the file paths and in the language check. So, for example, the copyright headers check is not actually being run on the Go repos right now with the existing repo linter rules. In one of the repos I maintain I did add that, but then I also had to add some exception rules for things like generated files. So, just FYI, when you see the repo linter report, these other languages don't have all the warnings they would have had if those paths were properly enabled.

But this is why I was hoping we would all contribute to improving the one config file that is shared by everybody, or that is offered to everybody, I should say. Is there any reason we couldn't bring some of the changes you've made for your own repo into the common file?
Yeah, so for the copyright headers I plan to add the Go paths. I'd also suggest ignoring generated files, at least with some basic rules. This kind of relates to the DCO comment as well: I'd prefer errors on copyright headers, but with a very generic file like this it's very difficult to enable that, which effectively means running repo linter twice on those repos: once with this very generic set of paths and then again with more specific ones.

Yeah. And we talked last time about looking into whether there was a way to have a local config file override the common one. I actually meant to look into this and didn't have time, and I don't know if anybody else has.

My plan would probably be to have this very basic repo linter pass that's just looking for the basic files, and then run it a second time, unless we do something like a script that combines the two files together.

Yeah, that's a good point. We could do it in two passes if we can't have some kind of combined config file. Any other comments or questions? This is a work in progress; I just thought what was done here was interesting, and I wanted to highlight it so that we get a chance to stay up to date on the status of this.

We have a few minutes left. Arun brought up, in his email responding to my posting of the agenda to the mailing list, that he was hoping he would have time to raise an issue, or actually a question, with regard to the implementation of a decision that was made earlier. Arun, the floor is yours. Let's try to make use of the time.
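The "script that combines the two files together" idea above, a shared baseline ruleset plus a per-repo override, could be sketched as a small merge utility. This is only an illustration under assumptions: the rule names and the nested-JSON shape of the config are hypothetical stand-ins, not the exact repo linter schema.

```python
import json

def merge_configs(base, override):
    """Recursively merge a per-repo override into the shared ruleset.

    Nested dicts merge key by key; anything else (scalars, lists) is
    replaced wholesale by the local value, so a repo can tighten or
    relax a single rule without copying the whole shared file.
    """
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], val)
        else:
            merged[key] = val
    return merged

def combine_files(base_path, override_path, out_path):
    """Produce a single combined config file to feed to the linter."""
    with open(base_path) as b, open(override_path) as o:
        merged = merge_configs(json.load(b), json.load(o))
    with open(out_path, "w") as out:
        json.dump(merged, out, indent=2)
```

The alternative mentioned in the discussion, simply running the linter twice with two separate config files, avoids needing any merge logic at all; the trade-off is two reports per repo instead of one.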
Thanks. So, we have a vibrant community in India, and recently we even formed a student society where students from across the country can join in and learn together, or share contribution opportunities. So now we have a big set of people who are interested in contributing in some way; it could be for multiple reasons, maybe they're trying to build up their profiles, whatever the reasons. One of the things we've been asked regularly, at least in the last few weeks, is how they can start contributing. And most of them say that the meetings they see, for, let's say, Fabric or any other project, are pretty late for India time. It was brought up again in today's Hyperledger India chapter weekly call, and I told them that recently there was a decision made in the TSC to aggregate all the issues in one Confluence page per project and separate them by tags, for example "good first issue," or "this is something which may not require expertise on the project, anybody can pick it up." When I told them that this decision had been made, they were excited and happy to hear about it, and they wanted to start on it. That's what I wanted to bring up in today's call as well, if I'm permitted: I would like to know the next steps for implementation, and in fact there are people who are willing to volunteer if it takes any documentation or any other kind of effort from them as well.

Okay, thank you. And I would add to this: on the call I missed, which Tracy chaired, you talked about different ways to try to break the silos, and a lot of ideas were thrown in.
And the question then is: okay, but we need to implement those, and this is one of them, right? I think we sometimes fail on these things: we have good ideas that get brought up, and then nobody acts on them, so nothing comes of them. So this is a call to action. They're actually ready to work on it, not just waiting for somebody else to do the work. What do you think they need in order to start working on it? As far as I'm concerned, I'm happy to let them have a crack at it if they want to take it on.

Sure. So I think the proposal does not spell out how we should proceed. It says that this should be done, for example that tagging should be done, but how we make use of it and how we proceed from there is not mentioned anywhere. So if that is still an open question, I would go back and send a mail to the India chapter mailing list telling them: it is still open, and these are the first set of action items, tasks A, B, C. Let them come back with a proposal, and if we can share a Confluence page or space, let them identify a tool, figure out how to read from the tags, and list them. That could be the first step, and from there a few other people can get motivated and see, okay, these are the available contribution opportunities: "why don't I start with this issue, I'm comfortable in the Go language," for example. So yeah, that's what I was thinking, if there were no implementation plans. I'm happy to lead it.

Arun, this is David. This is great. I'm glad there are some things we can do that will help address the time zone issue and help make community members in other parts of the world, in different time zones, feel more connected and more empowered to contribute. And I think going back to them and asking them to work out a prototype sounds great.
I have a couple of thoughts, and I can show you a couple of things that might be useful to share with them, so if you want to talk offline I can help put together an email you can send to the community there.

That would be great. Thanks, David. I just said in the chat that it's a query we can do on GitHub. Yeah.

And otherwise, does anybody already know how to implement this? I understand there were no specific plans on how to implement it or who would be doing it, right?

Well, in terms of an aggregator, I think there was something we had done before that does pull GitHub issues, specifically the ones tagged "good first issue," into a page on the wiki. I think that could be a starting point to show what we could do. It's only pulling from one specific repo, but it does at least show how you integrate tagged issues onto the wiki. We just need to take that, apply it to more repos, and then figure out the right way to present it on a wiki page. Here, I'll drop that in the channel.

All right, that's great then. So, David, we'll leave it to you to follow up with Arun, or he will follow up with you. And if you need any input from the TSC, of course, come back. But for now, we'll consider that you two are on point to make progress on that one.

Yeah, I'm happy to do that. Arun, whenever you'd like to talk, just ping me.

I think part of this is the mechanics; part of it is also whether the TSC feels like asking, or nudging, or something more forceful than that, to get the projects to identify such good first issues. That's the harder work, and it's hard for David or Arun or any outsider to do; really only the maintainers can.

But in theory, if we can... go ahead.
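The "query we can do on GitHub" mentioned above, pulling issues tagged "good first issue" from several repos into one wiki page, could be sketched like this. A minimal illustration only: the helper names are hypothetical, though the query syntax and the `repository_url`/`html_url` fields follow GitHub's public issue-search API, and the actual HTTP call is left out so the sketch stays self-contained.

```python
def good_first_issue_query(repos, label="good first issue"):
    """Build a GitHub issue-search query string spanning several repos."""
    parts = [f"repo:{r}" for r in repos]
    parts += [f'label:"{label}"', "is:issue", "is:open"]
    return " ".join(parts)

def to_wiki_rows(search_items):
    """Turn parsed search-API result items into (repo, title, url) rows
    suitable for rendering as a table on a wiki page."""
    rows = []
    for item in search_items:
        # repository_url looks like https://api.github.com/repos/<org>/<repo>
        repo = "/".join(item["repository_url"].split("/")[-2:])
        rows.append((repo, item["title"], item["html_url"]))
    return rows
```

In practice the query string would be sent to GitHub's search endpoint (or used with a client library), and the resulting rows republished to Confluence on a schedule, which is essentially the single-repo aggregator described above, extended to a list of repos.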
I was just going to say that, in theory, this is an incentive for a project to do it: if we can provide a pool of people who want good first issues, then hopefully projects will take advantage of that.

Yeah, that's very much in line with what I meant to say: if we can show them, look, if you tag your issues in such and such a way, they will appear there, more people will see them and be able to find them and help out. That should be an incentive, so I agree.

All right, I think we're just about at time, so let's leave it at this. I'm glad we had a chance to tackle that as well. I'm going to close the call. Thank you all for joining, and we'll talk again.