All right. Welcome everyone. So, quick reminder of the antitrust policy notice. Please review that, and let us know if you have any questions. All right, moving into the agenda. A couple of things today: event reminders, an update on the annual TSC election results, then we have some quarterly updates, both projects and work groups, and then any other topics that come up. Let's dive in. All right, so on the event side, the next Hackfest is October 3rd and 4th in Montreal; the registration link is in there. Please be sure to get registered if you plan to attend, and again, a reminder that this is directly following the Member Summit in Montreal, which is the two days prior. The next thing is we are looking to keep the cadence going for the quarterly Hackfests. We're looking at Asia Pacific; we've been in Europe and the US the last few times. So we do have a Doodle poll together for this. Let me just drop this into the TSC chat, one moment. Please indicate your availability there. We are trying to steer clear of Chinese New Year and some of the other industry events, so we have three weeks identified: one each in January, February, and March. From there, we will hone in on location and exact timing and get that scheduled, so people have plenty of time to fit it into your busy schedules. And lastly, we'll just continue to remind everyone: Hyperledger Global Forum, December 12th through 15th in Basel, Switzerland. This is our flagship event for the entire ecosystem, so there's a lot of technical content and hands-on sessions there, as well as more business-focused sessions. All right, any questions on any of those events? It's worth noting, because I know a lot of people submitted talks. Oh, yes, we've been processing the talks and we'll get back to submitters pretty soon. We don't want to give a specific date, but we know people's travel plans, etc., depend partly on it, so you'll hear from us very soon. Hello, did you have a question? Yeah, about a specific event.
Actually, we promoted it in the China community, and I also saw there were some people already voting on the time slots. Okay, excellent. Thank you for that. All right, any other event questions before we move on? All right. So the next item: the annual TSC election. The election phase did conclude last night, so with the agenda that went out I included the list of new names. Here it is on the shared screen, or you can look through the minutes. Many familiar faces, but we are also welcoming Mark and Silas to the TSC for the next 12 months, so congratulations to those reelected and those newly elected. We look forward to continuing all the great work here. So the other thing on... go ahead. Oh, and thank you to Jonathan Levi and to Greg Haskins for having served on the TSC for the last two years, I believe. Absolutely, yes. And was there a question? I just wonder what the problem was. Can you elaborate on the problem of a lot of people not receiving the ballot? It's not entirely clear. We've used the platform that the election was run on for at least the five and a half years I've been at the Linux Foundation, across many, many projects, both for elections in the technical community and for the boards, etc. And every time we'll see a couple of ballots go to people's spam boxes or different things like that, which is normal. This time it was much more than we've seen in the past, so I don't know exactly why that was the case. But the one thing I will say is we re-sent the ballots multiple times to everyone that reached out and requested one; with the exception of Friday, we typically responded in a matter of minutes, and they all confirmed receiving the ballots that got re-sent. And in terms of voter percentage, it was consistent with the voter turnout from the last two years as well, so that left me feeling confident too. So to everyone that dealt with the challenges there, I really, really apologize.
But I do think we were able to get everyone taken care of in the end. Great. Thank you. All right. And finally, also thanks to everybody who ran. It was great to have such a really large selection of candidates to choose from. Indeed. And then the last thing I'll say there is that the TSC chair nominations will begin directly following this call. An email will go out to the 11 elected TSC members; if you're interested in running for the chair position, please let us know within the next week. From there we'll compile the nominations and do a one-week election phase. And as a reminder, the TSC chair sits on the governing board for the next 12 months, in tandem with their term on the TSC. Hey Todd. Yes, one thing that was not in the, you know, rules, or not rules but guidelines, you had sent out was when the election is final. So am I on the TSC effective as soon as you send the announcement out, or is there a certain date or transition time? We'll have it effective today. It did conclude last night, so we should consider it effective today, and then the new chair position would be effective two weeks from now, and would therefore join the October board meeting as well. Okay, thank you. Any other questions here? All right, sounds good. So with that, let's hop over to the various updates. The first one is Composer. Is anyone from the Composer team on? I know Tracy connected with them again last night; it doesn't look like anything has made it into the wiki still. Composer team? And Tracy, did you hear anything back? I know you had pinged again last night. I haven't had a chance to check my email this morning; I hadn't as of yesterday afternoon. I can quickly do a search and see. Okay. No, no response. All right, we'll continue to reach out there. All right. Hi, everyone. Can you hear me okay? Yeah, we can. I'll drop the link into the Rocket.Chat as well. Okay, cool. Yeah, I'm very glad to have been elected onto the TSC.
Firstly, that was nice. So, Burrow. On project health: Burrow has been going through, via the Agreements Network, a succession of test networks that we've imaginatively named T1, T2, and we're now on T3. That has meant I've been doing a lot of firefighting, but it also means there has been quite a lot of interesting feedback from our network co-founders' use of Burrow in the context of an observable network that I can poke. So things like realizing that we had a 10-minute load time because we weren't lazy-loading our Merkle tree, and that only really came up when we were looking at 10-20 gigabytes of chain data over a two-week spread. So that's been giving lots of useful feedback. Code-wise, I think it's probably been the best part of a quarter, at least a three-month spread straddling a couple of quarters, where I felt like I was still doing a large amount of paying down of technical debt. That period is now over, and I'm actually pretty happy with the way a lot of the code looks now. So this quarter we've really been able to focus on adding features. Things like our ETL system: we have a work in progress that builds a SQL Postgres query store driven by execution events. It operates in a slightly Kafka-style stateless manner, so it'll pick up from a previous height and then build a table schema against a Solidity contract. We've also got the basic governance primitive and a few other features I can move on to. So in terms of features and code, I'm quite pleased with how things are going. But a lot of issues are coming up that we just hadn't really seen when it was being used predominantly in one- or two-node networks; now we're running with 20 in Kubernetes, and a lot of developers using it for prototypes. So that's been good. Community engagement is pretty good; there are plenty of people to chat to and bring up issues with on the Hyperledger chat.
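As a rough illustration of the resumable, Kafka-style projection described above (pick up from a previous height, fold execution events into a query store), here is a minimal Python sketch. The event shape, table names, and the `fetch_events` helper are entirely hypothetical; Burrow's actual implementation targets Postgres and derives schemas from Solidity contracts.

```python
import sqlite3

def project_events(conn, fetch_events):
    """Resumable projection: resume from the last stored height and
    fold newer execution events into a query table (shapes illustrative)."""
    conn.execute("CREATE TABLE IF NOT EXISTS _checkpoint (height INTEGER)")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transfers (height INTEGER, sender TEXT, amount INTEGER)"
    )
    row = conn.execute("SELECT MAX(height) FROM _checkpoint").fetchone()
    last = row[0] or 0  # stateless restart: all we need is the stored height
    for ev in fetch_events(after_height=last):  # ev = (height, sender, amount)
        conn.execute("INSERT INTO transfers VALUES (?, ?, ?)", ev)
        conn.execute("INSERT INTO _checkpoint VALUES (?)", (ev[0],))
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM transfers").fetchone()[0]
```

Because the consumer stores only the last processed height, re-running it is idempotent: a restarted process skips everything at or below its checkpoint, which is the property that makes the projection stateless in the Kafka sense.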
One frustration for a lot of our users, which is understandable but, with our resources, was hard to avoid, is that as we were consolidating our tools, a lot of stuff was fragmented. There was a lot of out-of-date documentation, and it was quite hard to track the various Cartesian products of stuff that would actually work together. So now, a load of tooling that used to live in the Monax repos is all baked into the Burrow binary. We have our deploy tool, which used to be the eris package manager, then bos, and is now burrow deploy. We were able to move that in because we developed an Apache 2-licensed implementation of the ABI, Ethereum's binary interface standard for the EVM, that allows us to actually pack function calls and send them in. So with burrow start, burrow keys, burrow deploy, you can pretty much run the whole chain from that single binary. At this stage, we're really in a good place to have a push on documentation, because that effort is not going to be wasted when another refactor happens; there's going to be a lot more stability, particularly in the general outline of the project and the tools. So we would certainly like some help from the Hyperledger community, where possible, on documentation. Another frustration has been that last quarter I was fairly confident I'd be able to announce two or three new maintainers outside of Monax. Unfortunately, one of those developers got reallocated from working on Burrow and hasn't been contributing since, and another two, even more regrettably, ended up forking Burrow. They started off making some significant changes to Burrow, which I would have quite liked to integrate, but they started from a copy and paste. We got them onto our mainline, but the developers working on that really wanted to change a lot of incidental stuff. Burrow is moving quite quickly, and I tried to pull them back from the brink.
They could still even be depending on us as a library, but it's a real shame, because they were doing some interesting work around verifiable random functions to control the churn of validators. So that was a bummer, really. But still, Burrow has four full-time developers from Monax, and we'll be hiring more, and also probably trying to get some of our co-founders from the Agreements Network to start making some pull requests. But I would welcome any advice or help on improving our diversity and maintainer diversity. Under issues here: yeah, so help with documentation, help with maintainers, as I mentioned. A couple of more general points. I'm not sure how neatly they fit into the update on Burrow, but one thing that would be a real multiplier, particularly for smaller projects, and I think more generally for projects in Hyperledger, would be some kind of shared testing framework or infrastructure, possibly even something supervised by an SRE or testing-engineer type person. So, for example, something where I could spin up some of our Kubernetes Helm charts and do a load of load testing, and where there was some reporting and some metrics. A lot of this stuff takes up quite a bit of time, you know, like setting up Elasticsearch, setting up metrics, and so on, but it's actually very generic across projects. So it would be pretty useful for us, and I think for others, if that could be provided by Hyperledger. In a similar vein, common release infrastructure would be good. I've started pushing cross-compiled binaries, and I'm now pushing, finally, to the Hyperledger Docker repo. But what would also be nice is if we had access to apt or yum or Snap or Flatpak repositories to push releases through, because again, those tend to shift you off the main focus of your work. Releases have been much better than they had been previously, and this is a side effect of the refactor having been done.
So we've made roughly a monthly release, and the v0.21 release had a ton of new features cemented in there. We stripped out our previous RPC layers; we now bring everything over GRPC, which has got rid of a lot of issues. So the release cadence has been fairly good, and the changelog is linked in the update. In terms of activity, yeah, it's more than it has been, but it's not as much as I would like. We've had several pull requests, and issues have demonstrated actual bugs. In particular, we seem to be unable to reset a uint back to zero, which is a strange one that we're looking into now. But that's good: these have been actually actionable, compared to what we had previously been getting, which was a lot more of "can you help with this?" or "I don't understand", because the docs weren't good enough, etc. So, a quick run through some of the features. We've now got complete historical transactions, so we can go back to any block height, and we can have an execution trace of what events the EVM run emitted. It's a very "this is what happened" object, and you can query any range. We've got our SQL database mapping layer, which is under development. There's a state checkpointing mechanism, which basically means that we can survive brownouts a bit more easily by going back to a previous block and catching up to the network, rather than stopping in a corrupted state. We've integrated our key service, which again used to be another satellite repo, into the Burrow binary, so it can act as a delegated key signer, both for our command-line tooling and for the Burrow validator itself. We've developed the Apache 2-licensed ABI, which is now part of Burrow. We've added Prometheus metrics and a profiling service for some debugging. And then we have this governance TX, which allows you to do batch updates to as many contracts, or contracts/accounts, as you need at a time. So this is the basic mechanism.
It needs some policies and voting to be built on top of it, but it's a basic mechanism that will allow you to do network upgrades by vote. So, current plans. There are three buckets of work that I'm trying to focus on now, in this quarter and probably the next one as well. One is chain stability. Some of this is chasing Tendermint, which changes a lot and breaks stuff a lot, although they are trying to break stuff sooner rather than later, and they themselves are starting to stabilize. It also involves being able to make sure that when we do have an upgrade or some sort of bug, we can actually get the application state out. To that end, using some of the version history know-how we have, we're looking at a kind of version rollback, so git reset --hard, git rebase effectively, on the chain, as well as emitting our state into a database-agnostic form, so a dump/restore kind of functionality. Then, in terms of the issues we see in production around corruption and sharing system volumes: self-healing in Kubernetes as these things come up, and where there were, for example, issues with connectivity and weird timeouts, adding a lot more diagnostics, and just getting things so it's stable, and when it does break, you understand why. The second bucket is governance. Currently the governance is based on a very crude single root permission, which you can wrap behind multi-sig. What I'd like is for these governance transactions to be transactions in stasis, or proposals, that can then be voted on by either the network quorum of validators, or by a separate quorum, or any manner of voting system, to say that this network change, this token redistribution, or this code deploy should happen. But that's all contained in this same mechanism. And then the third bucket is looking into interacting with other chains, in particular given Burrow's underlying EVM compatibility, and also attracting Ethereum developers.
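As a toy illustration of the "proposals voted on by a quorum of validators" idea in the governance bucket above, the following Python sketch tallies power-weighted votes against a two-thirds threshold. The threshold, names, and data shapes are assumptions for illustration; Burrow's actual proposal mechanism is not specified here.

```python
def tally(proposal_votes, validator_power, threshold=2 / 3):
    """Return True when yes-voting validators hold more than `threshold`
    of total voting power (a common quorum rule; illustrative only)."""
    total = sum(validator_power.values())
    yes = sum(validator_power[v] for v, vote in proposal_votes.items() if vote)
    return yes / total > threshold
```

The point of keeping the tally separate from the proposal itself mirrors the "transactions in stasis" idea: the governance TX carries the batch update, and any voting policy (network quorum, separate quorum, or otherwise) merely decides whether it executes.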
It's very useful to be able to anchor certain pieces of state on a public chain, and in our case the obvious one is Ethereum. For example, the validator set: if you can have a reliable source of the validator set, that's a way of implementing clients, and also of performing escrow in terms of tokens. So if you want to perform payments, or if you want to establish a validator bonding token via a payment into an Ethereum smart contract, we need some integration. There are a lot of projects along this line. There's Plasma Cash; there's a project called Loom, which is meant to be open-sourcing this month. So we're looking into ways that we can do cross-chain communication. And then in the background we have Tendermint's Cosmos, where part of that specification is an inter-blockchain transaction that gets escalated to a hub blockchain. That's probably not going to be in the next quarter, but the one afterwards. I've mentioned maintainer diversity; on contribution diversity, again, a slight uptick, but not much. We've had three to five contributions, I think three PRs and then a couple of issues, that have been pretty useful, but nothing huge. And as I say, I think that's slightly down to the churn, the shifting in the code that we've had. It's also partly down to the fact that, I think, we have a lot of developer users who are building prototypes, projects where the long-term stability of their own project is not assured, so they're not really wanting to pay back into the framework, perhaps yet. We're also collaborating with Tendermint on some underlying data structure stuff through their Cosmos SDK, which gets us a bit more development help on one of our core internals. Yeah, so I think that's it. Great update, Silas. This is Dan. I didn't quite track your first point there in current plans, for chain stability.
Could you maybe say in slightly different words what you mean? From the git-rebase allusion, it sounds like you can go back, I guess go back in time, but that doesn't imply you could edit history, though, right? No, so, well, yes and no. Okay, there are two things to the chain stability. Ultimately, we want to have an actually stable chain that would run for 100 years. The next best thing, when we're doing these successions of test networks, is to be able to, by agreement with the rest of the network quorum, stop the chain, possibly when we have an incompatible upgrade, look at an issue that happened, and possibly rewind state. So this isn't intended as a mechanism that is recommended in principle for using Burrow, or indeed any blockchain, but actually being able to coordinate, realize that there was a code issue, and not have to throw away most of the history and most of the state is the next best thing to everything actually working. It can also be quite flexible operationally, if you want to, say, take a previous test network, go back to a particular block height, and do a fork with that state. So you could go in and manually hack on the database, but we're trying to build a bit of tooling around this. It's not intended to be a feature of the chain that it overrides its own history. That was probably a bit misleading in the git rebase; that's more of an analogy that describes what it is, a bit like git rebase, performing a bit of surgery on the chain. But all of that stuff really helps us investigate issues that affect chain stability, and also things like being able to step through, in a debug mode, multiple chains and see what particular interaction caused them to disagree on the previous application hash or something like that. Does that clarify the concern? No, it sounds interesting. So it sounded like maybe two things.
One is a bit of utility so that you can do some debugging, and the other maybe some sort of actual network fixing, at least in the case of the test net. That's right, yes. Sorry, those should have been more clearly separated. Yeah, they are quite different things; one supports the other, but the global bucket of work is just working on chain stability. So we've got a kind of chaos-monkey type thing that I'm setting up now, where we just continually redeploy all of the Agreements Network. That should generate a load of stuff, I guess not exactly fuzzing, but along those lines, and just trying to make these issues happen before they happen in production, and then obviously fixing them. But the chain history stuff is about having tools that you can actually work with once you've identified an issue. Great. And then something else that you said that caught my ear was you were suggesting that if we had some common infrastructure, that would be helpful. Say just a little bit more about that. I think I was talking to Casey a little bit about this, on the testing infrastructure. So, for example, the Cloud Native Computing Foundation, I believe, has some shared infrastructure for this. I think the advantage of having something like this, from my point of view, would be just reducing the surface area of stuff that we have to operate, and increasing visibility of where projects are in testing. So if you think about it, we're going to still be operating, for example, Kubernetes clusters for the Agreements Network, and we do Burrow testing there, but it's not the most appropriate place to surface stuff that's specifically about Burrow. If we had a Kubernetes cluster that had Elastic set up, that had Kibana and some other introspection tools, then we could deploy there and people could come in. It could potentially be open, or there could be some kind of Linux Foundation-based access.
They could, you know, look at the standard deviation of our block times on a particular version. They could look at various test nets that were exhibiting bugs, or not. So there'd be observability from the outside. And then for the project, it would just mean that, you know, I spent about a week learning about Elasticsearch sharding, which wasn't completely wasted, but it also meant I wasn't doing other stuff. So, yeah, I don't know what level of generality we might hope to have, but if there was someone who was maybe working full- or part-time on improving this stuff, then I'm sure, for us anyway, we'd end up with a better testing infrastructure than we have, and it would be more visible. Yeah, I think we see that from the project that I contribute most heavily on. We have over there something we call long-running networks, for integration testing over periods beyond what you would normally do for CI, so things that run for, say, a week. And yeah, there's a good bit of infrastructure there for spinning up the cloud nodes and then providing some dashboards that show all sorts of stats about the health of the network. So I do see some commonality. I know that a lot of the stuff that we did would be specific to our project; I don't know if a lot of the management software would be the same, but maybe the infrastructure-as-a-service part would be common, and maybe people could be iterating on these dashboards and repurposing them for their usage as well. Yeah, I think so; this is exactly what I meant. The stuff that you do is a bit too long to run on CI, but you want to run it at some sort of cadence, or maybe you want to explicitly push a particular branch through it for longer-term integration, and so on.
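The kind of dashboard statistic mentioned above, such as the standard deviation of block times on a particular version, reduces to simple arithmetic over block timestamps. A small Python sketch, with the input shape (a list of timestamps in seconds) assumed for illustration:

```python
import statistics

def block_time_stats(timestamps):
    """Turn a sequence of block timestamps (seconds) into the kind of
    health metrics a shared testing dashboard might surface."""
    # consecutive differences give the per-block intervals
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_block_time": statistics.mean(intervals),
        "stdev_block_time": statistics.pstdev(intervals),
    }
```

A rising standard deviation on an otherwise idle network is exactly the sort of signal an outside observer could read off a shared dashboard without needing access to the nodes themselves.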
So yeah, I think with the infrastructure-as-a-service stuff, a lot of that is generic: setting up a control plane in Kubernetes, making sure your Helm Tiller, which is what we use, is there. I think where it gets quite interesting, potentially, is whether there's an opportunity to actually abstract some generality around some of this stuff. One place might be around network formation. I would think that, generally speaking, all of the projects will have some form of genesis that gets formed, and then we have to do some key distribution or something like that. And probably, on top of that, we also change the validator set over time and bond in new validators. For us, that would involve somehow securely communicating node IDs, which are an address of a particular key, and validator IDs, between nodes. So there is this kind of public-key-infrastructure type gossip process, where I could almost imagine you might be able to start up a single genesis validator and then have some generic Kubernetes tooling wire everything up, by being the trusted broker at the beginning of that process to communicate public keys around. The payload of that would depend on the project, but that could be potentially interesting; you know, if there was a Hyperledger network-boot thing, that could potentially be an interesting top-level project. For this to be useful, the projects could be their own islands but just have the shared infrastructure. Thanks. And then the last thing that stuck out to me is your call for more participation, and contributors and maintainers. And I think anybody who's listening on this call, you're probably also the sort of people that are monitoring the TSC mailing list.
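The trusted-broker bootstrap idea described above can be reduced to a simple data flow: one genesis node collects each joiner's (node ID, public key) pair and hands every node back an identical roster. This Python sketch is purely illustrative; the real payloads would be project-specific, as noted in the discussion.

```python
def bootstrap_network(broker_known, joining_nodes):
    """Sketch of the 'trusted broker' idea: a single genesis validator
    collects each joiner's (node_id, public_key) and gossips the full
    roster back, so every node ends up with the same peer table."""
    roster = dict(broker_known)  # start with the genesis validator itself
    for node_id, pubkey in joining_nodes:
        roster[node_id] = pubkey
    # each node receives an identical copy of the completed roster
    return {node_id: dict(roster) for node_id in roster}
```

The value of the broker is only at formation time; once every node holds the same roster, the broker can drop out and the network carries on gossiping among peers it already trusts.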
And hopefully we've got a wide sweep on that, but it seems like we would have an intersection of the things that we've seen on the TSC list over the last few days about diversity on the TSC, along with this opportunity to get involved in one of the, well, I was going to say one of the more exciting projects, but that's probably not the best way to put it. Burrow is a pretty unique project in straddling some large communities, between Ethereum and Hyperledger, and being able to make it an "and" there instead of an "or". So I think if there are folks listening out there that are looking for a way to get engaged, it sounds like there's a really good opportunity to help fill out more of the contributions for Burrow. And for those that are interested, that would, you know, start to give you increasing technical depth and engagement with the community, such that a year from now you'd be in a position to help represent those views on this committee. Yeah, I mean, that would be great. I think, you know, I carry some guilt for this, in the sense that when I've been busy coding, you know, I'll talk to a new contributor to try and get them set up, and then they don't hear from me for a while, because, I mean, Casey's in there as well, but there's a relatively limited number of us, and particularly if the code is still moving, they get disillusioned and don't want to do it. Then with these developers who ended up forking, I don't know, to be honest; they started off going in that direction to start with, but they made a lot of kind of incidental changes, and I kind of wonder whether I should have just let them merge some of that stuff; maybe they would have stuck around. But yeah, I'm definitely open to advice, and any help in making sure those people who latch on initially and might stay interested don't feel like they're being neglected, basically.
Another thing we've done is the issues on GitHub have been massively cleaned up, and we've got some starter-issue tagging there as well. And, like I say, with documentation, the Burrow binary divides into six main subcommands. Each of those could probably be relatively easily documented, if anyone wants to contribute to documentation, or use contributing documentation as a way in. Yeah, so I was actually going to ask you about the fork you mentioned in the report. I mean, you partially answered this now; I guess you don't really know why. And, you know, forking is an inherent part of open source development, for better and for worse, right? It's very powerful and it's an important piece; at the same time, there's a lot of wasted effort, because people sometimes fork for no good reason, and it's a constant struggle to try to keep everybody working together. And so I know that we have this challenge; I've seen it on the Fabric side, for instance, where people make contributions, they issue a PR, or the equivalent CR, and then if the maintainers don't make an effort to merge those quickly enough, people get turned off, which is natural. And so there is always the struggle: as a maintainer you're doing your own development, but you also have the responsibility to pay attention to the contributions and not discourage people. I don't know if that is the case here. You know, forks also sometimes come from people saying, well, we have a very different point of view on the direction the project should take, and so we're going to make our own changes; and then they don't want to bother trying to merge. I don't know if, in this case, there's any of that going on. Yeah, well, I'm quite happy to share some of that. I mean, I didn't want to name them in the update, just because that's a public record, though I guess this is as well, but...
The fork is galactic/galactic. I think there were a couple of things that led to it. On their side, we had quite a good relationship with their CTO, and we sent a few emails back and forth. They were under a lot of pressure from their investors to get things out, and they basically didn't feel like they had time to get stuff merged. Now, partly that was legitimate, and partly that was because, for example, they started with a copy and paste rather than a git fork, which is what you would do even if you were going to fork; so they chucked away the history initially, and then they got back onto our mainline. But then, I think because it started with that copy-and-paste mentality, the developer, who was actually fairly new to Go as well, had gone and changed a lot of stuff that was either to his taste, or where he didn't understand the reason it was already the way it was, and that really wasn't related to the main thrust, which made it kind of hard to merge his other stuff, which I was quite keen to get in, in a module or something like that. But he changed all of this other collateral, and so we went back and forth on that, and then a load of code changed, and he found it frustrating to have to rebase. So, unfortunately, I wish they'd come to us a bit earlier, but they'd come quite late in the process, and they already had a foot and a half in the fork. Having said that, I kind of wish that maybe I had just accepted some of the changes; we could have always undone them over time if they were negative. But yeah, they felt a lot of pressure from investors. As they are now, they've copy-and-paste forked a lot of stuff when they could easily be depending on us as a library, and now they're missing a load of quite important security updates and a lot of bug fixes. Everyone loses, but we're still in contact. I don't know if they got their funding.
I think maybe they might be interested in reintegrating, but they've kind of gone even further on a lot of stuff, a lot of trivial stuff they didn't need to change, so they're probably even more entrenched now. All right, thanks. All right, good discussion, good questions. Any other questions or comments for Silas? Thank you, Silas, really appreciate it. All right, final topic for the day: I know you wanted to talk quickly about the Technical Working Group China and the chairs there. Okay, sure, let me give a quick summary. So the TWGC, the Technical Working Group in China, is currently co-chaired by three people. However, due to a job change, one person just retired, and there's also another one who hasn't shown up at the meetings for quite a while. So, per the TSC's suggestion at the last meeting, we ran a routine process to introduce several new members into the committee, and there are overall six candidates waiting to join. Per the vote, the top two are Jianan Guo, who is from IBM in Beijing, and Zheng Hua Zhao, who is also from IBM but located in Shenzhen. So we plan to accept these two people onto the committee. Question for you: I know in some other working groups, having multiple chairs has led to some issues, I guess similar to what you were seeing, right, with people not showing up, that sort of thing. Just wondering, since the rest of our working groups now, I believe, all have a single chair: is there a reason that the Technical Working Group China wants to go with multiple chairs? Yeah, actually, when the group was founded, the three co-chairs were nominated by Brian, and after running for a while, we think this way is quite suitable for the Technical Working Group China.
That is because the TWGC is a special technical group compared with other groups, because it's a bridge between the local community and the global community, and it also runs work in terms of documentation, technical contribution, and events, lots of work. So there are many jobs to run, and to maintain the committee, we think several co-chairs might be more efficient. Okay, so the chairs are then kind of running subgroups within the Technical Working Group China? Yeah, hopefully there are around three people to serve as co-chairs. Any more questions? So, for the sake of this call, is this just an update to the TSC, or are you looking for some level of approval? I'm not sure whether that is a rule, but certainly we would welcome any suggestions or questions. Are there any questions or any objections from the TSC? I think you had fairly recently given an update, so I don't have any questions, based on already having recently been updated, and I don't see any reason to get in the way at all of the decision that the working group has made on leadership there. It's Marta; may I suggest that there is a quick round of at least acceptance from the TSC, because that way we are aligning all the groups together. Otherwise, we are expecting other groups to have a vote from the TSC, and this one would be special. Yeah. So, from the TSC members on the call: all in favor of Baohua's proposal, please say aye. Any opposed? Any abstaining? All right, so that's good to move forward. Thanks, Baohua. Thanks. And thanks, Marko. All right, that brings us to the end of the agenda. Happy to give everyone 15 minutes back, but if there are questions, comments, or thoughts, let's work through those in the last 15 minutes as well. All right, hearing none, we'll wrap up a little bit early. Have a good Wednesday, everyone. Thank you, everyone. Bye.