Now we're recording. All right, thank you. Welcome, everybody, to today's Technical Steering Committee meeting. Everybody is welcome to participate in this meeting so long as you abide by the antitrust policy and our community code of conduct. A few announcements as we kick off here. The Brazil bootcamp is growing ever closer. I don't know if Karen or Daniela is on to speak to this, but I can actually cover this one. The main thing we needed before I was willing to pull the trigger on the Brazil bootcamp was having enough session leaders, and we now have enough session leaders. So my big call to the TSC and all of our project maintainers is to help us make sure they're all on board for your project. If you'd like to see a project represented at the Brazil bootcamp, let me know. I will be contacting a bunch of y'all individually tomorrow, but this is about making sure we have getting-started materials and things of that nature, so that attendees are able to get started and begin contributing to your project. That's what we're working on right now. And that's kind of a train-the-trainers call you're looking for? Yeah, kind of, with a little bit more mentorship. And one of the cool things you'll get out of it is that your materials will quite likely get translated into Portuguese. They're expecting about 95% of the attendees to be from Brazil, so it's going to be more in Portuguese than English, and they're going to be translating the materials as well as getting people started on them. Okay, thank you. So everybody heard that: please reach out to your respective project maintainers or contributors to get involved there. All right, the cloud interop certification task force. That's also me. We're just now starting our journey toward cloud interoperability certification, and I wanted to issue a call for volunteers who want to participate in that process. For the very first portion of this we are going to be doing some of the more boring things, like governance and plain old project management. But eventually we need to check in with all of our projects about what the certification process itself will look like. I know that some of y'all are highly opinionated on this, for good reasons, and I think we should start talking about that now. I'll be posting all of this to a section of the wiki. It's going to be public, but not linked from the menu at the beginning, simply because it's all logistical aspects at first. Once it gets a little more ironed out and we don't have to worry about expectation management, I'll put it on the menu. But at the beginning it's going to be there, it's going to be public, there are just going to be a lot of blank pages. Okay, I don't recall: has this been discussed in earlier TSC meetings, or is this as of today? This is one of my goals for the year. We're going to be working on a service provider certification as well as a cloud interoperability certification. These certifications are typically run internally, simply because there's a huge logistical lift for staff. I just wanted to bring it to the TSC as early as possible so that people can have buy-in and contribute to it earlier rather than later.
Okay. And as you get those materials together, maybe just having at least a few lines somewhere describing what this is would be good. So what we'll do, as with all the task forces, is regular updates to the TSC, and recruiting from it. But the main difference between working groups and task forces is that a task force basically carries deliverables from my team, so there are deliverables that we have to keep moving and stay attached to. It's a little different in that it's not as open and self-governing. That's one of the things we have to accomplish, and we'll be doing that the entire time. Okay. Well, I can appreciate that, but shouldn't the direction come from the TSC, the direction in regards to even doing these things, right? I mean, I don't have a problem having a discussion about what the right thing to do would be, but it seems like you've already decided. Decided in regards to what aspect? I don't know what interop certification means, to be honest. I mean, I can imagine, but I don't know what it means. Well, one of the things that will happen is that each of the projects will determine what their actual tests are and what those certifications will be. We have to put together the logistical aspects, like the legal contracts and figuring out where things are going to go, all the kinds of aspects that the TSC and individual contributors normally don't handle. Okay, so why don't we do this: can you please put together a proposal for this general work area? Then we can have a more structured discussion through the TSC, and we can be clear whether or not this is an initiative that the TSC is supportive of. Sure. I'm confused by that, but I will talk with Brian about it. Great. So, the project lifecycle committee. Arnaud has kicked off some emails about this, which we've linked to in the agenda. Arnaud, is there anything you'd like to call out at this point? No, not really. I mean, if you follow the link you'll see I basically just sent an email, which I assume people have seen on the TSC list, asking for people who are interested to be part of this smaller group. Just to be clear to everybody, the intent is not to solve all the issues there; it's just to triage the issues first. We already have quite a significant number of volunteers, more than I expected, but that's fine. My intent is to next start listing all the issues that I think we have to tackle. You asked me last time how long I think it will take, and my thinking is to list all the issues we think we have, prioritize them, and hopefully make progress incrementally. Not come back to the TSC saying, okay, we've been working for three months, this is the result, but instead take a more piecemeal approach, where we can tackle issues independently as much as possible. Last time, for instance, we talked about one particular project issue, and I said I didn't really want to cover it or try to resolve it. But thinking about it afterwards, there's no harm in having that on the list of issues, as long as we just don't get entangled in it to the detriment of addressing other issues that are probably easier to address. So that's kind of my thinking now.
Yeah, thanks for that, Arnaud. I think a smaller group will be a faster-moving group. I know Ry put together the wiki page for it; should we just start using that to list some of the issues we should be talking about? Yes, absolutely. I'm going to start adding some things. Yeah, feel free to do that; that's exactly what I want to offer to everybody. So I think that's good. And I will certainly appreciate at least pointers to how these things are resolved in other organizations like Apache or others. Cool. Sorry, where is this page on the wiki? Ry posted a link in that email thread. Okay, sorry. I'm not at work right now and I don't have access to my work email. It's in the email thread. Okay, thanks a bunch; sorry for the question then. Okay, I'll add it to the TSC channel of our chat. Thank you. All right, next up here we have, I think, the PSWG discussion. The update went up around this time last week, which I hope people have had a chance to take a look at, and Mark wants to lead a discussion about the future direction there. So I will hand things over to Mark. Maybe we can first see whether there are any open questions on the update as written, and then move into the directional part after that. Sure. Does anybody have any questions on it? I'm not hearing anyone. So, overall, after we got the paper out, we sort of waffled or wandered a bit trying to figure out what to do next. We had settled on one thing, and then we were asked to look at working with STAC, and that took us in a different direction for a month or two. Now that that fell through, we're back to: do we work on a provenance document for supply chain? Or what is it that the TSC wants from us, and that other projects want from us? There are a lot of cross-project working groups and SIGs, and a lot of new SIGs since this group was first formed, and to me it makes sense that we should probably be off working with subject matter experts. But in the meantime, participation on the calls has dropped down to two or three of us: Hock and Huey from Caliper are there, and Vipin and I. Todd Little from Oracle is there sometimes, and sometimes Harish. And on the attrition side, Attila and Imre will join, but they're somewhat sporadic at this point. So it's a question of what we can do to re-energize the group, but also, what do people want from us? We had the FastFabric people come and talk with us, and that seemed to energize the group a little bit. I think we're all engineers and we like actually solving problems, not just writing documents. So do people see us working more with other projects, or going off and just defining things going forward? Let's use that as a basis for the start of the discussion and let people hop in. You know, one other thing to consider is that we have sort of completion criteria, though I can't remember how we phrased it in the working group charters. So we should also think about declaring success, maybe stepping back for a bit, and then later reconstituting a similar group. That's certainly one option.
I think that continuing with the working group as is might be more productive than shutting it down and trying to spin it back up later. But I think we should always keep in mind that concluding things is an option that we have. We've concluded working groups in the past, like the one behind the major white paper that swept across all of Hyperledger. So, I don't know that this group is done. I mean, one of the reasons we established it in the first place was, I think, an expectation that at some point people would want to have some sort of benchmarking. But what we've defined thus far is a set of metrics that you would measure, not necessarily how they are measured, right? Because, taking a look at Caliper and other tooling, we don't actually have a specific defined test that can yield consistent results. That's the kind of thing the group was potentially going to engage with STAC on, but apparently we aren't going to do that, because those guys have sort of disappeared. I think it's still on the working group's shoulders to start thinking about what a benchmark looks like: if we're using Caliper, for instance, to measure performance of a particular framework, are we able to do consistent things so that you could define a benchmark that could be measured consistently across the various frameworks? So personally, I'd like to see a little closer look at where we are with Caliper: does it support all of the active frameworks, certainly, and some of the others as well? And if not, what can we recommend be done about it? I'd also like to see us get to the point where we can actually start defining some benchmarks that people could reasonably use to make decisions, because, I don't know about you all, but I get a lot of questions from clients and from analysts about the performance and scalability capabilities of the various platforms, and I think we should get out in front of it. As we finished the white paper back in the fall, remember there were a few things we had as potential next steps, and a lot of those were around faults, because one of the things that makes blockchain special is its fault resilience. There were a couple of different directions the working group thought might be useful there. One is defining metrics with respect to faults, and another would be working on workloads that intentionally introduce faults. What's particularly valuable about that is that if an architecture is designed for the happy path, but any sort of noise in the system makes it grind to a halt, that's going to be important for everybody to understand. And fault resilience is supposed to be something that is unique to blockchain stacks, or at least better in blockchain stacks than in related technologies. So I think that gives at least two options: one is a little more on the paper side, defining metrics around those spaces, and the second is the more hands-on work of introducing faults into some sort of workload.
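To make that second, hands-on option concrete: a minimal sketch of a fault-injecting companion to a benchmark run might look like the following. It assumes a Docker-based test network whose node containers have "peer" in their names; the name filter and the one-minute interval are illustrative choices, not anything the working group has defined.

```typescript
// chaos-kill.ts: kill one random node container at a fixed interval while a
// benchmark workload is running, then compare throughput and latency against
// a fault-free baseline run.
import { execSync } from 'child_process';

// List running containers whose names contain "peer" (an assumed convention).
function listPeers(): string[] {
  const out = execSync('docker ps --filter "name=peer" --format "{{.Names}}"')
    .toString()
    .trim();
  return out.length > 0 ? out.split('\n') : [];
}

function killRandomPeer(): void {
  const peers = listPeers();
  if (peers.length === 0) {
    return; // nothing left to kill; the whole network is already down
  }
  const victim = peers[Math.floor(Math.random() * peers.length)];
  console.log(`[chaos] killing ${victim}`);
  execSync(`docker kill ${victim}`);
}

// One crash fault per minute; pausing containers or injecting network delay
// would exercise the same recovery paths, which is roughly what the Indy
// chaos framework mentioned below automates.
setInterval(killRandomPeer, 60_000);
```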
Hi, this is Vipin. I do support Chris's thoughts and yours, Dan. The thing is, in terms of the need for the working group, I've always heard that everybody is looking for some objective numbers, which means either the measurement is run by an objective party, or it's set up in such a way that it can be measured objectively. For that purpose, since the load on the system depends on the actual payloads going in and the pattern of use, and since some of the more useful results have come out of, say, the post-trade settlement work that Accenture and DTCC did for Corda and DAH, we said we should choose a specific use case. One of the use cases we thought would be useful was provenance, and we started on it a little bit, but we never really settled on it. The point of choosing a use case is not just to describe it, but also to then run it in Caliper or some such tool. The natural projects we would associate with would be supply-chain-based projects like Grid or something else like that. So, to Mark's point, some subject matter experts have to come to this group. That is the first thing. On the fault injection part: Imre and Gang are the fault gurus, and I wrote something about chaos engineering and blockchain on Medium, which started off this investigation, but I want to make it very practical, because the Indy guys have created a chaos framework around the practice. That is the second thing. But all of this requires engineers and resources, and three or four people showing up to the working group is not going to be enough. So people have to ask how practical it is to work on these projects without full-time resources on them. Maybe one option would be for interns, or somebody from a company who's interested in a benchmark, to do this. The last thing is that we don't have a testnet that can be easily spun up. Chris, I know you think such a testnet should be spun up by companies who are interested in these benchmarks, and we also talked about it quite a bit here in terms of the CNCF testnet, and I know things there didn't go well. But we don't have a rig that can be easily spun up to run Caliper itself, and then, at a more advanced stage, to run chaos on that testnet, starting with chaos on a testnet and then, of course, enlarging that to production-type systems. Anyway, I've said enough here. So I think focusing on the lack of participation is the wrong thing to do, because for a working group to be successful, you need clearly defined problems to solve, and then you need the right people to participate; but you can't have the latter unless you have the former. So I think what happened is, you started the working group, you figured out a problem to tackle quickly, and there were enough people thinking, hey, this is an interesting problem, I want to participate in solving it. And you issued the paper; you were very successful in doing that, in actually fairly timely fashion. And now you find yourselves without a really clearly defined problem to solve, and so people lose interest, which is not surprising. So I think it's really a matter of whether there is a problem you can define. I think there is no shame in saying, hey, we've done what we could.
Nobody has another problem to solve for us, so we should just shut down. I think that's better than inventing a problem to solve just for the sake of it. But, in line with what Chris was touching on, I don't know that there is nothing else the group could do. So I agree with him that it's a bit of a challenge at this point, but you would have to see, with the basis you've already created with the first report, what next step you could take that would still be platform independent and would still provide at least some guidelines as to what it would take for somebody to set up a framework test bed. Hi, everyone. It's Alex from Soramitsu. I might have one proposal; I don't know if it could be interesting. Since we have a white paper and a framework, all the platforms under Hyperledger could try to perform the measurement and then submit the reports, together with the data, to the working group, and the working group can review that, say whether the measurement was done in a way that is acceptable, and then say, yes, this is the result for a particular platform. I know that there are a lot of open questions, but by having a very ambitious goal we will also find ourselves discussing the things that are maybe not well defined at the moment, like how to define the load for a particular measurement, and all the other things. Once we try to do this, and we are doing this at the moment, and see the reports, the working group will have material to reflect on the white paper, provide feedback, and at the end of the day also improve the performance of the particular platforms under Hyperledger. This is Silas. My impression looking at Caliper, after we added a fairly simple Burrow workload generator to Caliper and ran it, is that it's kind of fine, but with the workload being unconstrained, it ends up being pretty arbitrary. You end up finding a simple workload that might perform well, and that doesn't really constrain things. The basic metrics, read latency and read throughput and so on, kind of make sense, but without there being a particular problem, a particular type of workload happening, they're not very meaningful. If you look at something like the Computer Language Benchmarks Game, that has hundreds of different problems, and then you get people who particularly like a language competing to improve their implementation. I think we're some distance off that, but I think it would be useful if someone defined a standardized workload with certain properties that make it hard on some platforms, like ordering requirements so you can't get away with MVCC in some cases, or whatever other properties we can negotiate, and then leave it to the frameworks to implement their best version of that on a fixed-size network. That would actually be comparing apples with apples, and we'd probably start to build a more realistic picture. The metrics on their own don't paint a very clear picture, really.
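As a rough illustration of the standardized, contention-heavy workload being described here: with Caliper's workload-module API (WorkloadModuleBase from @hyperledger/caliper-core, as of roughly v0.4), a sketch might look like the following. The contract and function names are placeholders, and forcing all workers onto a handful of keys is just one way to make the workload hard to trivially parallelize.

```typescript
// A Caliper workload module that deliberately contends on a small key range,
// so a platform cannot sidestep ordering by executing transactions that never
// touch the same state (e.g. it will surface MVCC conflicts where they exist).
import { WorkloadModuleBase } from '@hyperledger/caliper-core';

class ContendedWriteWorkload extends WorkloadModuleBase {
    async submitTransaction(): Promise<void> {
        // Only 10 distinct keys across all workers: concurrent updates are
        // guaranteed to collide, unlike an unconstrained random workload.
        const key = `asset_${Math.floor(Math.random() * 10)}`;
        await this.sutAdapter.sendRequests({
            contractId: 'benchmark',          // placeholder contract name
            contractFunction: 'updateAsset',  // placeholder function name
            contractArguments: [key, Date.now().toString()],
            readOnly: false
        });
    }
}

// Caliper discovers the workload through this factory function.
export function createWorkloadModule(): ContendedWriteWorkload {
    return new ContendedWriteWorkload();
}
```

Each framework would then implement its best version of the same contract on a fixed-size network, which is what makes the comparison apples to apples.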
Right, and what we were trying to do, and have been working towards, is to take the metrics up a level as you start getting into these different verticals. So for supply chain there's provenance, which can apply across a bunch of them: we think these metrics are important for supply chain, and provenance in particular. So then you're starting to define some use cases and workloads a level up from the original paper, and we had figured that would be the next logical step; it's just that maintaining interest and getting it done is the challenge. Use cases would be quite hard to achieve consistently. I was thinking more of something that is an algorithmic-style problem; I mean, maybe it can be synthetic at this level, like with the computer benchmarks game. I'm not sure that the particular problems that are hard on a particular framework are especially use-case related, I would have thought, and I think they should be fairly easy to implement as well. But just straight sending of transactions that, say, don't have to share state once they get executed is kind of too easy. Having a consistent use case in a supply chain seems way too high level for what I was thinking; I was thinking more at the level of a kind of distributed algorithm. I want to agree that both performance and scalability are really important to Hyperledger, and to all the open source projects. In the community it's still a hot topic and people talk about it a lot. So if it's still important but people do not want to join the activity, I guess there must be something wrong. I think we should not simply shut down the working group; rather, we should continue the work and try to find out how to attract more people to join the activities. I think we can send an email to the mailing list to ask people what they want to see in terms of performance and scalability. I did see some discussions related to performance numbers and technical reports; maybe we can consider extending the scope, and for the next step give some numbers or produce some real evaluation results. So I'm wondering, and apologies, I don't recall who suggested it: maybe we start with teams, projects I should say, submitting analyses they may have done, leveraging Caliper or something else, to the working group for evaluation and potentially publishing as informational. Right? Not as definitive, but as informational, initially. Again, I do think we need to get to a point where we have some sort of a benchmark; that would be my preference. In lieu of that, having everyone use the same tool chain to do the testing, with similar kinds of configurations, would let us start down that path. And to Baohua's point, it's likely that if we have something that's actually delivering something meaningful to the various projects, we may get more involvement from them.
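If projects did start submitting informational analyses, comparability would come from everyone reducing their runs to the same summary figures. Here is a minimal sketch of that reduction, computing throughput and latency percentiles of the kind the white paper's metrics describe, from per-transaction timestamps; the record layout is an assumption, not a PSWG-defined format.

```typescript
// Reduce a run's per-transaction records to headline metrics:
// throughput (committed tx per second over the measurement window),
// success rate, and transaction-latency percentiles.
interface TxRecord {
  submitMs: number;   // wall-clock time the client submitted the tx
  commitMs: number;   // wall-clock time the tx was observed committed
  committed: boolean; // false if the tx failed or timed out
}

function summarize(txs: TxRecord[]) {
  const done = txs.filter(t => t.committed);
  if (done.length === 0) {
    throw new Error('no committed transactions to summarize');
  }
  const latencies = done
    .map(t => t.commitMs - t.submitMs)
    .sort((a, b) => a - b);
  const windowSec =
    (Math.max(...done.map(t => t.commitMs)) -
     Math.min(...done.map(t => t.submitMs))) / 1000;
  const pct = (p: number) =>
    latencies[Math.min(latencies.length - 1, Math.floor(p * latencies.length))];
  return {
    throughputTps: done.length / windowSec,
    successRate: done.length / txs.length,
    latencyP50Ms: pct(0.5),
    latencyP95Ms: pct(0.95),
  };
}
```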
And then, of course, we always have the problem of scheduling, the thing Vipin is trying to solve by sort of brute-forcing the situation with the identity working group, doing the iron-man two-meetings-a-day kind of thing, versus, I don't know, maybe just moving the meeting time around a little. I know personally I'd like to be involved; I'm actually leading the performance work at IBM for what we do on Fabric and the IBM Blockchain Platform, but I can't attend at that time, I have a conflict. So can I just ask again, moving up a level for a minute? Because I think the concerns that Mark is expressing here are ones that apply to all of the working groups. I know the numbers that we see in the architecture working group have gone down; Vipin is struggling with identity; performance; some of these others. Is it time to step back for just a second? The model that we have right now worked well at the beginning, when we were in a very formative stage, but the emphasis has clearly moved from the work in the working groups into the projects themselves. I know it's a little frustrating at times, and I didn't see Rahman this morning, but I know he would express this as well: it's a little frustrating to be in a role where we feel like we're doing nothing more than describing what's already being done in the projects, rather than having any ability to exert some influence. The cadence of the groups being every two weeks means that it's much easier to just not do anything between meetings and then show up and work for an hour, which means the pace at which things get done is insanely slow. Is it time to step back for a second and figure out, in general, not just for performance, what we expect from the working groups? Do we expect them to be documentation engines that describe what's being done? Do we expect them to actually formulate new concepts and ideas? Are they capable of formulating new concepts and ideas better than, for example, the academic researchers who are pushing things forward, or better than labs, where the experimentation happens? What is the right role for working groups, given the maturity of Hyperledger? Well, the other thing I would add to that, Mick, is that there seems to be a huge growth in special interest groups. So maybe the working groups just become special interest groups; I don't know, but they seem to be thriving, the topic-specific or application- or domain-specific kinds of things. I mean, there's an explosion of special interest groups, and I'm not complaining. So maybe instead of this being a working group, it becomes a special interest group. So, hey, this is Brian. The point of the working groups, as I've understood them since I landed in Hyperledger when they already existed, was to try to get a degree of cohesion across the projects, across the actual development of code, right? So that you could talk about performance in a way that was not rooted entirely in how one framework chose to implement it, but was actually a conversation across different projects. And it's always been the hope that the working groups created output of some sort, white papers, as architecture has. There was even a white paper working group once, that sort of thing. But these things don't have to be evergreen. They can be sunsetted.
That's fine. But if somebody from the outside asked, does the Hyperledger community care about performance, or care about a common approach to identity or architecture, the usual answer is, well, this would be a place to come and have those cross-cutting conversations. And anything in an open source project only works to the degree that there's interest and motivation among the participants. I would hope that on each project there would be people interested in these cross-cutting issues, because I think being able to tell the world that these projects are more than a collection of siloed, self-starter initiatives, that there's actually somewhat of a cohesion to the Hyperledger community, matters, and this is one of the ways it manifests. And to Mark's concern, this is something very distinct from the special interest groups, at least as they've evolved and as we've been managing them. The SIGs are intended to be the place where we try to match up the technologists working on the projects with domain experts in a field like healthcare or supply chain or trade finance. And again, a SIG can be cross-project, abstracted away from the projects, though it should definitely be talking about the projects and what they're doing in its domain. And the output should be some sort of content, whether that's blog posts, maybe white papers, or maybe just sharing of use cases, that might incidentally drive some requirements, or drive interest in contributions back to the projects, or even new projects. So that's the difference between the SIGs and the working groups. And my take is, if there is momentum lost in the working groups, there's, like Mick said, no shame in wrapping them up. But we should just recognize that we do need some forums for this cross-project communication and coordination from time to time, and we shouldn't do away with the concept of working groups altogether. Okay, and maybe it's a failure on my part, in leading the group, not to have specifically reached out to all the different projects that would be involved. You know, we've had a close working relationship with Caliper, given the nature of our work and the acceptance of the Caliper project. Yeah, actually, with Caliper we held off on accepting it until we had created the performance working group, because there was a desire to create a kind of abstract definition of what performance meant in a DLT context, across things as different as Fabric and Sawtooth, before authorizing one particular benchmarking tool, which might have been rooted more in one framework than another. So anyway. And Mark, it's not a failure on your part or anyone's part. We have to work with the interests of the people who show up in the community and what they want to accomplish. I think just making sure we get the word out, which you have been doing, and having this conversation here, is exactly the right thing. So it's really a question back to the other 35 people on this call: is talking about performance, and having a place for it across the different Hyperledger projects, important, interesting, useful, or not? And if it's not, then we'll just wrap it up and find other ways to have those conversations. Attila, I see your hand up.
Yeah, two things really piqued my interest in this discussion. One is that probably every team has done some performance testing for its own platform; I know this for Burrow, for example, but I don't need to go into that now. So I think the performance working group would be a nice crossroads to collect these private performance tests and see if some common patterns start to emerge. It might be a stepping stone towards, I don't know, cross-platform benchmarking. I know it would help a lot with Caliper shaping its path: we try to be platform independent at the workload level, but for that we need to see how each project performs its own performance testing. The other thing that's really interesting, I think, is the fault injection and workload side. Everything is kind of boring when everything is good; the interesting things happen when things start to go sideways, for example when recovery management starts to kick in, crash fault tolerance, Byzantine fault tolerance, and so on. So this could be another common effort among the projects: create some scenarios for intentionally crashing the systems and see how they react. Okay, so we've got two things that have emerged here. One, on Mark's original question, the thing that has resounded with me is getting focus on the next objective, and I think maybe the fastest way to do that is, Mark, for you to just take the lead on defining it, maybe bringing it back to us here as an update to the working group charter; then we can rally support around that particular objective and see if we can help drive participation towards that new end. And the second issue that has emerged, which Mick is raising, is overall working group efficacy. I think that probably bears some more offline consideration and discussion before we take some sort of action there. Okay, and I guess the other ask I would have is for the different projects. Dan has participated, and Chris has participated, in the PSWG, but what do the projects want from the performance and scale working group? Because I think each of the projects sort of has its own performance work going on that's not necessarily in concert with what the PSWG is doing. So how can the PSWG become involved in the different projects and help out? That's one way to look at it. Maybe the project leads could think about that. Yeah, definitely. I think Mark has also done well in reaching out on the mailing list on multiple occasions looking for input, and you can only react to the responses you get; if we don't get responses there, then maybe that's a sign that those aren't productive avenues to go down. All right, the last couple of things we have probably require some pre-reads. So I think we'll look for the taxonomy and the chairs-and-vice-chairs discussions to get introduced on the mailing list first, and then we can discuss those at the next meeting. Sure. I think that brings us to a close for our agenda this week, and it looks like we'll continue to have some offline discussion on Mark's topic, and maybe on the working groups in general. I'd like people to participate in that in chat, some of which is going on in the TSC channel right now, or on the mailing list; the mailing list is of course preferred for this type of discussion,
so we can capture that, and people who aren't available in real time can still participate effectively. And with that, I think we can bring this meeting to a close. Thanks, Dan. Thanks, everyone. Thanks, everybody. Thank you, guys. Bye.