All right, we are at five after now, so let's go ahead and get started. I know Chris Ferris is out, so I'll help facilitate the call this week.

I think let's start with the TSC election results, just because that informs the TSC for today's call. I sent an email out a short while ago. Really excited to see the new TSC; please join me in welcoming the new eleven folks. You can see it on the agenda slide here, but those new and returning TSC members are Arnaud, Baohua, Binh, Chris, Dan, Greg, Hart, Jonathan, Kelly, Mic, and Nathan. From that, we are going to send out an email to them today to begin the nomination process for the chair position. That will be open for a period of one week, at which point the election will kick off. We'll collect votes on that for one week, so two weeks from today we will announce the chair for the TSC.

In terms of checking for quorum, I think quite a few people are on summer holidays this week, so I don't suspect we're going to reach quorum to be able to vote on anything today. But from an attendance standpoint, of the new eleven TSC members, so far I see Arnaud, Greg, Hart, and Jonathan. Have any of the others joined: Baohua, Binh, Chris, Dan, Kelly, Mic, or Nathan?

This is Nathan, I'm on the call.

Hey, Nathan, welcome.

Yes, I'm on the call too. Hi guys.

Hey Jonathan, I got you. All right, so we are at five of eleven, quite a bit shy of quorum. That's fine, we can still move forward.
Just no votes for today. We can move a lot of the discussion to email, and if we need to vote on something, that can be an email vote, or we can push that to the following week.

So moving forward from there, the next agenda item is Hackfest planning. The Chicago Hackfest is September 21st and 22nd; if you're planning to attend, please register as soon as possible. Really excited to see that we have the draft agenda already, and I've seen quite a few topics in there. Anything else you want to see covered or discussed, or something you may want to present on, please drop it in there. We will run this in an unconference format like all the other Hackfests.

And then in terms of Europe, we are still trending towards late October. We're evaluating several locations, and we hope to have that finalized in the next week or two so we can get the registration site up and people can begin booking travel. Any questions?

Yes. For the October one, the latest dates I saw were on the weekend at the end of October. Is that still your favorite option?

Good question. It's unclear at this point. We had a solid option that was going to work at that time, but there were some things that prevented us from moving forward with it. So now we're a bit back to the drawing board and evaluating some different options, which does impact the dates. So no, it's not certain that it would be that weekend. But as soon as we have some more details or better clarity, we'll come back to this group with what that looks like.

No, okay. Actually, this is good news, because I would rather not have the meeting on a weekend, but I will oblige if needed. What I didn't quite understand, quite frankly, and I thought you might clarify, maybe I missed it before: we had a Doodle poll, and then we had some dates, and then somehow we're ignoring those dates. What happened to that?

Yeah, good question. So this came up on the TSC call a couple weeks ago.
There was the Doodle poll, and there were two preferred weeks towards the end of October. The discussion from that was essentially that on one side of those dates was the Member Summit in Singapore, and on the other side, I believe, was Money 20/20 or some other industry event. The discussion eventually resulted in landing in the middle, which was that weekend, I believe. The other consideration was just around the Halloween holiday, which, for people with families and children, we really wanted to avoid. So that's why we had homed in on the weekend. But at this point that venue option isn't going to work, so we're going to have to reevaluate a few things, and, you know, until we have some clarity on options there's not much we can do.

That's all right, I understand. Obviously, I'm sure you know that the sooner we settle all these dates the better, because I have a whole bunch of other things in the mix and I keep postponing an answer based on that.

Absolutely, completely understand, and we'll try to get this sorted out as soon as possible.

Thank you.

All right, any other questions on those topics before we move on? All right, sounds good. So Mark, assuming you're on the call, this is back to you. A couple weeks ago you had raised the charter for the Performance and Scalability Working Group. There were some comments from the TSC, which I think at this point you have factored in, I believe two weeks ago on the call. We didn't have quorum then, and you also wanted to bring this back to the Performance and Scale Working Group just to get their approval. Are there any updates you want to add at this point?
I know we won't be able to vote as we're not at quorum, but any discussion to close out, and then we can either move to an email vote or postpone a vote to a subsequent TSC call.

So at this point I've missed the last two Performance and Scale Working Group calls because of personal issues. But the main sticking point with the working group is that they want more clarification on rough consensus. I've tried to stick with what the other working groups have done. So I need to go have a talk with the working group and get that resolved. So technically we haven't voted on it yet. People want more clarification on rough consensus, so if there are any guidelines or suggestions people want to make, you know, I'm open to that.

Okay, and I think Arnaud, or maybe it was Arnaud or Ram, had sent on some stuff, I believe, that the W3C uses, or potentially another group.

Yes, this is Ram. I think Arnaud sent the IETF stuff and I sent the W3C one, right? And so the question would be, you know, how complicated do we want the charter to be?

I don't think the idea was for you to really encompass or include any of this in the charter. It was just to say, you know, decisions are made by consensus, and then you vote. It's the chair's responsibility to figure out when that doesn't work and a simple vote needs to be used. But, you know, I don't think the charter needs to get into any of these details.

Right. I didn't send those as, you know, recommendations or guidelines, and I agree with you. There are some members on the call that felt they wanted more specifics, so I will get that resolved following your guidelines.

All right. And as an example, in the Architecture Working Group we just mentioned that decisions would be made by rough consensus, without detailing exactly what rough consensus means, and, you know, that's worked out fine in our group.

Yeah, and I think I cut and pasted that, so that should be fine. All right.
So Mark, it sounds like you'll raise this in the next PSWG meeting, and then we can bring that back onto the TSC agenda at that point.

Yes.

All right, sounds good. Thank you for driving this along. Any questions or comments there before we move on to the final topic for today? All right, sounds good. So checking quickly: is Haojun on the call?

Yeah, I'm here.

All right. So Haojun submitted a proposal for the Hyperledger Caliper project that hit the mailing list a couple days ago. So I think for the call today, Haojun, if you want to give an overview of what you're proposing. I know ideally we'll move a lot of the discussion onto the mailing list, just to be really inclusive of everyone across time zones, but let me give you some time to walk through your proposal with the group.

Okay, yes, I can introduce that. Shall I share my screen?

Yeah, that works. I've just given you presenter capability. There you go.

Okay. Can everybody see my screen?

We can see it.

Okay, good. So this is Haojun from Huawei, and I'm very happy to introduce the project, Caliper, at this meeting. Basically, Caliper is a blockchain benchmark framework which is designed to integrate with multiple blockchain systems and tries to help people evaluate the performance of those blockchain systems, because we all know performance is one of the key concerns for customers. However, currently we think there is no widely accepted test framework for blockchain. There are some test tools for different blockchain systems; for example, for Fabric there is PTE, but it only supports Hyperledger Fabric. We also found a project named BlockBench, which was developed by the National University of Singapore and another university. BlockBench provides a benchmark framework for multiple blockchain platforms, including Fabric 0.6, Ethereum, and Parity. We think this is a good framework for reference, but it
does not support Fabric 1.0 or other Hyperledger blockchain platforms like Sawtooth, and neither does it provide a pluggable capability for supporting multiple test cases.

So why do we think we need such a framework? We think there are three main reasons. The first is that some blockchain solutions may claim certain performance numbers, but how those numbers were achieved is usually undisclosed, so it is hard to verify the evaluation and impossible to perform the same evaluation on different blockchain projects. The second reason is that there is no common definition of performance indicators, so it is impossible to compare two performance results if their meanings are not the same. By the way, the PSWG is doing this work now, and that's great. The third reason is that we think there are no commonly accepted benchmark use cases. Blockchain performance may be different in different scenarios, so how to define the scenarios is very important for such a benchmark framework.

So based on those reasons, we think there are three main goals for Caliper. The first goal is to provide a benchmark framework that supports multiple blockchain systems. The framework should be pluggable to make it easy to integrate new blockchain systems. Also, a set of blockchain-neutral northbound interfaces is provided to help developers write unified tests for those multiple blockchain systems. The second goal is that Caliper should work together with the Performance and Scalability Working Group to make sure that the performance indicators defined in the PSWG are correctly implemented. And the third goal is that Caliper will try to provide some benchmark cases for typical blockchain scenarios, which will help people understand blockchain performance in those scenarios. Again, we think the use cases should be discussed together in the PSWG, and we can implement those use cases in Caliper.

So what's the status of Caliper? Caliper is an
internal project started at Huawei in May, and now it is open source and available on GitHub. The reference is in this document, so if you're interested, you can find it there.

About Caliper in general: the framework has three main layers, from bottom to top. The first layer is the adaptation layer, which is used to integrate specific blockchain systems into Caliper. Each adapter implements the common blockchain NBI using the corresponding blockchain's native SDK or RESTful API. We now support Hyperledger Fabric 1.0 and Sawtooth. The second layer, which we call the interface layer, provides northbound interfaces for upper applications, including blockchain-operation-related interfaces, resource-monitoring-related interfaces, and performance-measurement-related interfaces. And the top layer is the application layer, where test use cases are implemented for users.

Now, we have also talked to some companies, and they and other companies are very interested in the project and have promised to contribute to it if it is approved. Okay, and the first step of this project is to provide a benchmark framework which is capable of running benchmarks on both Fabric and Sawtooth. For the next step, we think we should work together with the PSWG to define the performance indicators and use cases, and implement those in Caliper.

As for what a successful project looks like, we think the project can be considered successful if it attracts many users within the wider community to use it as a benchmark framework. And if other projects show up and the community is more interested in those instead, we think it's okay for the project to be closed.

Okay, that's my introduction.

Great, thank you. So, a quick question.

Sorry, I can't hear you.
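The three-layer design Haojun describes (adaptation layer, interface layer, application layer) can be sketched loosely in code. All class and method names below are invented for illustration; this is not Caliper's actual API, just the shape of the pluggability idea.

```python
from abc import ABC, abstractmethod

# A loose sketch of the layered design: an adaptation layer (one adapter per
# blockchain), a blockchain-neutral interface ("NBI"), and an application
# layer whose test cases never see backend specifics. Names are illustrative.

class BlockchainAdapter(ABC):
    """Adaptation layer: subclass once per blockchain system."""

    @abstractmethod
    def invoke_contract(self, contract_id: str, args: list) -> dict:
        """Submit a transaction and return its status."""

class FabricAdapter(BlockchainAdapter):
    def invoke_contract(self, contract_id, args):
        # A real adapter would call the Fabric SDK here.
        return {"status": "success", "backend": "fabric"}

class SawtoothAdapter(BlockchainAdapter):
    def invoke_contract(self, contract_id, args):
        # A real adapter would use Sawtooth's REST API here.
        return {"status": "success", "backend": "sawtooth"}

def run_test_case(adapter: BlockchainAdapter, tx_count: int) -> int:
    """Application layer: the same test runs unchanged on any backend."""
    succeeded = 0
    for i in range(tx_count):
        if adapter.invoke_contract("simple", [i])["status"] == "success":
            succeeded += 1
    return succeeded
```

Swapping `FabricAdapter` for `SawtoothAdapter` changes nothing above the adapter boundary, which is the pluggability property the proposal aims for.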
Can you... So you say the framework currently supports 0.6, and you're working on 1.0, and there's Ethereum and the Parity browser... or is that the BlockBench you talked about?

Yes, yes. That is another project, which was developed by some universities.

Okay. So I'm just asking: what is already working, and what needs to be done? I didn't understand that. So can I run a benchmark now, of TPS on Fabric 1.0, if I want to try?

Yes, yes, you can.

Okay. Jonathan, I think I can answer the question. Caliper now supports Fabric 1.0 and the current version of Sawtooth. You can go to the GitHub page, download the project, and run a benchmark. It will show what the TPS and latency are, and in the meantime it can monitor the resource usage of the blockchain system that the test is running against.

Okay, I'll play with it. So, just very quickly: what is the setup? Because if I change the setup in Fabric, and this is true for most blockchains, right, then I will have different results. If I have different peers, different orderers, how can I compare? By the way, let me just say it's very useful to have such a tool, hands down, no question. So first of all, thank you very much. I've got to understand how we can use it; this is why I'm so excited.

Currently you can use a configuration file to define the backend network. So you can define how many peers there are, and the addresses of the orderers and their gRPC addresses, and the benchmark test cases can use this configuration file to interact with the backend blockchain system.

Understood. Okay, so I can configure it in the tool, the tool will set up a demo setup, and then we'll test it. Is that correct?

No, no, we don't support setting up the network automatically.

Yeah, that's what I wanted to know. Thank you. So I have a different question.
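The kind of network configuration file just described might look roughly like the sketch below. Every key name here is an assumption made for illustration, not Caliper's real configuration format; the important point from the exchange is that the file describes an existing network the tool connects to, rather than one the tool deploys.

```python
# Invented illustration of a backend-network configuration: it tells the
# benchmark where an existing network's peers and orderers live. The tool
# does not stand the network up itself.

network_config = {
    "type": "fabric",
    "peers": [
        {"address": "grpc://peer0.example.com:7051"},
        {"address": "grpc://peer1.example.com:7051"},
    ],
    "orderers": [
        {"address": "grpc://orderer0.example.com:7050"},
    ],
}

def endpoints(config: dict) -> list:
    """Collect every address a benchmark client would dial."""
    nodes = config["peers"] + config["orderers"]
    return [node["address"] for node in nodes]
```

Changing the peer or orderer list changes the system under test, which is exactly why, as raised later in the call, the configuration needs to travel with any published numbers.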
This is Arnaud. I mean, given that there are other projects, including this BlockBench, have you reached out to them? Would they be interested in joining your effort?

Basically, not yet, because we looked at the project and it's not updated; it seems like it's been about half a year. So we developed a benchmark framework ourselves.

Yeah, and it is also from a university, so it's probably a student project, and after the semester is over it's probably gone. That's very possible. But, you know, with this kind of effort, I hate to see competing efforts by accident. So I think it would make sense to reach out and say, hey, we are interested in launching this effort within Hyperledger, is that something you would be interested in joining, and get an answer. And if you don't get an answer, that's an answer by itself, right?

Right, and the working group has discussed reaching out to the BlockBench people. We reviewed their work and didn't like some of their conclusions and their test framework, but now that we're moving ahead to potential projects, we can take that up as well.

A question I would have: are you folks, the Caliper folks, going to be in Chicago? Should we put something on the agenda to, you know, pull people together and go through this in more detail in Chicago?

I'm not sure. Victor, are you going to Chicago?

No, I'm not. In the PSWG meeting two days ago I talked with Marta; I cannot attend the Chicago meeting, and she told me there will be a next Hackfest in Europe.
We can discuss it at that time, at that meeting. I think Marta said we are good to go.

Just to clarify: what I said is that, from my personal perspective, I think it's a good project and I think that you should be proposing it to the Technical Steering Committee. I did not say, for the Technical Steering Committee or on behalf of the Technical Steering Committee, that you are good to go. But yes, I said that there will be a Hackfest in Europe and that this is probably a better time, since you can't go to Chicago.

All right, other questions or comments from the TSC at this point?

I think in the document the concern was raised about how we make sure that the tests being used continue to be fair, or represent the different projects appropriately. From what I'm hearing from the maintainers, they're taking a very even-handed approach about this, but it's something that we'll just want to be aware of and careful about as we go forward, with how we publish those benchmarks. I think Jonathan's comment reflected that the details of how things are configured are going to matter a great deal, especially for some of the use-case-specific blockchains like Indy. Making a comparison that actually makes sense between, you know, an identity ledger and a general-purpose smart contract ledger is going to be difficult. But it looks like the tool itself will support setting up a lot of different kinds of environments, so I'm excited by the idea that we can have some common tooling to help us make more apples-to-apples comparisons.

Because you bring up an interesting and important point.
I mean, the publication of benchmark results: I don't know that this is on the table right now, but I would be very worried about doing anything like that. It always raises liability issues, which I don't think any one of us wants to get into. I think they should solely be focusing on developing the tool for people to use for themselves.

Yeah, yeah. We don't want to publish any benchmark results; we just want to provide the tool.

Yeah, I think that's my interpretation of this document as well. I have not looked at the code yet, but my interpretation is that this is a framework, and each implementation of blockchain technology would plug in their specifics to drive the performance testing using this tool. So it's a framework that allows us to plug in stuff to drive each of the implementations specifically. There are always differences between implementations, so each plugin would be able to take care of that. And especially once, you know, the Performance Working Group agrees on a set of metrics that we need to provide measurements for, then that's good. Then, with a tool like this, we all can provide plugins so that we can come up with the measurements for those agreed-upon metrics. So I'm also supportive of this. We're not calling for a vote formally at this point, but I'm supportive of this.

This is Greg.
I would echo the same sentiment. I'm on the phone, so I haven't had a chance to see the slides or really review the doc deeply, but I think it's a good idea in general, especially, I mean, even just within one project.

Yeah, I agree with everybody. You know, this is a fantastic tool and I'm definitely in favor of it. As I've pointed out, I think we need to be careful about how we publicize test results. So we should probably, and this doesn't have to be in the proposal, but I think you guys should think about how you might want to handle the situation if someone, you know, claims something, or things like that. Because even if the official group doesn't publish tests, what happens if Jonathan publishes some tests on his blockchain and then I publish some tests on my blockchain and we get different results? It would be very confusing for the community, and I think it's something we should think about. That's all.

Yeah, yes, we all have the same reservations, I think, right? We're all concerned about publishing material. We can also generalize it into: how can we reproduce results, right? So if I run something on my laptop and I'm going to publish it, I should also publish the spec of my laptop; you don't know what else I run on my laptop. The reason I was asking about the setup is, and maybe this is just a suggestion, not a call for action, but a suggestion would be that maybe the tool could have a simulated mode. So, okay, there are two parts, right? One part is that it's always nice to have a monitoring tool, kind of to examine an existing system. So I just come in and I plug in my monitor and I have a look at it: you know, CPU usage, TPS, whatever we have there, right?
The two or three measurements that we have. And the other option is to say, when we want to test a setup, we can pre-establish the setup with a configuration file, and then we can run it that way. Again, it's not bulletproof or watertight, but we can try to compare. For example, what is nice is to be able to re-run the same thing, let's say on a weekly build, and take a look at the performance of Fabric week after week, and do the same thing on Sawtooth and the same thing on Monax, it doesn't matter on what platform, but just to see a graph and see: are we improving or are we regressing? The nice thing is we'd try to run on the same platform, the same hardware, and maybe that can be an indication. We can still play with these numbers as well, but I'd just like to say, if we can be as neutral as possible, I think that should be a goal, right? Just a suggestion.

Yeah, I think it's a good suggestion, and we should study how to automatically set up a blockchain network. I think making the system under test reproducible is very important, and currently we're trying to use Docker Compose to build the system. And per our plan, in the future we're trying to integrate with Cello and ask Cello to provide such systems for benchmarking; I think providing different kinds of systems is a basic purpose of Cello.

About the benchmark results, as Haojun has mentioned before, Caliper is not intended to publish any benchmarking results; it's only a tool provided for customers. The only advantage customers can get from the tool is that they can run the same test on different blockchain systems. Having different results can also be common, because different blockchain systems can have different performance depending on different use cases.

So there's setup, use case, topology, definitely; one can write different chaincode and run on different machines, of course. Definitely, yes.
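The weekly re-run idea above, tracking whether a project is improving or regressing on the same hardware build after build, could be sketched as follows. The 10% tolerance is an arbitrary assumption for the example, not anything the tool prescribes.

```python
# A sketch of the weekly-trend idea: keep a history of throughput figures
# from runs on identical hardware and flag when the newest build regresses
# past a tolerance.

def detect_regression(weekly_tps: list, tolerance: float = 0.10) -> bool:
    """Return True when the latest weekly TPS dropped by more than
    `tolerance` relative to the previous week's figure."""
    previous, latest = weekly_tps[-2], weekly_tps[-1]
    drop = (previous - latest) / previous
    return drop > tolerance
```

The point of the comparison being week-over-week on one platform, rather than cross-platform, is exactly the neutrality concern raised in the discussion: the same harness, same hardware, only the build changes.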
Yes. All right, I think the main thing is that if I run the test ten times in a row on the same setup, I need to get fairly consistent results.

Yes, yes, I think this should also be taken into consideration. Maybe run it multiple times and have some average, or a monitored curve of how the performance changes. It depends on how the metrics are defined; I think both the PSWG and the Caliper project should work on that.

So I have a question, and this might be stupid, as I'm not familiar with the details, but would it be conducive to automation, like to a CI pipeline, to do regression testing?

Yeah, that's what I wrote in the chat, but I think so. I think it can be useful.

I think maybe it's not an original purpose of the tool, because Caliper is designed for blockchain systems and users; it's not for developers. Tools aimed at development are more focused on identifying flaws or bugs in code. But anyway, if you guys think it's necessary, we can take it into consideration as a future plan.

So I wonder if, in fact, the goal shouldn't be revised, because, you know, it invites the kind of problems we're talking about: people using these to publish results, with all the caveats that have been mentioned. Maybe it shouldn't be described as something for people to use to compare one framework against another, even though that's an interesting use case for it. At least we should have, you know, proper warnings against jumping to using this for publishing, because of all these problems; we know it's really meant for people to use themselves.

Hasn't this always been true for benchmarking of anything and everything? I mean, CPU benchmarking, graphics card benchmarking.
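The consistency requirement raised above, that ten runs in a row on the same setup should give fairly consistent results, can be made concrete with a small sketch: report the average across runs and only call the figure stable when the run-to-run spread is small. The 5% threshold is an illustrative assumption.

```python
import statistics

# Run the same benchmark several times, report the mean, and judge stability
# by the coefficient of variation (spread relative to the mean).

def summarize_runs(tps_samples: list) -> dict:
    mean = statistics.mean(tps_samples)
    spread = statistics.stdev(tps_samples) / mean  # coefficient of variation
    return {
        "mean_tps": round(mean, 2),
        "cv": round(spread, 4),
        "stable": spread < 0.05,
    }
```

A high coefficient of variation is a signal that comparisons built on the numbers, whether between builds or between platforms, shouldn't be trusted yet.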
I think it's always been a problem. You can always game benchmarks to make it look like your GPU is faster than your competitor's.

I think you're right, but that's why benchmarks often are standardized: they go through this meaningful process of people agreeing on the set of benchmarks.

All right, and they publish the metrics; look at the de facto ones. Yeah, maybe we should try to do that. That's why I was asking whether the configuration is part of the setup, so that we can rerun the configuration and try to reproduce, and I can say: look, I ran it on four nodes, ten times, this is the average; I ran it on six nodes, this is the average. And then people can actually think about the topology, for example. So, by the way, Victor, we're all trying to think about how we can use the tool in a way that's actually not risky, so that it will have more benefits than us embracing something that will backfire on us. Because maybe that's the concern, right? That the TSC approves kind of a de facto standard, and then people say, hey, we used your tool, look, this is the result.
So we need to really be careful. I think everybody agrees on that one, and I think trying to standardize the benchmarks is outside the scope of the project to build this tool. If we focus on the value of this tool allowing the developers to track whether they're improving or regressing, that's probably reason enough to build this tool.

Yes, but I don't think we should get off into the weeds about trying to make it standardized.

Yeah, the success is really making sure that people are not misusing it and falling into all these problems.

Yeah, I mean, I get that. I think just standard language of, you know, "there is no such thing as a standard benchmark for these blockchains, and this is really just a tool for developers to measure which way they're going when they make changes."

Right. How the benchmark is made does not necessarily need to be standardized, but standardizing how the metrics are defined is necessary. I know some blockchain systems claim they have millions of transactions per second, but the reason they have such a high TPS is that their definition of TPS is not the same as the usual one, so this is misleading to customers. But I don't think this should be done only by the Caliper project; I think the PSWG can do the work better, and Caliper can help to define these metrics. This part, I think, should be standardized. But how to make the benchmark, that is Caliper's task; it is not a de facto standard, but it follows the definitions that the PSWG has already made.

Yeah, I was going to say, when I proposed the Performance and Scale Working Group in DC,
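The point just made about TPS definitions can be shown with a tiny invented example: on the same trace of transactions, "submitted TPS" and "committed TPS" (counting only transactions confirmed in a block) give different numbers, which is why metric definitions must be agreed before figures are comparable.

```python
# Two TPS definitions applied to one invented trace of (submit_time,
# committed_flag) pairs over a fixed-duration run.

def tps(events: list, duration_s: float, committed_only: bool) -> float:
    """Count events per second; optionally only those that were committed."""
    count = sum(1 for _, committed in events if committed or not committed_only)
    return count / duration_s

trace = [(0.1, True), (0.2, True), (0.3, False), (0.4, True), (0.5, False)]
```

On this trace, counting all submissions gives a figure nearly double the committed-only figure, for the same system and the same run.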
one of the things was, people want a tool that will fairly measure across all blockchain and distributed ledger implementations, so in theory we could throw Go Ethereum in it, and other things. And Brian was clear he wanted something that was like a matrix to help people pick the right solution for their thing. You know, Fabric may not be the right solution for everything; Sawtooth Lake might not be the right solution for everything. And so I'm a little confused, because I hear people saying we only want to use this tool for development, to go figure stuff out, but I thought part of the Performance and Scale Working Group's goal was to go off and define what an industry benchmark should be, and work across the industry to get there.

So I agree with you. When Dave said earlier we shouldn't standardize a benchmark, I was thinking, well, that's what the PSWG is doing; that's what I think it's doing, in a way. But for the tool itself, my concern is mostly this idea of us, meaning Hyperledger, publishing results. I don't know that we want to go there. Then there's the question of, you know, if people use the tool to do this kind of stuff, we should have all the warnings necessary to ensure that people do that with all the right information, so that, as Jonathan was saying, at least people could reproduce the results.

Right, and that was one of my thoughts all along: having worked with SPEC and things like that, anything you do needs to be reproducible, and you have to give all the information so people can take identical hardware and identical software and reproduce your results. So, you know, Caliper's popped up in the last couple weeks; I haven't really had time
I haven't really had a time To sit down because I've missed the last meetings And go over things with you know my thoughts on some of the things I think You know we're going to get there And I think it's a good start Yeah, I think that I mean part of the issue is with reproducibility Is that a lot of this is going to a lot of the results have the potential to depend on network topology? So if I'm running on a different network than you my results might be dramatically different And and this leads to you know possibly gaming the system where my blockchain might be worse than yours But if I can somehow manipulate the topology to my advantage, I can claim better results I don't have to do this through the hyper ledger I can just do this on my own projects and I can say like oh caliper says You know my blockchain is is better so So I mean that's kind of the the drawback. I think or that that's the concern everybody has In my opinion on this is we should recommend strongly that people People test out for themselves that they set up blockchains and they run caliper, you know on their application Before making a final decision. Yes That's exactly what I'm thinking about Sorry Sorry, I'm not happy to go ahead. So thank you. It's Marta. How about we built into Calipel a part as a part of requirement or as a part of the framework or tool that Together with the results of the framework or to together of the measure results of the measurements metadata is published which means you know what What settings were used what parameters were used and so on and so forth so that you can't really publish The sole results, but it always is attached. 
We have always attached the whole setting of How how the tool was set Yeah, and that's where I was hoping the Performance scale working group would get to is defining a lot of these things We can put in the software license that you can't publish results without including the output of everything That we generate things like that So that you know people need to fully disclose what we feel is important I agree if people want to publish the result It should also provide the configuration fires and all the description of the system on the test and so that others can make this configuration fires to to Reproduce If you don't mind guys, let me give me some time I'll play with it and I can give some more feedback It sounds good. It sounds promising. I hear the concerns Let's see if we can if we if it can help and if mark will feel that we can use it And I'll try to see if if I can help as well. Maybe I can be helpful I Am maybe we can keep discussing it by email Yeah, yeah, I saw it no problem no problem I may have questions and if you want to guide me to stop with pleasure at least for some time Yeah, it's pleasure cool So a procedural question for the TSC Last time we discussed the performance and scale working group. They wanted me to add that Working group would oversee projects regarding benchmarking and scalability So it would seem that this would fit the case that PSWG would be overseeing this project in addition to the TSC is that everyone's understanding Yes Sure, I mean I think the the proposal you know highlights the need to cooperate with the PSWG so That makes sense to me. All right sounds good So it sounds like the further discussion will then get moved on to the mailing list where the proposal was originally sent to and Continue collaboration with PSWG Any other Final questions or thoughts before we wrap up for the day hearing none then it sounds like let's give everyone 10 minutes back Thank you everyone Thank you