So, good morning, everybody. For the agenda today, we have the Hackfest planning. We also had an email ballot on the amendment to the process for counting contributors, so we'll review that, and then we need each of the workgroup chairs to provide Todd and team with the list of contributors so that we can make it public, and if people want to contest it, they may. And then for the Performance and Scaling Working Group, Mark is going to review the proposed charter that the team has come up with, and if we have a quorum, we'll have a vote. Is there anything else people would like on the agenda today? Hearing nothing, I think we can start. So, Todd.

All right, I'll move through the first two topics pretty quickly; they're really just FYIs. On the Hackfest planning side, no real update. We are still looking at September 21st and 22nd in Chicago, just trying to finalize the spot. We'll let you know as soon as that gets locked down. On the Europe side, we still have a Doodle poll out. It's looking like there's no strong preference among the five weeks we laid out, so if you do have a preference, please be sure to get that in ASAP. Otherwise, we're going to move to lock in a location.

It might be worthwhile, Todd, just to send out a reminder and a deadline for closing the poll.

Yeah, will do. I suspect that part of the problem with getting people to respond is that everybody's out on holiday. Thanks.

All right. Onward from there: the TSC election process for 2017. Let me just drop this link in. The only thing that got added comes from the email vote last week, like Chris said, which was to include any of the workgroup contributors, so we put that verbiage in there just as an FYI. I will be reaching out to all of the workgroup leads over the next week or so to collect the names of the people they believe to be contributors.
In the doc that I just dropped into the chat window, there is a link in the middle that shows the master list of all the contributors, maintainers, and so on. We will keep updating that until August 9th at 5 p.m., which will be the cutoff. We will do a final review of the list on the TSC call on the morning of August 10th, then kick off the nomination process and ultimately the election thereafter, based on the timeline in this document. So over the coming weeks, please be sure to keep checking the list as we update it. If you do not see yourself on there and you should be, please reach out to either me or Dave Huseby, who is covering in Tracy Kurtz's absence. Any questions there?

Yeah, I have a comment. Are we going to go through this process every year, or is this now baked in in terms of the WG contributors? And should we start moving some of this, especially the material that we produce, onto GitHub or some other venue where it can easily be captured? Chris, I'll defer to you.

From my view, every year we'll review an election timeline and the rough process and call for any objections, to ensure that the TSC is happy with the election process, because that's really what the overarching charter dictates. In this case, the addition of the workgroup contributors was approved just for 2017. The TSC can choose to extend that indefinitely or keep it just for this year. That's really up to the TSC, not something that I or the Linux Foundation can dictate.

Todd, I thought, let me just go back to the thing, I thought that we had just amended the process.

On an ongoing basis?

Yeah.

Okay, then potentially that was my misunderstanding.

I will say that I had intended it to amend the process that we would use, which isn't actually written down anywhere, I guess, aside from what's in the charter.

Yep.
So maybe, Vipin, to your point, maybe the thing we should do is formally document this as part of the process.

Yep.

Because I agree, I don't think we should be going through this every year. So, Todd, I don't know if you want to just work with Brian and figure out the right way of documenting the process.

Okay. We can likely just add that to the wiki.

I think that would be fine if that's the answer. But just to be clear, I had intended this to sort of codify the process that we adopted last year, essentially. So thanks for bringing that up.

So if we could get the chairs to each pull together a list of their active contributors, and if you have the emails, that's great. Todd, I gather that you have the ability to get emails from the accounts, or no?

Yeah, in terms of the workgroup leads, yes, I have all their emails, so I'll be reaching out to them to call for their lists.

So when the workgroup leads each come up with their list, if they don't have emails, I'm assuming you can chase them down?

We'll do our best. That'll be part of reviewing the list over the coming weeks; if we don't have someone's, we'll call for it.

Okay. I do think it's important for the workgroup chairs each to do that in the next week, so that there is time for people to dispute the list if that's their intent. So, I don't want to call it a final list, but if we could have the list essentially staged, if you will, for the final vote by next week, that gives people a week to contest it if that's their desire.

So am I the one compiling the final list? Should you just have them all forward it to me? This is Dave.

That's up to Brian and Todd and you; it's not my call, Dave.

Okay. Well, I was just wondering if Tracy was the person who had been doing it.

Tracy was the one, yes.

Well, then it's me.

Oh, okay. So yeah.
I was going to say, as a working group chair, should I compile the list and send it to Dave, and copy my working group mailing list on it so people can get a heads up?

That seems reasonable to me, Mark, I think.

Yeah. I'll plan to post a master list of names, like first initial and last name, without email addresses, so that people can quickly check whether they're on the list. That way we don't expose emails.

Actually, that's a good point, Dave. Though maybe it's different for students at the universities, who might go by something like an ID or student number. But yes, I can do first initials and last names, and if people see themselves on the list, they know not to pursue it further.

Do we have a spreadsheet, then? Where is it? I've got to find it.

It's in the link that Todd put in the chat. If you go there, the list is in the middle of the process doc. But yeah, you raise a good point about the email addresses.

Okay. All right, I think we've dealt with that. So again, Vipin, thanks for bringing up the point about making this the ongoing process until we change it again. And then finally, Mark, you're up with the proposed charter, which I see has been linked in chat.

Yes, I had put it in, and I think Todd may have just done it as well. Excuse me. So do you want me to run through this?

Yeah, you don't have to read it, but just sort of highlight the deliverables and so forth, I think.

Okay. So this is actually a slight rework of the original proposal back in April in D.C. that tries to incorporate the feedback we got there when we first formed the group. The key part is that it's really a cross-project forum for anyone in the distributed ledger technology community; we heard back in April that we didn't want it limited to just Hyperledger projects.
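As an aside, the privacy-preserving roster described above ("first initial and last name, without email addresses") could be produced mechanically. The sketch below is purely illustrative; the record layout, names, and emails are hypothetical, not the actual contributor data or tooling:

```python
# Hypothetical sketch: render a contributor roster as "F. Lastname" entries,
# dropping email addresses so the public list exposes no contact details.
# The record format and sample names are invented for illustration.

def anonymize(contributors):
    """Return a sorted display list of 'F. Lastname' strings."""
    display = []
    for entry in contributors:
        first, _, last = entry["name"].partition(" ")
        # Email is intentionally discarded; only the abbreviated name is kept.
        display.append(f"{first[0]}. {last}")
    return sorted(display)

roster = [
    {"name": "Grace Hopper", "email": "grace@example.org"},
    {"name": "Alan Turing", "email": "alan@example.org"},
]
print(anonymize(roster))  # ['A. Turing', 'G. Hopper']
```

Contributors could then check the public list for themselves, while the email addresses stay with whoever compiles the master spreadsheet.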
And to that end, we're even in touch with the people who run the Enterprise Ethereum Alliance performance and scale group. They're not quite started yet, but they're happy to join, and I think they'll start when they actually have some code they can work on on their end.

The initial work will be a document, or more likely multiple documents, which define the terminology, so we are all on the same footing about what the terms mean for both performance and scalability, and then the attributes and metrics of different distributed ledgers and technologies. Eventually, well, this group is not chartered, according to this document, to go off and write a test suite; the charter is to be a working group that defines what a test suite might do. And then there will be at least one project spun up out of this to go off and actually write the code for a test suite.

So this is actually a very timely discussion, because I have in my inbox this morning a request suggesting that at least one member is getting ready to make a proposal for that, and I also happen to know that there are others in the wings, people who have been playing around with benchmarking frameworks and so forth. I'm wondering if there's been any discussion of how that might be handled. Should we have the work group look at these? Should we have one or multiple, and if there are multiples, should we try to figure out how to get to one? Any thoughts on that?

For me, well, we've looked at a couple that people have sent out on the email list. There are a couple of universities that have done work, and we've discussed the pros and cons of them.
One of the things I'm trying to get the working group working towards is identifying three or four key use cases with different characteristics, say financial versus asset management versus healthcare, that all look for different performance and scalability attributes, so we can make sure that the test suite, and what we define, will encompass everything. So on the one hand, I think it's a little premature to go with a test suite already if we haven't really defined what we want the test suite to do. On the other hand, I don't want to hold up implementation of a test suite, so I'm happy to work with a project; hopefully its people will be part of our Performance and Scale Working Group. What do other people out there think?

Some kind of framework will be needed, because you need a level playing field to compare different DLTs, or the same DLT running different applications for the use cases. I know that Brian, for example, last year had come up with a cloud platform that we could use, but I think it was a little premature given that most of the DLTs were under heavy development at the time. Now that more things have matured, I get the feeling that if we had a test net stood up, or something quite uniform, then we could run these kinds of tests in a way that compares the different protocols objectively.

That's an interesting point. I think the issue with having a standing test net to test against from a performance and scale perspective is that it would potentially yield, I always say this, unfavorable results. Because take something you'd deploy for Internet of Things, in the permissioned kind of context we are working towards here.
It might be configured very differently than something that was just transferring assets from one bank to the next. In one case you're looking at a fire hose, and in the other case you're looking at a constant barrage of events from everywhere, right? So I'm not sure that having a standing test environment is necessarily the right thing to be testing against. In many cases, how it's deployed and configured may differ by use case. I don't know if that's been discussed at all in the work group.

But then how are you going to publish metrics for any of these things? Go ahead, Dan. Sorry. Dan, go ahead.

Yeah, sorry. Can you hear me now? I think the point of this work group is to resolve exactly the kinds of questions you were just raising. And on the question of what the relationship should be between a project proposal in the performance space and the charter of the work group: for this work group to be relevant and meaningful, I think it should have a close bearing on which projects come through for performance and which don't.

So that's kind of what I was hinting at. We've already talked about that kind of relationship between the Architecture Working Group and the platforms. The Architecture Working Group is trying to look at the big problems and how the pieces fit together, and in that role we're providing feedback to all of the platform developers, and vice versa. It feels like that should be the same kind of relationship here.

Yeah, so maybe that should be called out in the scope of the work products: that this group is meant to play that role specifically, including helping evaluate the proposals we may get, which Chris hinted at.
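The configuration point made above, that an IoT deployment and a bank-to-bank asset transfer stress a ledger very differently, could be captured as per-use-case workload profiles rather than one standing test net. This is only a sketch; the use-case names, parameters, and all numbers are invented for illustration:

```python
# Hypothetical workload profiles: each use case implies a different load shape,
# so a benchmark harness would be parameterized by profile instead of running
# every DLT against a single fixed environment. All values are made up.

USE_CASE_PROFILES = {
    "financial":        {"tx_rate_per_sec": 2000,  "payload_bytes": 512,   "peers": 8},
    "asset_management": {"tx_rate_per_sec": 200,   "payload_bytes": 4096,  "peers": 16},
    "healthcare":       {"tx_rate_per_sec": 50,    "payload_bytes": 16384, "peers": 32},
    "iot":              {"tx_rate_per_sec": 10000, "payload_bytes": 64,    "peers": 100},
}

def expected_throughput_bytes(profile):
    """Rough bytes-per-second load the harness must sustain for a profile."""
    return profile["tx_rate_per_sec"] * profile["payload_bytes"]

for name, profile in sorted(USE_CASE_PROFILES.items()):
    print(f"{name}: {expected_throughput_bytes(profile)} B/s")
```

Note how the IoT profile is a fire hose of tiny transactions while the healthcare profile is few but large records, which is exactly why a single fixed test environment would favor some platforms over others.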
We tried to cover that in a vague sense in the last sentence of the work products section: the PSWG will also review and consult with other working groups and projects as appropriate. Would we want that to be a little more concrete?

Could you make sure Mark has a copy of the Architecture Working Group charter, just so he can see the wording that we used?

Yeah, I'll do that. I'll probably update the work products section to explicitly say that the work products should include recommendations for the projects in the group, both for review and for performance test suite recommendations.

Okay, I'll go off and look at that. Now, one thing we sort of danced around on this call, which we made sure to discuss on last Tuesday's working group call, is reviewing this with respect to governance. Think of SPEC or TPC: they're standards bodies with governance, so you can't publish a SPEC number without permission from SPEC; it has to get reviewed. The people who were on the working group call felt that that's not the business we want to be in. So I'll throw that out here; we didn't put anything in the charter about it. Do we feel that's the right approach, or do we want to somehow have a way to validate anyone's use of the test suite to publish a number, which implies a lot more work?

Brian is not on, I take it. That's probably something we, yeah, I think that publishing actual numbers from Hyperledger is probably something we want to avoid, either because we piss somebody off or because we get called on it, and so forth. So I can certainly see publishing "here are the kinds of measures and how we expect them to be measured" as being wholly appropriate, but actually publishing results, maybe not so much.
And it's probably something we want to talk about with Brian, and maybe the LF itself. I don't know; I'm not aware of any other groups in the LF that did anything like that. I don't know all of them, but yeah.

We decided it wasn't the business we wanted to be in.

Yeah, I agree with you. So let's not go looking for trouble.

Right, right. I don't know if you want to put that in here anywhere.

Yeah, I'll look at adding that, to be clear that this is not a governance body.

But then those are numbers without any kind of attestation from a third party or an objective standard, or... Oh, so I will tell you what we did. One of the groups I worked in in the past is the Web Services Interoperability organization. They weren't focused on performance and scalability and benchmarking; the organization was put in place to define a set of criteria that you could use to measure a given implementation, or set of implementations, of web services protocols, which would indicate whether or not you were likely to get interoperability. If it passed the test suite, then you could say, yes, this is interoperable, and vendors would try to do that. The thing we did, though, is we said the Web Services Interoperability organization was not going to say that IBM's or Microsoft's or Oracle's or anybody else's web services engine was or wasn't interoperable. Rather, they would put the test suite out there and say: you, the consumer, can judge for yourself. By putting the thing out there and letting people run the tests themselves and draw their own conclusions, it was felt we would avoid the potential that the organization itself would be held liable for libel. Right. Yeah, I get that; I get the feeling that you don't want to certify something.
And most software products, even products written by companies like Microsoft, do come with standard disclaimers. Yeah, it's not like, if I start trading with something I buy from a vendor and start losing huge quantities of money using that software, the vendor is liable. They'll stand by it to a certain degree, but they never certify it as free of bugs or anything like that. So we have to balance both views: obviously you're going to put a standard disclaimer there, and at the same time you're not just going to say, oh, XYZ has a protocol and they claim it's the most performant based on this run of the test suite. So either there has to be a way to record the results in a non-controversial way, maybe use a blockchain to capture them, I don't know, or anything that can actually capture the results with a digital signature, something that says, yes, this did happen and these are the results, instead of just relying on hearsay.

Right. But you are saying that, as a consumer, you're supposed to, you know, caveat emptor. I get that. But then what is the purpose of us producing all this tooling and test suites and all that other stuff? Maybe it can throw out some results as a result of running it, and that would be non-repudiable evidence. I don't know. Anyway, we can talk about it during the actual calls of the Performance and Scale Working Group. Right.
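The "capture the results with a digital signature" idea floated above could work roughly like this. The sketch uses an HMAC tag over a canonicalized results record as a stand-in for a real asymmetric signature or on-chain record; the key, field names, and numbers are all hypothetical:

```python
# Sketch of tamper-evident benchmark results: canonicalize the record as
# sorted-key JSON, then tag it with HMAC-SHA256. A production scheme would
# more likely use an asymmetric signature (or a ledger entry); this only
# illustrates the non-repudiation idea. All values are invented.
import hashlib
import hmac
import json

def sign_results(results: dict, key: bytes) -> str:
    """Return a hex tag binding the key holder to this exact results record."""
    payload = json.dumps(results, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_results(results: dict, key: bytes, tag: str) -> bool:
    """True only if the record is byte-for-byte what was originally tagged."""
    return hmac.compare_digest(sign_results(results, key), tag)

key = b"workgroup-demo-key"                      # hypothetical shared key
run = {"dlt": "example-ledger", "tps": 1200, "config": "4 peers, LAN"}
tag = sign_results(run, key)
assert verify_results(run, key, tag)             # untouched record verifies
assert not verify_results({**run, "tps": 9999}, key, tag)  # edits are caught
```

Anyone holding the tag can later prove which exact configuration and numbers were reported, which is the "instead of relying on hearsay" property raised in the discussion.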
And Hart made a good point here in the chat window: maybe there's something in the code license that says if you use this code to publish numbers, you have to include X amount of detail about how you did it and what changes you made to the code, things like that. At that point it becomes part of the software license, if you will.

But I do think the way you have it framed up now is good: the output of the work group leads to that project, and we don't need to define the extent of that project in order to approve the work group.

And I will accept your suggestion there. Whatever we have today is fine.

Okay, so you accepted the change. Okay. Todd, did we ever get to a quorum?

No, unfortunately, we're still two shy, unless Greg Morali or Sheehan have joined without signing in in the chat window. Yeah, no, we're still short.

All right. Well, Mark, I think it might be worthwhile, since I know you did go through this as a work group and vote on it and review it, to take the feedback we have here, do another pass, and then take it to email when you think you're done, next week as the case may be. Again, summers are tough for getting a quorum, so we may have to resort to the email trick. But thanks for bringing this forward. And unless there are any other topics people want to bring up, I think we're good. All right. Well, thanks, everybody. I'll give you all 25 minutes back. Thanks. Have a good day.