Hey, Chris. Hey, Chris. We have quorum at this point. Eight of the 11 TSC members are here, so we're ready to move forward when you are. Hello. Chris, are you there? Hi. Yeah, I'm here. Can you hear me? Yeah, we can hear you. So we have quorum at this point when you're ready to move forward. Yeah, I'm just getting hooked up. I couldn't hear any sound. Gotcha. Yes, we can. We can hear you, Chris. Technology sucks. Bluetooth never connects to what you want it to connect to. Okie dokie. Okay, so we have quorum. Is that right? Yes, we do. First step is action item review. That'll be followed by... is Bishop on the call? Yeah, I see Bishop on the call. Yes. Okay, I don't hear him, but hopefully he'll speak up when he's reviewing his proposal. So you've got a proposal for another project to be incubated. We'll have a hackathon sort of readout; we can spend a few minutes just talking about the hackathon and planning for the next one. And then we'll have workgroup updates at the end, unless there are any other agenda items people want to add. Let's keep going. So, TSC representation policy draft. That was me, and it's not done. So 20 lashes with a wet noodle for me. Dave, are you on the call, and how are you doing with the whitepaper? Hi, Chris. I'm here. Yeah, so the working group, we've been spending a lot of time. Everyone's working really hard. We've made some very good progress on the working draft. If you take a look at the working draft, you can see that we've committed a bunch of updates to it. And Chris, I know you had mentioned that there's a board meeting coming up Monday. And we did our best to get through the whole thing to where we felt it was a level of quality that's good enough for a draft one. But there are still a couple of sections that we wanted to spend a little bit more time on. And also we want to make sure that the TSC has a chance to review what we've put together before it goes in front of the board for feedback and comment. 
So again, we've got a lot of good progress in there. Our goal now is to actually finish that draft, those last final sections, and generate the PDF that we're going to call draft 1.0. We're going to do that by end of day on Wednesday. And hopefully everyone will get a chance to read through it. And maybe you can put us on the agenda for next Thursday to just spend a little bit of time talking about it, or if anyone wants to bring up any feedback in the group on what was put in there. But that's basically where we are. We got through a lot of the rationale. We removed a lot of the implementation details that were in the earlier draft. We're just putting in the final touches on a couple of other sections. And then hopefully everyone will be able to have a chance to take a look at it and start providing some feedback. Okay, thanks again, Dave. Next up was setting up the Sawtooth Lake repos, and I see that's been done. Thank you. Yep, Mr. Jones got us set up, so thanks for that. I think there are still a few things to do. I added the incubation notice in a pull request to all the repos. But it'd be, I think, good if we could take a pass through and just make sure that we clean up stuff that specifically sort of references Intel. I mean, obviously we're very thankful for the contribution, but I think it's important that we try and sort of make this about Hyperledger. Right, so if we can do that. Yeah, we can take a look through there. I think what he did is he just pulled all the repos over as-is from the Intel project. Should it be renamed Hyperledger Sawtooth? No, no, no. The name is fine. It's just that there were a few references to Intel in some of the READMEs and such that we probably want to point to the Hyperledger project and mailing lists and so forth. Yeah, I'll go take a look at your pull request, and we'll get somebody to look through and filter out the docs for things that aren't relevant in Hyperledger. 
And then the other thing that struck me is, I mean, you guys did a really nice job with the documentation, and it's out there under intelledger.github.io and so forth. And I'm wondering, well, I know that on the fabric project, we're also interested in using a similar approach of using GitHub Pages to publish the doc. So I think we just need to come up with a strategy that we can apply to all the Hyperledger docs consistently. I think that would be a good thing. Yeah, yeah, I think that would be good too. I think it's convenient for people, when they land on the GitHub site, that they can follow that auto-generated link to see all the documentation. So I'm pretty sure we can make a top-level docs repo or something that can point out to each incubated project. Yeah, yeah, I think that would be good. It's not on the critical path, certainly, but it's something I think we should look at. The next item up was to create the fabric API repo and annotate the README, and that was done. Thank you, Tamas. Exit criteria discussion. So we didn't really have a quorum of TSC members at the hackathon, so there wasn't really an opportunity for us to get together and talk about the exit criteria. But, Todd, maybe we just need to put up another doodle poll and see if we can't get a call going. Although the next couple of weeks are going to be crazy for me, because I'm conferencing at OSCON and then Cloud Foundry for the next two weeks. I'll coordinate schedules. Thanks. And just as an FYI, we had some discussion on the mailing list and also in Slack about creating a little bit more granularity around some of our mailing lists, so that we could have targeted discussions around Sawtooth Lake, around fabric, or just generalized technical discussions. And so we've created some new mailing lists, and I think Ry posted those to the various mailing lists and also to Slack. 
Ry, correct me if I'm wrong, I think hopefully I'll capture them all, but there's now a Hyperledger fabric list, a Hyperledger Sawtooth Lake list, a Hyperledger announce list. Oh God, what else am I missing? There's two others, right? Also just a general discuss channel for more broad topics, not as deeply technical as the technical discuss list. Anyway, so those lists are there for subscription. And again, the announce one is probably going to be the low-traffic one. So if people are interested in following sort of what's going on, but without the noise of everyday pull requests and so forth, then subscribe to announce, and that's where we should be posting information about major milestones or published releases and that sort of thing, backward compatibility issues and so forth. Okay, so that's enough of the action items. So next up is Bishop, and he's got a proposal to incubate a project that was designed to sort of exercise the Hyperledger fabric. Bishop, do you want to walk us through that? Yes, okay. I don't have screen sharing enabled, so I'm not sure what anyone is seeing. A few months ago inside IBM, I guess for my own work actually, I started a project of creating what internally in IBM we call exercisers for the Hyperledger fabric. An exerciser is an application or a set of applications that exercises the behavior of something but doesn't really represent a real end application. And we've been using this internally both for correctness checking of the Hyperledger fabric as well as for benchmarking and performance characterization, and, you know, characterization between different systems. A lot of people found it useful, so I asked internally for permission to release the code, and Chris suggested that I make a proposal to actually include it in some way as part of the Hyperledger or Hyperledger fabric repository. 
So I created the proposal document that's out there. There's been one comment on it, just a kind of a general comment. The basic thing that I'm proposing to add is, as I said in the document, really kind of a philosophy, which currently has one executable implementation of that philosophy. And the philosophy is that a good way to test these complex systems with complex protocols is to create comprehensive, highly randomized, self-checking exercisers, or irritators as we sometimes call them as well. And so this busywork proposal currently has one of those implemented. It has one chaincode with a driver script, but it's highly configurable. It's designed to allow you to test and stretch the specification in various ways: the number of clients that are driving the network, the sizes of the transactions, the sizes of the data that's being manipulated, and so on. And so I would just like to contribute this and continue to work on it, in the hope that people will find it useful. For example, I think some of these tests would be very good to include as performance regressions in the continuous integration infrastructure, so that we would catch pull requests coming in that are somehow impacting performance. The tests so far have also helped uncover a lot of issues, not just bugs, but really more even specification issues of what the Hyperledger fabric is supposed to do, based on the understanding of the exerciser's authors versus what we actually observe. In the current implementation, the chaincodes are in Go and the drivers are in Tcl; that just happens to be my personal favorite programming language. But there's no reason why people could not add drivers written in other languages, or chaincodes written in other languages. 
I think to me the most important thing is just, again, the philosophy: a place for like-minded people to put these randomized exercisers to help drive the correctness and performance of the Hyperledger fabric. I guess that's the end of my remarks. Thanks. Just curious, could you maybe relate it to something like LoadRunner, or maybe talk a little bit more about how you would create a workload to drive through there? So I'm not familiar with LoadRunner. How do you create a workload? The process is something like this. You think of some feature or group of features that you want to test. Let's say, for example, the next one that might come up would be something like chaincodes that verify the signatures of the people wanting to run them. The idea would be to create a chaincode that, in as many different ways as possible, could exercise that feature of checking signatures, and then create a driver to drive it in, again, as many randomized ways as possible, to try to check and hopefully hit all the corners of those features. Okay, I'm just trying to make sure I get a right sense of how you see busywork being used. I'm getting one aspect which sounds a little bit like a kind of code coverage thing, and it also sounded like it could be used to do load generation. Is it both of those things, or more one of them? It's both of those things. The exercisers that exist currently pretty much run flat out, so they generate a constant steady load. However, they can be configured to pace the load, and actually there are parameters in there that allow you to pace the load. For example, you can either run transactions flat out, or you can have the driver create a burst of, say, 100 transactions, wait for them all to be committed, and then create another burst. And that's all configurable. 
I think in terms of benchmarks, like standard benchmarks, artificial applications may not be the best for standard benchmarks. And similarly, a benchmark that is like a real blockchain application probably isn't going to have the same kind of code coverage that one of these randomized exercisers would have. So they can complement each other. Maybe this is more like a microbenchmark or something like that. Okay, and right now, would it be correct to think of it as generating parallel load, like you launch a bunch of independent sockets that are simultaneously generating load, or is it more of a single-threaded kind of driver? It has a multi-threaded driver. You specify the number of clients, and they all run in parallel, exercising. And again, the self-checking is a really critical part of it. In what's implemented here now, for every transaction, the chaincode itself understands its context, what context it expects. So it checks that every transaction has been executed in the correct context. At the end of the run, it checks that all the transactions got committed. It checks that all the blockchains are identical. At the end of the run, it checks there are no duplicates. So anything I can think of to check, I'm checking in there as well. Right now, because it's easiest to set up these networks with Docker Compose, everything is running on one system. But it would be a very small modification to make it such that the network and the driver system could be different, or even that the drivers could be running on multiple systems if that was helpful. Sounds good. Thanks. Thank you. Any other questions for Bishop? I see a question online: can this be ported to exercise Sawtooth Lake? I want to say no, but not in a hard sense. What's there now is really an infrastructure for the Hyperledger fabric. So it has the idea of a chaincode in the way that the Hyperledger fabric has, and the way you deploy chaincodes and invoke them and so on. 
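The driver pattern Bishop describes, multiple parallel clients, optional burst pacing, and self-checks on context, commit, and duplicates, can be sketched roughly as follows. This is a hypothetical Python illustration of the pattern only, not the actual busywork code (which uses Go chaincodes and Tcl drivers); the `FakeLedger` stand-in and all names here are invented for the sketch.

```python
import threading

class FakeLedger:
    """Stand-in for a fabric network: records submissions and 'commits' them."""
    def __init__(self):
        self.lock = threading.Lock()
        self.committed = []

    def submit(self, context):
        # Each transaction carries the context it is expected to run in.
        with self.lock:
            txid = (len(self.committed), context)
            self.committed.append(txid)
        return txid

    def await_commit(self, txid):
        return txid  # instantly committed in this stand-in

def run_client(client_id, ledger, bursts, burst_size, results):
    seen = []
    for burst in range(bursts):
        # Submit a burst of transactions tagged with the (client, burst,
        # sequence) context the chaincode is expected to echo back.
        txids = [ledger.submit((client_id, burst, seq))
                 for seq in range(burst_size)]
        # Pacing: wait for the whole burst to commit before the next one.
        for txid in txids:
            committed = ledger.await_commit(txid)
            assert committed == txid  # self-check: executed in right context
            seen.append(committed)
    results[client_id] = seen

def drive(clients=4, bursts=3, burst_size=5):
    ledger, results = FakeLedger(), {}
    threads = [threading.Thread(target=run_client,
                                args=(c, ledger, bursts, burst_size, results))
               for c in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # End-of-run self-checks: every transaction committed, no duplicates.
    all_committed = [tx for txs in results.values() for tx in txs]
    assert len(all_committed) == clients * bursts * burst_size
    assert len(set(all_committed)) == len(all_committed)
    return len(all_committed)
```

The key design point is that the checks live in the workload itself, so any run, whether used for coverage or for load generation, doubles as a correctness test.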
So the idea, I'm sure, is portable, but probably none of the code would be portable unless Sawtooth Lake had those same kinds of concepts. Ben or Sheehan or Greg, are you on the call? Hopefully you can hear me on the IP network here. I agree with the analysis there from Bishop. Using this has been very beneficial to the fabric; it's been able to drive out a number of scenarios and discover a number of difficult bugs. So either this or something similar to it is really necessary for us to have a quality code base. So I would support this project. Thanks. Any other? Bishop, I have a question. This goes back to your comment about code versus concept. Do you have a separate document that describes conceptually what you're trying to test? Again, I think the question that Dan was asking is: is this really about test coverage, or is it about a canonical workload that would allow us to begin sort of unifying the two fabrics and identifying commonalities? Well, I wasn't conceiving it as the latter, as a unifying thing. It is more about test coverage and specification elucidation of the Hyperledger fabric. In terms of documents, I think the proposal document is probably the highest-level document that describes what it's trying to do. The existing code is documented, but it's more user documentation: how you run it, how the chaincodes work, things like that. There's an online question: are results from busywork runs against fabric available anywhere? There are some issues that include these runs. Issue 996 is one; you'll see a lot of charts in there from data that was collected by busywork looking at the performance of the security infrastructure, the certificate infrastructure, of the Hyperledger fabric. For a good list of common performance and review criteria, where would that list be put, do you think? What does that mean exactly? 
I would just like to see, you mentioned a whole bunch of things that could be generalized to apply to any blockchain. If you had in the Wiki, here are 30 things that I tested in a generic way, that might be applicable for making sure that somebody who does, say, a Sawtooth Lake list might go: we've got 25 of those, but those five that you listed there we didn't think about, so we can add them in. One of the discussions in the architecture group is around the different performance characteristics of different consensus algorithms. At some point, we're going to need to start talking about IntelLedger, what the default consensus mechanism is for these sizes of federations, and we've tested it with 8, we've tested it with 12, and here's what the performance is for those, stuff like that. So, Christopher, I don't disagree, and this is open source, so this isn't the end of it, this is the beginning of it. If it needs to evolve to accommodate a certain set of things, we can do that. No, I'm just talking about, I mean, that's longer term. What I'm hoping to hear is that we can just get into the Wiki the 30 categories of things that he is testing in the fabric. That would just be really useful to make sure that when we create similar tools for other incubated things, we do the same. Any other questions? Is there any disagreement with adding this to the fabric? I have a question. Will this be added as a top-level Hyperledger project or as part of the fabric? I believe that it's going to be integrated into the fabric. Bishop, correct me if I'm wrong, is that the plan? That's up to you and Ben and Sheehan. Obviously, if it could be used to test every implementation of the chain, including Sawtooth Lake and other potential future proposals, then obviously it's top level. If not, then it probably should be part of the fabric. 
I think, as Bishop said, most of the code is not necessarily generalizable to others; it's specific to how the fabric works. But that isn't to say that we couldn't take some of the design aspects of it and develop something that's intended to be more general purpose. Once it's in, we can fork it into a separate repo and work on it independently. That makes sense. Once again, any objections to incubating this in the fabric? I'm hearing none. I guess that means it's approved. We can record that. Thank you. Bishop, why don't you work with Ben, Sheehan and Greg and see about getting that integrated. Okay, great. Thank you. Next up is the hackathon readouts and feedback. I'll just start with: I thought that once again it was a good opportunity for everybody to get together. Todd, I'm not sure of the actual turnout, whether we had a recorded number of people in attendance, but it seemed like about 60 again. Yeah, I was going to say about 65, a really strong turnout. 65, great. So we had really good turnout. There was, I think, lots of good collaboration. The guys from ConsenSys and BlockApps came, and we were talking about integrating the EVM into Sawtooth Lake and into the fabric. So there was some, I think, really, really good collaboration going on there. Discussion about different approaches that might be taken and so forth, and just, again, sharing what each of these things is all about. I think that's an important step to getting better understanding across the different projects. So that was great. And then we had workgroup meetings for identity and architecture, and I'll leave it to Chris and to Ram to sort of give an update coming out of that. But it seemed like, pardon me, it seemed like those workgroup meetings went really well also. 
And then we had some ongoing work, I know, between Sheehan and Tamas on doing sort of the last leg of bringing over the code from Digital Asset and getting those repos set up. There's, I think, still a little bit of work to do, but it's getting there. I think the last leg is getting the continuous integration stuff integrated, but we're really waiting for Jenkins on that. And then there was a table of people working on their first pull request and fixing bugs or writing chaincode, and Murali from IBM was with that group. And there seemed to be a lot of good progress made there as well. And certainly we had a bump in the number of contributors, so that's a good thing. And there was discussion about a possible project which I think is going to be brought forward in the next week or so. So I think all in all, from my perspective, it was great. We had t-shirts, yay, which I think were very well received. And so thank you to Todd and the team for pulling that together. And I'll leave it next to Chris, and he can talk about the identity work group meetings that we had on Thursday. Chris? Thank you. So basically on Thursday we had the face-to-face for the identity working group. And I think we had about 40, 43 people, including some people who came just for the identity working group discussion. I was very pleased with the results. We have captured them in the Wiki; the raw notes for the sessions are posted off of that particular Wiki page. We've decided to sort of divide our future discussions into six or seven broad areas. One is commons and principles. Another is federation, the nature of federated identity as opposed to individual identity, and the issues there. Fiduciary code and signing is the third category. Dealing with identity failures, everything from individual key loss to what happens with different kinds of auditing and other things, is the fourth. 
The fifth is a little bit more depth on confidentiality and privacy, in particular around things like selective disclosure and confidential transactions and how those apply to privacy. The sixth category is how we interact with existing identity systems, the legacy problem. And then finally there was some further discussion about a special category around envisioning the future a little bit more broadly. So the plan basically is to try to prioritize some of these topics, get them into the Wiki, and move things forward. To that end, as an example, we asked the identity team at IBM who did membership services to share with us a little bit more about their approach to the membership services. They did a heroic job in like a couple of hours to put together a little presentation for us, and they presented. The goal was not so much that that would be the final presentation, because it was a rush, but it did allow us to ask some questions and other things. And the plan is that at the next identity meeting, which is next Wednesday, they will present it again, and that one we will share more broadly, about how identity is accomplished using membership services. So that's it for the identity working group. Alright, thanks Chris. Ram, you on? Hi, can you hear me? Thanks Chris, yes. So we had a very good session of the architecture work group on Friday AM. We had quite a bit of turnout in person in the room, and luckily we were able to arrange a remote call-in as well, so we had a few folks join over the phone. And we made good progress in discussing the consensus module layer, if you will, both in terms of the functionality of the consensus layer as well as the interface between that and the other functions in the architecture. So we talked about two different families of consensus protocol, and our ideal goal is to kind of have an interface that will allow both of those. 
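For the two consensus families under discussion, explicit quorum voting versus lottery-style selection, a unified interface might look something like the sketch below. This is purely a hypothetical Python illustration of the idea; the class names, the 2f+1 quorum rule as stated, and the hash-based stand-in for a PoET-style wait-time draw are all my assumptions, not anything the work group has specified.

```python
from abc import ABC, abstractmethod

class Consensus(ABC):
    """A hypothetical unified interface both families could sit behind."""
    @abstractmethod
    def propose(self, peer_id, block): ...
    @abstractmethod
    def decide(self, votes): ...

class QuorumVoting(Consensus):
    """Explicit-voting (PBFT-style) family: commit on a 2f+1 quorum."""
    def __init__(self, n, f):
        self.n, self.f = n, f  # n peers, tolerating f faulty ones

    def propose(self, peer_id, block):
        return (peer_id, block)  # every peer casts an explicit vote

    def decide(self, votes):
        blocks = [b for _, b in votes]
        for b in set(blocks):
            if blocks.count(b) >= 2 * self.f + 1:
                return b  # quorum reached
        return None  # not enough matching votes yet

class Lottery(Consensus):
    """Implicit-voting (PoET/proof-of-work style) family: shortest draw wins."""
    def propose(self, peer_id, block):
        # Stand-in for an elapsed-time draw; real PoET uses trusted hardware.
        wait = hash((peer_id, block)) % 1000
        return (wait, peer_id, block)

    def decide(self, votes):
        _, _, block = min(votes)  # the peer with the shortest wait publishes
        return block
```

The point of the abstraction is the one Ram raises: if both families can sit behind one `propose`/`decide` shape, the rest of the stack, including the smart contract layer, can stay isolated from which family is plugged in.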
The notes from the meeting are posted in the Google Docs, and they're in the Slack channel as well if anyone is interested. They're kind of raw notes, but hopefully the community can help clean them up and make them a little better. So the two consensus families, if you will, that we talked about: one is the explicit-voting BFT, the PBFT-style family with quorum voting, and the other is the implicit voting, which is more like a lottery. So that's the proof-of-work style and the Intel PoET style consensus protocols and algorithms, if you will. And so the goal was to see whether we can accommodate both of those with a unified API and a unified functional description. Some of the outstanding issues that we want to address in the next meeting: we had a very healthy discussion, but we didn't quite converge on some issues, like whether the consensus is actually going to lead to a consistent state of the ledger in terms of the results, because there's a dependency on the smart contract layer being deterministic. So then it becomes a question of what the dependency is between the consensus layer and the smart contract layer. Ideally we would want to isolate those to the extent possible. So we have a small group of volunteers who are going to flesh out the details of the functional description of the consensus module, whether it needs to include some dependency on the smart contract layer, and what those dependencies need to be. They decided to work offline and come back with a proposal that we will look at in our next meeting, which will be about two weeks from now, Wednesday the 25th. The only thing I'd like to add is that Vitalik did not show up for our main session, but he did come after, and a smaller subset of us had a good discussion with him. 
And he said he would volunteer some of the Ethereum folks to come participate in our architecture discussions. So we're looking forward to having their point of view as well, especially on some other topics like the relay, because one of the other issues that we have lined up for the architecture work group is interworking between ledgers. And it will be interesting to get their input as well, starting with interworking with Ethereum, with the same model we would use for any interworking between ledgers. That's pretty much all I had. Thanks, Ram. So any other thoughts from anyone else that attended the hackathon? Yeah, I wasn't able to make it there myself, this is Dan, but we were able to send a few of our developers, and they had a really productive time there. Thanks to the ConsenSys guys for putting in some thought to help get a form of integration between a couple of blockchains that way. So what they hacked up was some communication between an Ethereum chain and a Sawtooth chain. And in order to keep things constrained for the hackathon, what they did was just a little game. It allows you to guess the value at an Ethereum address, and then it's able to go out and validate that against the Ethereum chain. So that code is in a repo right now that we plan to get pushed up into the Sawtooth stuff over in the Hyperledger GitHub. Cool. Thanks, Dan. Any other thoughts on the hackathon? I mean, one of the things that I'd like to get a sense of from people is, personally I think they're worthwhile. I think it's important whenever you start something new that people get an opportunity to get together, meet face to face, have a beer in the evening, and get beyond the fact that some of us are fierce competitors, because in the context of this work here we're, I think, trying to do our best to collaborate and put a lot of those kinds of things aside. So I think that that is an important part of it. 
And so I think I might like to see a little bit more hacking and a little bit less of just the talking, but they're both important. But what do people think? Because the thinking that I had was we would do this for three, four months and then sort of take another think. But I'd like to get a sense from people, and again, one of the things that I'm a little bit concerned about is that the full TSC isn't actually attending these, which I think is something that we should be trying to do. But what do people think? Are these valuable, that we should continue? Because I think the next item was going to be Todd potentially proposing that we have another one in June, probably on the West Coast; since we've had two in New York City, we'd probably have one around San Francisco. But what do people think? Is this a valuable thing? Is it a waste of time? What could we do to make them better? Hey Chris, this is Murali from DTCC. So I would say these sessions are really helpful. And I think, as it says on the Hyperledger main pages, part of these sessions, having these workgroups, is great. But the other part, like you said, is that it's the code that should speak, and I think DTCC believes in that, and I would like to thank IBM and all the folks there who are more than willing to help us get up to speed. So from our perspective, these sessions are immensely helpful to connect with the folks who are part of this Hyperledger fabric. And other than participating in these workgroup sessions, we look forward to contributing code. And like it says, let the code speak. So I would say immensely helpful, and we should have this going on a monthly or a bimonthly basis. Okay, thank you. Any other thoughts? Can we do it on the West Coast next time? Yeah, I think it'll be the West Coast, the next one. Todd, I think we had a couple of offers out there, but we'll talk about that in a moment. So Chris, I can speak from my experience in the past three sessions, from IBM. 
Going there, we spent a lot of time on fabric, but meetings like this really opened my eyes to look at other technologies from other code bases. Especially in the little group that I participated in, a couple of tables that we had there involving folks from Ethereum and the Sawtooth Intel folks, we talked to them about potential integration or collaboration, and there was self-discovery that we also need to do things a little bit differently in order to allow this kind of integration to happen easily, because today it's quite difficult. And also talking to folks, especially Shawn from the Sawtooth project, we quickly realized that protocol is what we need to start from. Even though we start from two different projects, the messages that we send between components, for example to create a transaction, are quite similar. So perhaps we should start from that and document that for the Hyperledger project. And who knows, maybe we could propose that as a blockchain standard messaging protocol. Whether it's a working group or a subgroup, I have some feedback from Chris Allen and from Tamas that it should be a subgroup of the architecture group. That is fine. I just want to gather some folks to start hammering on this and see if we can come together with a common protocol between different projects. So I think it's very, very helpful for me personally. Thanks, Ben. Other thoughts? Hello, this is Primrose speaking. Hi, Primrose. Hi. I know we tried once, I think the very first time, to get some remote access working and it didn't really work out, but it might be good to keep trying to see what's actually possible, because it's not possible for most of us to turn up. So if it's going to be every month, you're actually going to start missing out on a lot. Yeah. So remote participation, it would be a little bit awkward, because, I mean, I suppose we could try it for some of the workgroup meetings. 
But the way that these things have been configured, it's all in one big room. It's kind of noisy, and we did have a breakout room this time, so that might have worked out, but in the past it was just one big room, and there's a lot of socialization going on in addition to people sort of working heads down, in addition to whatever workgroup meetings were going on. So I'm not sure that it necessarily lent itself to remote participation. We can certainly find ways of maybe doing a better job of leveraging Slack to keep a running commentary about what the discussion topics are and to have sort of the back-channel discussion on Slack or on IRC; a lot of times people use IRC for this. Another tool that people use is Etherpad. In OpenStack, the Etherpads enable people to sort of participate remotely. But again, we can certainly try; I'm just not sure that it necessarily lends itself to that kind of remote participation. Chris? Yeah. This is Chris Allen. I've had some good success in the past with one- or two-day, and I would suggest one day to start, remote hackathons. You pick a day, you deliberately do not make it face-to-face, but you leave, you know, a hangout or some kind of easier, where-everybody-can-host type of video, and everybody just sort of commits: hey, we're going to spend the day, from Europe to Japan, online, trying to help each other, trying out different things, asking questions, and knowing that at any point you can basically pop into the hangout. Unfortunately, WebEx and some of those tools don't work great for it. It needs to be something that's a little bit more open video, but I've had good results with it. Yeah. That might not be a bad idea to try out. We could certainly think about that. I know there's been a lot of travel, and this is conference season. 
We might — you know, maybe we could do one of those or something like that, even more near-term. Other thoughts? Good idea, Chris. It would perhaps be beneficial to have a regular schedule for these face-to-face meetings, so that people who are traveling to these meetings from far abroad can plan accordingly. Yeah, if we had a September one picked out now, then we might be able to get some of the international people, or something of that nature. Whereas — I mean, I like the idea of a June West Coast one; it'd certainly be super easy for me. On the other hand, I'm looking at my June schedule and going, oh my gosh, it's 80% likely you're going to collide with one of my other commitments. So the farther out you can plan these, the better. I think everybody agrees with that. Well, the first one was obvious, and then with the second one we sort of said, oh my god, we have to schedule another one — and they creep up pretty fast. We could think about doing the next one maybe in July, and doing a virtual one sometime in between, or maybe multiple virtual ones. I kind of like that idea, and then that gives people a little bit more time to plan. Although July, again, is summer vacation time, and people in Europe are probably all going to Spain or wherever they don't live, and other people in the States will have the Fourth and so forth. So I agree, Chris — I mean, I know my schedule in June is ridiculous. So maybe we should do that: maybe we should have the next one be virtual, and then we can plan something in San Francisco for maybe July. There are some other things, but maybe we should put up two Doodle polls, Todd — one for the virtual, you know, pick a couple of dates, and maybe we can have it just be a day initially, or we can hack around the clock or something like that, I don't know what we want to call it — and then another one for potential dates for July in San Francisco. Sounds good.
The plan. All right. Well, thanks, everyone. All right, so we're in Miami. So, requirements — next up are the workgroup updates. We've heard from Ron, we've heard from Christopher, and from Dave as well, in terms of the status of those particular efforts. As many of you know, we have a retirement in our community: Patrick has retired from Intel after thirty-some years, and it sounds like he had a great career, and now he's getting some well-deserved R&R. So obviously I'd like to personally thank Patrick in absentia, and Intel, for his leadership and his efforts in getting the requirements and use cases work up off the ground and to a good start. But obviously, with his departure we are a little bit rudderless in the requirements workgroup. So I was starting to look around for who might be willing to step in and pick up where Patrick left off, and Oleg approached me at the hackathon with a lot of great ideas about how we might re-energize and refocus the requirements group and get it moving along to the point where we could actually bring some requirements forward to the TSC for review. So, Oleg, I'm giving you an opportunity to make the case for some of the ideas you have. Yes, thank you, Chris. Thank you. I'm Oleg Abrashitov; I'm with Altoros. Some of you may know me from the face-to-face meetings and the hackathons. The ideas are rather simple. The challenge that we have, first of all, is the depth to which the use cases have been developed so far. We have 33 use cases in our catalog, and only maybe two or three of them have been developed to the template that we have. Most of them have just been penciled in, maybe just a paragraph of text. So in my view, we need to really own the use cases and work them into the template. We need to normalize them.
So I've taken the initiative, and I will go through the use cases which I understand fully and make them conform to the template. For those use cases that are clearly owned by someone else, I will persuade the original authors to rewrite them, so that at the end we have a collection of use cases that all conform to the same template — so that we can compare them side by side, so that they're normalized, so that we can see common patterns in them. So that's the first idea: to own the use cases. Then there's the scope of the use cases that we have. Right now we have only financial and non-financial, and in my view entire industries have not been covered, like shipping or insurance, or even other things like compliance. So the idea that I have is to reach out to the wider audience of project members. We have over 1,000 people on Slack. I ran a script over their emails, and there are about 130 unique domain names; if you analyze the domain names, you'll find very interesting newcomers — people from shipping, a marine classification company. So we can reach out to them to solicit new use cases. I put up a Google Form so that we can circulate it to the wider community and solicit use cases from them. So to widen the scope of the use cases and reach out — that's my second idea. And the third idea: in my view, there should be some kind of KPI for the group, in the form of maybe a monthly report and, most importantly, a feedback mechanism to the architecture group. The requirements need to constantly feed the architecture and identity groups. So I will work with the architecture group as to the form of that report, and I will work within the requirements group as well. So that's my third idea: to have some kind of constant feedback in the form of a formal report — these are the new use cases that we've discovered, these are the common patterns that we see, and let's see if they challenge your architectural decisions. So these ideas, in my view, are obvious.
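The email-domain analysis Oleg describes could look something like the sketch below. The member addresses here are invented for illustration; a real run would read the actual member list, e.g. from a Slack export.

```python
# Count unique email domains in a member list to see which
# organizations are represented. Sample addresses are made up.
from collections import Counter

def unique_domains(emails):
    """Return a Counter mapping lowercased domains to member counts,
    skipping entries that are not valid-looking email addresses."""
    domains = Counter()
    for address in emails:
        if "@" not in address:
            continue  # skip malformed entries
        domain = address.rsplit("@", 1)[1].strip().lower()
        if domain:
            domains[domain] += 1
    return domains

members = [
    "alice@example-bank.com",
    "bob@example-shipping.org",
    "carol@example-bank.com",
    "not-an-email",
]
counts = unique_domains(members)
print(len(counts))             # → 2 unique domains
print(counts.most_common(1))   # → [('example-bank.com', 2)]
```

Sorting the result by count, or just scanning the unique domains, is what surfaces the "interesting newcomers" he mentions.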
I discussed them with some of the key members of the project, so within the group I have general agreement. They just need to be implemented. So I will take care of normalizing the use cases and rewriting them to the template, working with other group members, and I will ask for everybody's help and support in getting new use cases from the wider community. So that's it from us. At the last meeting, Christopher Allen also walked through his great contribution to the template regarding the technical requirements, so we discussed that. And that's the state of where we are with the requirements group. We did also talk a little bit about the life cycle of use cases. We basically have three levels. Level one, so as not to be intimidating, is a relatively small set of questions we're asking people to submit; Oleg has created a Google Form to make it easier for people to submit that first level. And then we have an exit criterion for that: those have to be complete and normalized to move to the next phase, which is to really fill out the requirements and such. And in that phase — I mean, what we've begun to discover, and part of the challenge for the requirements group, and this came up at the face-to-face, is that some people have told me, well, we have use cases, but we can't share them with you because we're under NDA with our clients. So we're having some difficulty in getting some of the real nitty-gritty details that we need. So the suggestion is that each month we'll submit two or three that are at a relatively complete stage two to the TSC, for them to reach out to the members and their teams to comment on and add to. But we really need to prioritize a few of these — say, okay, here are two use cases, TSC and board: work with your organizations to help us finish these, to get them to level three. That's it. I've finished.
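The three-level life cycle sketched above could be modeled, purely illustratively, as a tiny promotion function. The level names and exit criteria here are guesses at the process discussed on the call, not an agreed workgroup definition.

```python
# Toy model of the use-case life cycle: level 1 = submitted via the
# short form, level 2 = normalized to the template, level 3 = full
# requirements filled in with member-organization help.
def advance(use_case):
    """Promote a use case one level if it meets the exit criteria."""
    level = use_case["level"]
    if level == 1 and use_case.get("normalized_to_template"):
        # Exit criterion for level 1: complete and normalized.
        use_case["level"] = 2
    elif level == 2 and use_case.get("requirements_complete"):
        # Exit criterion for level 2: nitty-gritty details filled in.
        use_case["level"] = 3
    return use_case

uc = {"name": "trade finance", "level": 1, "normalized_to_template": True}
advance(uc)
print(uc["level"])  # → 2; stays at 2 until requirements_complete is set
```

The useful property of framing it this way is that each promotion has an explicit, checkable gate, which is exactly what the exit-criteria discussion is about.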
Sorry — I was going for about five minutes there. Anyway, I was just going to say, thank you both. It sounds like you had some good discussion at the hackathon, and, you know, this is an open source project after all, and one of the tenets of open source is that if you have an itch, scratch it. Chris came forward with the idea of the identity workgroup, and I said, cool, let's run with that, right? Patrick came forward with the idea for a requirements and use cases working group, and he did a great job. And Rob came forward from an architecture perspective — same thing. If somebody has an interest in driving something, I think, as long as it's within the scope of what we're all about here, that's a good thing. And I always look for people who are self-starters and self-motivators in my day-to-day job — that's the kind of individual I like to work with. So from my perspective, I think you've got some good ideas here to move this forward. Unless anybody has any objections, I'd be happy to have you lead the requirements group. Thank you. To put that more formally: does anybody have any objections? I'm seeing a lot of plus-ones in the chat here, but I'm not hearing any objections. So, Oleg, I think the reins are yours, and we look forward to hearing from you on the TSC calls going forward. Thank you. Thank you for your support. Okay. So, finally, there's the CI workgroup, which really isn't a workgroup yet because we still don't have CI. I hear from a little birdie that Jenkins and Gerrit are right around the corner. I've actually seen the Gerrit server up. We have a little bit of transition planning for Gerrit — that's a little bit more involved, and we probably need a little bit of training.
And then Jenkins, I understand, is also something the infrastructure guys will be working on and getting to us very soon. And then we'll be in full transition mode to using Jenkins. That would cover both Sawtooth Lake and migrating the fabric from Travis to Jenkins — the porting of the scripts to Jenkins. Pardon me. And then also the Fabric API that Mosh and his team had contributed; they have a bunch of Jenkins CI for that, so it's a matter of pulling all that together. So once we have the Jenkins server up and running, I think we can probably get into full swing with integration across the landscape, so that we can start working towards a consistent set of tooling. That's about it for that. I don't know, Todd, if you have any other updates on the infrastructure. No, that's really it right now. Okay. So that's about all I have from an agenda perspective, unless there are other thoughts. We can give people 20 minutes back. Great. Well, then thanks, everyone, and we'll talk to you all next week, if not in Slack and on the mailing list before then. Thanks, everyone. Thanks, Chris. Thank you. Thanks, Chris. Thanks, everyone.