We have a fairly light agenda today. I think we have one proposal, and Todd, you said we weren't going to vote — I think this is just the initial presentation of the project reporting proposal. Yeah, I believe so. Tracy, were you looking to bring this to a vote today, or is this more just discussion at this point? I think just discussion at this point — just to intro it. Yeah. And Tamas has just joined, so we are actually at quorum now. Correct. Well, okay — if everybody is comfortable, we can also vote on it then. Great.

So on the agenda today we have Hackfest planning and a reminder of what's happening next week, since some of us are about to jump on planes, trains, and automobiles to get out there. I want to talk briefly about the Hyperledger Fabric team's drive to 1.0. Tracy is going to present a proposal for project reporting and the template that she's cooked up. And then Dave is going to regale us with where we stand on plans for a Hyperledger-sponsored security audit and pen testing process that all the projects can use as they get ready to do a major release. Are there any other topics for the agenda? Okay. If not, Todd, do you want to take it?

Sure thing. The Hackfest is going well. It will be Monday and Tuesday next week in Beijing, and we have almost 150 registered now, so it's the most popular Hackfest to date — really excited to see the momentum there. If you are planning to attend, please register as soon as possible. That was in the agenda that went out, and there is also a wiki — it looks like Baohua has posted the link into the chat window. We will run this in the usual unconference format: Brian will get things kicked off on Monday with some discussion, and then we'll have whiteboards and post-its to call for topics that people want to either present on or just discuss generally, and we'll lay that out over the course of the two days. Any questions there before we talk about future Hackfests? We're looking forward to seeing everyone there.

On future Hackfests, a lot of people have already responded regarding the upcoming U.S. Hackfest, looking at August or September. Right now it's looking more likely to be September, but please go fill out the Doodle poll I just dropped into the chat window. We'd like to finalize this pretty quickly so that folks can book travel around it. Looking beyond that, last week we floated the idea of Europe in an October-ish timeframe. I'm interested in any thoughts around that; otherwise we can put out a separate Doodle poll to try to home in on something there.

If the U.S. one shifts into September, will the European one move out from October, or will it stay, to keep more of a two-month cadence? I think we're flexible — we can look at October and November and see what works for folks; whatever works best for this community is what we'll do. Okay, because I'm not opposed to Oktoberfest in Germany. I mean, October in Germany. It's actually in September, though — I don't know why they call it Oktoberfest when it happens in September.

Okay, anything else, Todd? Do we know if there are any prospective sponsors? We have chatted with a few companies that could potentially host. There are actually a couple in Chicago that are interested in hosting, and we could also look at New York or Boston as well.
So for any of the folks on the call, if you have office space that could accommodate a Hackfest, please get in touch with me as soon as possible. How many are we expecting? For the U.S. one, I would say probably 80 or 90, something like that. Okay. That's all from me. All right. Excellent. Thanks, Todd.

Okay, next up is Hyperledger Fabric 1.0 and the drive to getting it done. Just as a brief update: we cut our beta release last week. We had intended to do weekly cadence releases, but we had a discussion yesterday afternoon and felt we weren't quite there with the timing for another release — there wasn't quite enough in it. We were hoping to get some of the security bugs fixed in time, get some of the documentation improved, and get the new process for publishing the binaries and sample apps sorted out, but that didn't happen. So we're pushing that off to next week, and then we'll have another decision point on Wednesday as to whether we think we're ready for a release candidate or whether we do a beta 2. A lot of that will obviously be predicated on where we are with the open defect count and the rate at which defects are coming in, and so forth.

The beta that's out there is getting a lot of uptake — it's been out for basically a week as of today, and we have about 400 downloads, which is very positive. There are a few things from a process perspective: we've established a set of exit criteria that the Fabric team is going to be looking at for its 1.0, and others can feel free to beg, borrow, and steal that if they like — it's out on the wiki under the Fabric project. We're also starting to engage with the various license and crypto export scans, working with Tracy and Dave on those aspects, and obviously those things have to happen. Then, for some of the non-Apache-2.0-licensed dependencies — because we're using Go, and because of the way you have to vendor Go code — we have to ask the board for an exemption.

So I just wanted to let people know that this is happening, that the Fabric team will certainly be pretty busy as we push through the next few weeks, and that we welcome anybody wanting to come in and kick the tires. Hopefully, once we're past that, we can focus a little more on some of the things I know everybody wants to do — we'd like to spend quality time looking at PoET, and we'd like to spend some quality time with Burrow on integration of the EVM as chaincode, and various other things. So hopefully once we're past this we can start doing some of that in earnest. I just wanted to give people an update on where we're at; hopefully we get something done in the next month or so.

Chris, are you seeing an influx of additional issues showing up now that it's in beta and you've had these 400 downloads? Yeah, now that we're in beta, we're getting a lot more external bug reports coming in. That's actually a good thing, even if it's not great from a quality perspective, obviously.
But we are seeing an influx of defects coming in from outside, as well as from the increased testing we're putting on it — adding daily and weekly runs and additional unit and integration tests. So it's not just people downloading it and then throwing it away; they seem to be using it and coming back. I've also noticed that the questions in the chat have increased and they're fairly substantive. So I think it's all a good sign. Cool. Very cool.

Could you say a little more on the non-Apache-2.0 dependency issue? Yeah. We're using Go, as you know, for Fabric and Fabric CA in particular, and the way Go works, the dependencies need to be in the GOPATH when you build your binaries. There are basically two ways we could do it. We could ask everybody who ever wants to build it to go and install the 50 or 100 or however many dependencies in their environment so that they can build it. Or you can vendor the dependencies — just like with Node or Maven, there's a dependency tree that can be included in your repository. But Go doesn't provide a means of creating this dynamically, unfortunately. They are working on addressing it, but that new dependency management tool, dep, isn't going to be in a place where it can be used for projects that are serious about such things until probably Q4 of this year, they say. So in our source tree we actually include the vendor tree of all the Go dependencies for the two projects. Some of those dependencies are MIT or BSD or MPL licensed, and for those we have to go up and ask "mother, may I" to proceed. It's not any of the code that we write — it's only the vendored dependencies, and those aren't modified by us in any way; we just download them and put them in there. I've been working with Tracy and with Steve Winslow on reconciling some of those things, and then we'll pull together a proposal for the board. I was hoping we could get one done, but there was another scan and it's like swatting flies — there are a few more things I have to address that didn't get licenses put on them, unfortunately. So that's the situation, Dan: it's not like we're including other licensed code or putting in other things under a different license; it's really just those vendored dependencies. Once we get past this and start using dep, we should be able to manage it just like npm or Maven and have it automatically recreate, on build, the set of dependencies you need. That helps give more context. Right — and then they won't be in the source tree. I'm done. Next up, then, would be Tracy.
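To make the vendoring point above a bit more concrete, here is a minimal, hypothetical sketch of the kind of script that could walk a vendored Go dependency tree and inventory the license files it finds — useful when reconciling non-Apache-2.0 dependencies ahead of a board exemption request. The `vendor/` path and the output format are assumptions for illustration, not the actual tooling the Fabric team uses.

```go
// licenselist walks a Go vendor/ tree and prints every dependency
// directory that carries a LICENSE-style file, so the non-Apache-2.0
// ones can be reconciled by hand. Hypothetical helper, not the
// project's actual tooling.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "vendor" // assumed location of the vendored dependencies
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		name := strings.ToUpper(info.Name())
		// Most Go dependencies ship LICENSE, LICENSE.txt, LICENSE.md, or COPYING.
		if strings.HasPrefix(name, "LICENSE") || strings.HasPrefix(name, "COPYING") {
			dep := filepath.Dir(path)
			fmt.Printf("%s\t%s\n", strings.TrimPrefix(dep, root+string(os.PathSeparator)), info.Name())
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk failed:", err)
		os.Exit(1)
	}
}
```

The output of a walk like this could then be cross-checked against the formal license scans before drafting the exemption proposal.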
Thanks, Chris. So, just this past week I put a proposal for project reporting on the TSC mailing list, specifically to keep the TSC informed of what is happening with the different projects that fall under the Hyperledger umbrella, and to make sure there's some oversight of the community and the code — for example, are regular releases happening, are we getting diversity of developers and contributors joining, those sorts of things. The proposal is really that project maintainers report on their project's health and status monthly to the TSC.

With that, I've created a template that can be used for the reporting. Each month the project designates a maintainer to create the report, and that designated maintainer fills it out and submits it to be reviewed by the TSC on a monthly basis. The contents of the template are really just: What is the project being reported on? What is the health of that project — summarizing, really, is the community healthy, are questions being answered, are contributors acting appropriately, do we have new contributors showing up, and are there any issues that need to be brought to the attention of the TSC so that they can either address them or just be aware of them? Then, what releases have happened in the last month, because regular software releases are a sign of a healthy project. The next one is the overall activity in the past month: is new development happening, or are we just doing bug fixes — what technical changes is the project actually working on? Then, what are the current plans — are there plans to add new features, when are those features planned — really thinking about what the roadmap for the project is and how that's working. And if activity is minimal, we need to discuss whether there are plans to address that or whether we're looking to take the project toward a deprecated or end-of-life state. The next question is around the maintainers and contributor diversity: when were maintainers last added, and are we getting new contributors to the project? And lastly, a call for any additional information the project reporter feels is important for the TSC to be aware of. So it's really just a short questionnaire that would be public on the wiki and would be reviewed by the TSC. I suggested that maybe the first meeting of the month is the right time to have that review of the projects. So, happy to have discussion on that and see what the TSC thinks.

Any questions or comments for Tracy? I had one, and that's the frequency of these reports. It was funny — we also put out a brief blurb on how things are going for each of the projects currently, which Todd or Min bundles up and puts into the board updates so that we can present how things are going to the board, and then there's some derivative of that that gets munched up and Jessica puts out a blog post to give the broader community an update. And I know, Dan, you know the feeling — it's like, "I thought I just did one of these." It is pretty frequent, and sometimes there's not a whole lot to talk about, or it's really just more of the same: we're testing and fixing bugs, or whatever. So I think maybe monthly is a little too frequent, and maybe quarterly would be something to consider, because then there's the ability to spot trends and so forth.
The other aspect of it is that we could spend a little more quality time in the TSC actually reviewing them, and we could even stagger them — so that it's not everybody reporting every quarter and there isn't a gazillion of them to go through at once. I think they do that in other organizations. So I'd just get your thoughts on whether something less frequent than monthly could still achieve the same objective of being able to track how things are going.

Yeah, I'd second that, before Tracy has a chance to think about it for a minute. I know I end up doing a decent amount of status reporting for a variety of things, including this project, and I'm not sure how much of it gets read. I would prefer to be putting time into creating capabilities rather than reporting, so I'd want to balance some of that. This template expands beyond what we had been doing for reporting, so it increases the time spent; if we could cut back on the frequency or cut back on the scope, that would be good.

And the other thing — I reflect on this because I have the same problem; I have a lot of things I have to tell a lot of people about, and it's unclear exactly how much value there is in all of it — one thing I think is worthwhile is figuring out how we can automate some of the aspects. Some of it you can't automate — well, maybe you could with some of the machine learning stuff we have now, but you can't necessarily automate how things are going on the mailing lists and whether people are arguing with one another, though I guess we could probably do something there — but certainly the project diversity: how many engineers from how many different companies or constituencies are involved in the project, and from a month-to-month perspective, who's landing commits and who's doing reviews. All of that stuff leaves a paper trail that we can actually harvest, and we can get Bitergia or somebody to come up with numbers and tell us how things are going from that perspective. And we can certainly measure releases pretty easily. So I'm just curious whether there's any way we could simplify some of what we're collecting by being able to reference a report, or a dashboard, or something that people can look at to see how diverse projects are and so forth.

That's great, because you stole the word right out of my mouth: dashboard. We do have a dashboard covering the different projects and the dependencies — everything you've said — and I'm wondering if we can extend that dashboard to provide additional metrics that everybody would be interested in. Then we don't have to wait every month: if it's all integrated into the dashboard, people can use it whenever they want, we can look and see how different projects are progressing or not progressing, and we can flag them for early review based on what we've seen from the dashboard. Automating as much as we can is always a good approach. Sorry about the noise in the background, but I would certainly support using what you already have in place.
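As an illustration of the kind of harvesting being discussed — pulling contributor-diversity signals out of the paper trail rather than asking maintainers to restate them — here is a minimal, hypothetical sketch that counts distinct commit authors per email domain over the last month of a local clone. The exact metrics, and any integration with the existing dashboard or the Bitergia work, are assumptions; this is only a sketch of the idea.

```go
// contributor_diversity is a hypothetical sketch of harvesting one
// "project health" signal from the git history: how many distinct
// commit authors, grouped by email domain, landed commits in the
// last month of the repository in the current directory.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Author emails of every commit in the last month.
	out, err := exec.Command("git", "log", "--since=1.month", "--format=%ae").Output()
	if err != nil {
		fmt.Println("git log failed:", err)
		return
	}
	byDomain := map[string]map[string]bool{}
	for _, email := range strings.Fields(string(out)) {
		parts := strings.SplitN(email, "@", 2)
		if len(parts) != 2 {
			continue
		}
		domain := strings.ToLower(parts[1])
		if byDomain[domain] == nil {
			byDomain[domain] = map[string]bool{}
		}
		byDomain[domain][strings.ToLower(parts[0])] = true
	}
	for domain, authors := range byDomain {
		fmt.Printf("%-30s %d distinct authors\n", domain, len(authors))
	}
}
```

Email domain is only a rough proxy for organizational diversity, but numbers like these could feed a dashboard rather than a hand-written monthly report.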
Sorry, Tracy. Oh, no problem, no problem — I appreciate the additional thoughts. So, definitely, we are working on metrics; let's maybe take that off the table because that's going to be another discussion at some point. We are definitely working with Bitergia, and I am also doing some scripting to pull information from each of the projects, so that will be happening as well. So maybe we can focus on the timing and those kinds of questions and concerns.

The question becomes: there's a concern that people wouldn't be looking at these reports, right? That's why I wanted the last step of this to be running it through the TSC and making sure the TSC is really aware of what's going on — there have to be people looking at these things, not just reporting for the sake of reporting. Because I think you're exactly right: reporting just for the sake of putting something down on paper isn't useful, but when you're doing it to make sure that a project is healthy and should continue as a project under the Hyperledger umbrella, that's where the value comes in. As far as timing goes, the reason I chose monthly was, Chris, kind of for the reasons you stated: we are already doing these monthly reports for the blogs and for sending information to the governing board, and I was trying to tack onto that schedule so you wouldn't have to do multiple reports — it could be a single report used for multiple purposes. Those purposes being: the TSC understanding project health, the governing board understanding what's happened in the last month and the progress the different projects are making, and reporting out to the larger community what's ongoing. That's why I chose monthly. I don't really have a particular dog in this fight, I guess you could say. Like you said, when we bring these metrics in we'll be able to see a lot of the health of the projects and what's going on, so I'll leave the timing up to the TSC as a decision point. If it's a staggered schedule, that's another option. But keep in mind that I was trying to use this for more than just the one purpose — you'll still be asked for reports for the governing board, and you'll still be asked for reports for the blog, those sorts of things. So keep that in mind as you're thinking about this.

Hey, this is Brian. I don't want to step on anyone else from the TSC who wants to comment, but I wanted to get something out that I feel is important for us to think about, which I wrote into the comments for those who are just calling in and not connected through GoToMeeting. I said I felt the TSC shouldn't really be focused on what can be automated or what is easy to collect; it really needs to think about its oversight role over the projects. I've been fighting really hard to preserve the TSC as the center of technical governance for the project, and to keep both the Linux Foundation and the governing board from feeling like they have to step in and do that.
These projects should be predominantly about pushing forward technology and community, and at the same time the developers really should be the center of gravity for deciding whether people are doing what they should be doing. So I think we should be thinking about what the TSC needs to know about a project, and how often it needs to know it, particularly if a project has gone quiet — which is the more likely scenario than a project getting into a tense fight or something like that; I think we'll know about that as soon as it happens. The bigger risk is a project going moribund, and if we have too many of those sitting out there, does that reflect poorly on the project, and does it reflect poorly on the TSC? Mature and stable and cutting a release once in a while to sweep up bugs is fine. But if a project were unresponsive to security notices, or not responding to pull requests, or not bringing in new developers when they show up at the door, then that's a problem — and that's what the TSC should be, and I believe is empowered to be, monitoring for, and stepping in if there are problems. That's really what should drive the question of what we want reported and how often.

Yeah, I'd add to that. The way I see it, this goes back to how the governing board and the TSC are intertwined — it's like the level of reporting we'd give to our executives. So yes, the TSC needs to determine the frequency of these reports, and as Tracy said, monthly is a starting point that most organizations work from. But it has to provide the critical information the TSC needs so they can measure whether the governance is working, and also be of value to the executives if the report were passed on to them or demanded of them. So I totally agree with what Tracy said. It's always continuous improvement: start somewhere, see how effective it is, and improve based on lessons learned — that's the only way forward. The dashboard is important too, and automating as much of it as we can to provide metrics is, as Tracy said, forthcoming at some point. So I think the two combined should get us there, both in terms of the effort involved and the usefulness as we go on.

Any other thoughts on this? Chris, would it make sense to split this into two pieces? A lot of what I heard Tracy describing in that list is stuff we could automate the collection of — write the scripts that are necessary. And if we set up a set of criteria, per the earlier comment about the dashboard, then if something shows red in your dashboard, that triggers a deeper review or deeper reporting. Would that make sense? I just — man, I hate filling out monthly status reports for anything when it feels like I'm ticking the exact same box every time. Yeah, I tend to agree very strongly with that. That's why I would focus on putting together some sort of dashboard that reflects the measures that are meaningful for someone to understand how things are going, and spend less time pulling together a status report and more time getting stuff done. So I do tend to agree.
And again, if somebody goes into the red, then obviously we have to take a closer look and understand why, whereas if everything is green, then maybe there's only the need for a periodic update where we can have a congressional grilling. That works, provided your measures are accurate. Right — and that's sort of the point. What we should maybe look at, in the context of Tracy's proposal — and I saw in chat a couple of questions about whether the number of maintainers added is a good measure of health or just a number — is that diversity is obviously an important thing we should be measuring, both of contributors and of maintainers. But again, all of those things can probably be fairly easily measured automatically and captured in some sort of dashboard. If you want to make it private to the TSC, I suppose we could do that, but for most of these things you actually want full transparency, not secrecy. So I think we should be asking ourselves the tough question of what we think makes a healthy project and how we measure it objectively rather than subjectively.

So here's what I would propose, then: we take this to the mailing list, and we go through and actually try to capture the measures we would look at to ascertain the health and the strength — the diversity, if you will — of a project. Okay, Chris, I can kick that off on the mailing list and look for responses, since I already have a few thoughts on that one. Good. Thank you. Okay. Anybody else? If not, then next up is Dave. Dr. No. Not Dr. No — Professor No.

All right, I'm going to make this quick — just a quick status report. I would have sent out more detailed information, but we're in the negotiation phase on statements of work and I didn't want to make that public just yet. To back up: as part of the 1.0 release, we've engaged with several security auditing firms that are going to do a code review and pen testing for us, just to establish a floor — to have somebody outside of the maintainers take a look at the code and do audits for security best practices, crypto material handling, network pen testing, and things like that. We've talked to Nettitude, SecureWorks, and Rapid7. I have a really good statement-of-work proposal from Nettitude, and the numbers seem reasonable. SecureWorks is due back — they said last night, but I haven't heard from them yet; I just emailed them this morning to prod them a little and ask for an update. Rapid7 is evaluating our code base as well and trying to put together a statement of work for us. So all of that is inbound soon.
Anyway, once we have the numbers, I was hoping we could come up with a recommendation and get it to the governing board on Monday, but I don't know that that's going to happen unless we decide to go with Nettitude without looking at the other bids, which I think would be irresponsible for the project. So I'm going to see if I can get something from SecureWorks today and then send it around to the people here at Hyperledger to make the decision. So far we've seen that there's going to be a two-to-three-week staffing lead time to get going, and the estimates for completion are anywhere between four and six weeks, so there is a significant time investment here that could delay releases. We had a meeting about the Fabric 1.0 release yesterday, and I'm going to talk with them about the security scans, because IBM has done one against Fabric as well and I'm still waiting to see the results. If that was done thoroughly enough, and the maintainers like it and it looks like we're doing all the right things, I think we could use that in lieu of the one I've initiated and not hold up the Fabric 1.0 release. But that remains to be seen — we're going to have to look at the results, and I'll report back when we have them.

Once it gets going, I plan to convene the first meeting of the security bug triage group to start fielding the issues as they come in from the audits, to work through our security bug handling process so we can get all of the kinks out of it, to get the communication channels set up, and to make sure the security bug flagging works. So that's the current status — I'm pushing pretty hard to get statements of work out of these companies so we can get moving.

Brian asked me whether I think this should be something we do annually. In my professional opinion, I would say no, because I would just like to establish a baseline; if we do careful change management and do the right things and have a good process in place, we should be able to maintain the integrity of the code base that we gain from this kind of scan. But if we do significant rewrites, where we throw out large chunks of code and rewrite things from scratch, maybe we should consider it again in the future — I wouldn't completely write it off. That's pretty much all I've got. Didn't we say we'd do it for every major release? Yeah, I don't think there's a requirement for an outside firm to do it, but yes, I think the CII badge requirements say we need to do an audit of the code for every major release, and that could be an internal one — the security team or something like that. So that's the current status. Any questions?

Dave, are there any tools out there on the market — proprietary, or almost shareware — that could automatically go through the core code and provide us with, you might say, some kind of benchmark testing? I'm not sure I understood your question — are you asking if we've used any tools? Yes, exactly — something that can go through the code and give you a report of findings on security aspects: are there any backdoors, are there any areas we should look at more closely? Yes, we have applied some tools there.
We've been focusing mostly on continuous integration modifications — doing fuzzing and static code analysis. At this point we don't have any dynamic code analysis in place, and that's what the security audit is going to do for us. I don't know that it will pinpoint areas of code we need to look at more closely, but from a security best-practices standpoint you always want to look at your crypto code, you always want to look at the code that handles data coming from untrusted sources, and you need to look at the code that establishes your quote-unquote sandbox — what does your file I/O library look like, what does your process management library look like, things like that. So are you saying the static tools cover all of that? No — I don't think any static tool knows how to look at how you're using, say, the NaCl crypto library. There are some best practices out there, but we'd be looking at those categories manually.

I've got a different version of the question, Dave. First: we were getting some questions from one of the vendors about lines of code, so just out of curiosity, is that part of the billing mechanism for them, or the cost? I don't believe — well, yes, but not directly. I don't think they bill per line of code; I think they're just trying to assess the volume of code so they can estimate cost and time. Because ultimately that depends pretty heavily on what counts — test code versus mainline code and so on — so I don't know how precise to get with them. Just give them some rough numbers; what they're trying to do is get an idea of how big it is. Plus they can derive some metrics from it, like how much coverage of the code we've achieved with the fuzzing — they can see how many lines of code have been exercised, so they can say, "We've covered 90% of the code with our fuzzing." I don't think they need exact numbers; they just want to know which areas we want them to focus on and roughly how many lines of code that is.

Okay, cool. Then maybe you can help me understand the difference in their objectives. There are a couple of different ways to define dynamic testing versus pen testing. When I think of pen testing, I'm often thinking of a live production site, and our platform software could be deployed in a variety of conditions, with all sorts of different firewall settings — that kind of deployment scenario is going to be very specific to a certain customer's deployment model. Yeah. So when it comes to the pen testing, I — or the teams — will have to deliver to them the recommended install, basically the default install, and that's what they will pen test: network attacks — can they mess with the software over the network, or locally with files, things like that. I hope that answers that part of the question. When it comes to dynamic analysis, that is partly pen testing, because it's mostly fuzzing, I guess, is what I'm getting at: they will send random data, or carefully guided random data, and see how deeply they can penetrate into the binary and whether they can get it into bad states or do bad things. And that can actually be translated into a CI system.
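To give a sense of what a coverage-guided fuzzing pass looks like when it is translated into CI, here is a minimal, hypothetical harness in the go-fuzz style — the Go-side analogue of the AFL approach described for C++ below. The `parseEnvelope` function is a stand-in defined only so the sketch is self-contained; it is not a real Fabric API, and the actual harnesses would come from the chosen vendor or the project teams.

```go
// Package fuzz is a hypothetical go-fuzz harness sketch. The fuzzer
// repeatedly calls Fuzz with mutated inputs, guided by code coverage,
// and flags panics or crashes in the code under test.
//
// Built and run with go-fuzz (github.com/dvyukov/go-fuzz):
//   go-fuzz-build && go-fuzz
package fuzz

import "encoding/json"

// parseEnvelope is a stand-in for a real parser of untrusted input
// (for example, a transaction envelope decoder). It is defined here
// only so the sketch compiles on its own.
func parseEnvelope(data []byte) (map[string]interface{}, error) {
	var msg map[string]interface{}
	if err := json.Unmarshal(data, &msg); err != nil {
		return nil, err
	}
	return msg, nil
}

// Fuzz is the entry point go-fuzz expects. Returning 1 tells the
// fuzzer this input was interesting (it parsed successfully), and 0
// that it was not; a panic anywhere below is reported as a crash.
func Fuzz(data []byte) int {
	if _, err := parseEnvelope(data); err != nil {
		return 0
	}
	return 1
}
```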
Working with Iroha, because they use C++, they have the ability to use American Fuzzy Lop (AFL), a coverage-guided fuzzing tool that maximizes code coverage and does guided random fuzzing driven by the coverage feedback. So they actually have quite an advantage by using C or C++, because their fuzzing is going to be a lot more thorough — thorough is the right word, I guess. And I'm hoping to get that into the CI system. These security scans are going to do a fuzzing pass, and I'm hoping to get the harnesses from them — I'll start that discussion once we've picked one — and get their fuzzing harnesses set up as part of our CI, because a lot of these engagements involve writing some code to do the fuzzing. I don't know if we'll get that from them, but since it's an open source project I might be able to plead for help: can you open source the fuzzing code and commit it to the code base for us, so that we can use it for our fuzzing in the future? Did I answer your question? Yeah, that helps.

And when it comes to the pen testing half of it — you don't have to answer here; we may want to send out some guidance, or have some discussion, about what the expectations are for hosting environments for that pen testing. Are the projects supposed to set aside some sprint time to set up an example deployment that's available for the pen testers, or what are your thoughts there? Okay, so it depends on the company we're working with. I think Nettitude said they could set some things up, but they were wondering if we had assets already set up for them. So the answer is either — whichever way works best. We're going to need to know how to set one up, and that means we need to document the default install if we have them do it. Does that answer your question? I would say yeah — you should probably put a little bit of time aside to at least support them if they're going to set something up, and if you need resources to set something up, let me know and we'll see if I can line something up for you. Okay, thanks. Yeah.

One thing I just wanted to add: we've established that a code scan like this, with pen testing as the cherry on top, is a requirement for projects that go to a 1.0, right? And this is something I believe it's important for us to fund from the Linux Foundation side, for two reasons. One is to make sure we can provide an independent audit, separate from what we expect many of you will be doing internally, or perhaps even hiring your own teams to go off and do. And secondly, to make sure that all projects at Hyperledger get at least a baseline scan like this, so it's not a question of some projects being better funded than others — when we put something out there as Hyperledger-anything 1.0, people should expect that it's had a code scan at the least. We expect projects to do their own scans ahead of time, and it may be that for a 1.0 release — or any x.0 release — the maintainers on that project
actually feel pretty comfortable that the scans they've done separately meet that requirement. This will take time to do right for the three projects that are heading to a 1.0 right now. And I think if those communities are comfortable with the scanning they're doing, the maintainers on those projects vote, and we've responded — we've closed the holes that those scans have found — and we want to push a release, then we wouldn't hold up a 1.0 release for what is likely to still be a multi-month process here. But we should still do it; we should still see if these scans turn up anything new that the previous ones didn't, and address that. I just want to be clear that we don't want getting started on this now, which we are, to hold up one or more of our projects wanting to get to a 1.0. Does that make sense? Yep. Or did I muddy the water too much? No, I think you're spot on. Okay, good. I think you're talking about strategy, which aligns with the process we have in place, so that helps with the direction. Certainly the code scan is an important aspect given our level of maturity, as is making sure that whatever we put out there holds up — the overriding principle for any project is that it's going to be scalable — so these two aspects must be covered. Thanks.

Do any others on the TSC have thoughts about this? Okay. Well, Dave is going to continue to push forward on getting something like this signed, and we'll get the budget for it. I just want to make it clear to folks that we know many of the teams are barreling toward a 1.0 release, and we want to support them in that. Yep. Thanks, guys. Anything else? If not, I will see some of you in China. Enjoy. Oh, one thing I should add: those of you who go to China should all bring back a photo album of sorts, and we can put together a collage — I think that would be great for the entire community. Thanks, guys. Have a good day. Thanks, guys. Bye.