So, welcome everyone to today's meeting of the technical steering committee. Hopefully you've all had time to digest the antitrust policy notice that you see in front of you today. As always, everyone is welcome here in the Hyperledger community. If you have any comments or questions throughout today's meeting, please feel free to chime in with your thoughts and opinions. We have a fairly light agenda today. We have a few announcements that Silona will walk us through, and then really the couple of main discussions will be around promotion of Indy to active status within Hyperledger, and then a CI/CD proposal from Dave Huseby. So I'll pass it over to Silona right now to give us some of our updates. Unmuting myself would be good. So the contributor summit — we were talking with Hitachi about a venue, but evidently it fell through due to lack of Wi-Fi access. So I'm really needing help to locate other possible sponsors. I'm also dealing with budgeting issues. Somehow I thought that I had $300,000 to do all the boot camps and the contributor summit, and evidently I only have $150,000. So I'm trying to figure that out right now. And so I'm having some talks with Brian. There might be some options with some of the other boot camps in regards to sponsorship, but the contributor summit in Tokyo is just really expensive. So we're going to have to figure that out. The Quilt reboot is also happening in the office on April 4th. The Quilt group decided as a whole to look at it as, you know, since we already have the great metaphor of a quilt being made out of pieces, they want the pieces to all be protocols. And we're also going to be allowing for labs. And we're going to be talking more about how to handle the project structure for Quilt on April 4th. It's actually a three-hour meeting that we're scheduling. I'm assuming people will pop in and out. But that's where I'm going to have it in person, and then everyone else can call in via video. And then the last one is internships.
We've got over 100 submissions. We're expecting, based off of previous years, to hit 500, I think is what Todd said. But right now it's mostly in China and India. They are talking about changing the name to mentorship, though, you know, I have to admit I'm a little worried about that, because I know college students Google for paid internships, not paid mentorships. Because a paid mentorship normally means the mentor is paid. So we're still talking about that. Questions? Yeah, so can you maybe just give a little color on why the switch to mentorship is being considered? Dave, are you on the call? Yes, I am. So the explanation that was given to me was that that's the language that the Linux Foundation uses on Community Bridge, I guess, or something like that. I have to confirm it, but that is what I think I remember they said. Okay. Yeah, I think I would agree with Silona here that internship does — you know, as someone looking for something that's paid, and that sort of stuff, and to put on their resume — you know, certainly that sounds like something that would be more enticing than a mentorship to me. I think the reasoning they were using is that the internships maybe aren't as strictly structured as we would always like for them to be. Like, I think right now some of the internships are maybe a little bit too hard for what the interns are actually going to accomplish in that time frame. And so it ends up being a bit more mentoring than it does a strict internship. But I'm just worried about the SEO, search engine optimization, aspects of it. So that was my primary concern. I can add a little more color to this. I think it's because what the LF is building with Community Bridge appears to be something that's going to happen year round, and is less like a summer internship program and more of like a crowdsource: hey, we have this opportunity that cropped up, somebody wants to mentor a student or, you know, some other person who's looking for that kind of guidance.
So sign up and join us, kind of thing. It didn't seem to me like it was: this happens every summer, like Google Summer of Code. So I think that may be what pushed them in that direction, because if it was kind of year round, then it would make sense that these are kind of like ad hoc mentorships. Understood. Are there any other comments or questions on the announcements? My big one is if people could help me on the contributor summit in Japan for finding a location and sponsorship — that would be huge. And just contact me on chat. On that, do we have an expected maximum number of attendees? So I think previously it's been fairly well attended, especially when it's tied to something like the Member Summit. Ry, can you speak to the numbers that the hackfests had previously when they were tied to one of those major events? Well, the only one that we really had that was tied to a major event was the one at the Member Summit. The rest of them have been somewhat freestanding, but around 100 to 150. Okay, so for a possible venue, then, we'd probably look for somewhere around 150. I was even compromising at 100 because I was having such a hard time, but 150 is definitely ideal. And of course with multiple conference rooms, right? At least for breakout into tables — you know, I like having it where they can both have a few conference rooms and break out into tables. So when they need a conference room, they can grab one of those instead. But definitely being able to break out into the tables as well, I think, works really well. You get more cross foot traffic that way. Plus, if I have any leads, I'll let you know. Thank you, I really appreciate it, Ben. So, Silona, on the contributor summit itself though, right? You know, size aside — what exactly, again, do we hope to achieve? I'm hoping this isn't, again, just sort of a mélange of boot camps.
I mean, it really should be about the various teams collaborating and working on their respective roadmaps and so forth, right? Right, so one of the things that I wanted to do on that is — have you ever done fishbowl exercises? Yes, but that's not exactly the kind of — so let me give you an example of, like, for instance, what OpenStack did. So OpenStack would basically hold what they called the PTG — I can't remember what it stands for — but it was basically the equivalent of the contributor summit. It's basically every one of the projects — and again, there were like five top-level projects and a bunch of other smaller incubating things and so forth — but every one of them would have a room where they could be doing face-to-face discussions of, you know, their roadmap, and actively discussing specific proposals for improvement and, you know, sort of building support and consensus around, you know, taking sort of the next few months of development forward, right? And there would also be parts of it that would focus on, you know, growing contributors and so forth, but basically each project got that to go off and work on it. And, you know, there were some people that sort of flitted from one project to the other, but primarily it was that. And then there was sort of the track of the PTG, which is the equivalent of, like, the TSC, where everybody came together and they talked more about the integration of, you know, X and Y and Z and so forth. And I would think, and I would already hope, that the contributor summit was something like that — that it didn't necessarily have 150 participants, but it would have, you know, certainly a lot of the contributors to the various projects, especially the maintainers of the various projects, collaborating on and working on, you know, building each project-specific agenda, and then having, you know, sort of another day of meetings that talk mostly about the intersection of the various projects.
And then you could have, you know, various breakouts that took off from there. But it's much more about the projects and less about, you know, the dog and pony and bringing noobs up to speed and so forth. It's really about people working on: here's our roadmap, and here's, you know, discussion and debate about, you know, new features and how to attack them and so forth. So I like what Chris is saying. I think this is great, because I think it would dovetail nicely with the boot camps, which are mostly bringing new people up to speed. Yeah, I think — I mean, the one thing I would say, in line with what Chris said, is, you know, I can't imagine that, across the five or six top-level projects, we're going to see more than five to ten really core, you know, contributors or maintainers from each project. So, you know, with that in mind, I would imagine that we would have, you know, perhaps lower attendance than what we've seen in the past there. You know, maybe a couple of action items to move this forward: one, perhaps we could open up some sort of page on the Hyperledger Confluence to sort of let contributors indicate whether they're likely to come, and then also what topics they would like to see and discuss, and maybe that way we can have a little bit more community-driven assessment of, you know, what are the numbers going to be, and then what are the topics that the contributors would find most valuable. Right. But, Michele, I think, you know, the kind of thing that I would like to see would be specific proposals about, for instance, you know, getting to a single API for submitting transactions, or, for instance, to WASM, something like that. And then, you know, that kind of thing — we could sit around and we could talk about it, and then we could also factor that into how we would integrate it into Sawtooth and Fabric and Burrow and so forth, right?
So, I mean, it can't just sort of be — you know, I don't think we want it to be an unconference again, is what I'm saying. Yeah, I agree. Hey, what was Silona going to say? So, as Chris asked, and as I told him, what I'm looking at is two days: one day being something that's way more curated, talking about our architectural interoperability intersections, all that kind of stuff. The other day is a curated unconference. As the people who attended the boot camp can tell you, it wasn't complete chaos. It was all about each individual project. They all had their spaces. It's just that the focus was on onboarding new people. For the contributor summit, that would not be the focus. The focus would be a lot of those different topics that all of y'all have been talking about. And it would be curated in advance, just as we did with the Hong Kong boot camp, where people were submitting all the different stuff almost an entire month in advance. The problem is, the reason I haven't put up a page or a section is because we do not have a date nor a locale at this point. And so if I go through and do that, it's going to be fairly chaotic and it's going to get really crufty really fast. If we throw something up now, and you sit there and say, oh, I want to talk about this piece of the SDK — by the time we actually get around to having an event, that moment's already gone. So that's the reason I have not thrown anything up yet. The question is that we do have to figure out having it in Japan and having a locale. And as I said, the budget is an issue. We have to find sponsorship. The TSC has to work as a group in regards to helping me support this. Hey, Silona, on that note, have we reached out to the sort of Premier board members? I know many of those are multinational, you know, sort of corporations. We've got Fujitsu, Hitachi, NEC, all of whom are headquartered there in Japan.
I'm just wondering if perhaps, you know, maybe folks on sort of the more business side and the governing board will have better opportunities to actually book that space at their facilities versus perhaps the folks on the TSC. Right. The one that just fell through was Hitachi, basically because they wouldn't give us Wi-Fi access. So I have to circle around with — I had reached out to Fujitsu, but I haven't reached out to NEC yet, but connections to each of those two boards would also be helpful. Okay, that would be great. That's great. And then I think, you know, at a minimum, we have the board members from NEC, but I can certainly, you know, go look at the Intel side, and I'm sure others on the call can speak to their representatives. If Wi-Fi would be a problem, I mean, you know, maybe Hyperledger could afford a handful of MiFis or something like that. Not a very expensive proposition, really. So even with those, I think we'd still max them out pretty hard. Considering how much, when people are actually working and trying to get those pieces done, they hit the Wi-Fi pretty hard. We even tapped out Cyberport, and Cyberport had a massive one. Now, that's true, because we were onboarding, and so there was a lot more downloading happening. But still, I can't really count on that. Yeah. All right. Any other thoughts on the announcement topics? So I have an observation here. It sounds to me that, you know, we moved from having hackfests to this notion of having two types of events, the boot camps and the Contributor Summit. Now we find ourselves short of money to cover for all of this. And somehow it's the Contributor Summit that's in danger. And I wonder who set that priority. I mean, should we have fewer boot camps and have more Contributor Summits if we can't afford both? That was my mistake that I mentioned earlier — I thought that I had more budget than I actually do. And I said that I thought that I had a...
Not to blame you, I mean, even if it's a mistake, you know. No, but that's how it happened, Arnaud. It wasn't a priority setting of one versus the other. It was me screwing up. But so the money is allocated now, we cannot reallocate it? That's what you're saying? I'm saying I'm trying to figure that piece out right now. Okay. But regardless, paying for everything in Japan without sponsorship is going to be over $100,000, even if I didn't do any of the other boot camps. So I approach the travel as: is it worth the time? And I realize, you know, I'm coming from the US, so it's different for me, and it's, you know, the opposite for people in the Far East. But for the two days of travel it takes to get to and from Japan — what's going to happen at a Contributor Summit that makes it worth it and easy to justify the travel expense? I mean, we need to do some concrete and effective stuff that we can't just handle over the phone or something if we're going to travel. So let's, you know, make sure we factor that in — I don't know how other people feel. So it sounds like next steps there is that we can certainly reach out to some of the other Premier members, so perhaps NEC as an example. And then once we've got a confirmed location with Wi-Fi, then we can work on the exact agenda there. Okay. So moving on to the next discussion topic: this is related to a vote on moving Indy from incubation to active status within the Hyperledger framework. So fortunately, Nathan did a great job of getting the details of this out there a few days before the meeting, so hopefully everyone has had time to review this. I have seen that there have been some comments and traffic on this. So I think what I'd like to do is just sort of open it up for questions, to see if there are any questions based off of the proposal. And I believe we do have Nathan on the phone to respond if there are any questions.
Did anyone on the bridge have any thoughts, either endorsements or questions, related to this before going to a vote? Okay. And Ry, Silona, maybe just to confirm, are we at quorum for a vote? The only person that's not here is Dan, so I'd say yes. All right. So I think let's go ahead and move to a vote. I would like to propose that we take a vote and propose that Indy move from incubation to active status. Does anyone second that? Second. Okay, great. All in favor of Indy moving from incubation to active status, please say aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. Aye. All those opposed, say nay. All right. Fantastic. Sounds like there is consensus there. Congratulations to the folks at Indy. Certainly — is this consensus fault tolerant? You don't know that, do you? Also, did anyone abstain from the vote? Ry, you called for, you know, the ayes and the nays. Oh yes. Yes, yes. Are there any? Yep. Are there any abstains? Okay, great. Yeah, thanks for that, Ry. Sorry, what was that? I just said, thanks, everyone. Congrats. It's great. Yeah, absolutely. A well-deserved promotion to active status there. All right, fantastic. So next up on the agenda for today is a discussion on testnets and a CI/CD proposal from Dave. Dave, I'll pass it over to you. Dave, you may be on mute if you are talking. Good call. I was muted. So I sent out a proposal yesterday to the TSC about a good solution for our CI/CD and testnets. I have to give all the credit to Mike Lodder on this one. After a month of frustration with our existing system, he went ahead and found GitLab as a potential solution, and it turns out to solve not only their problems, but a lot of the Hyperledger-wide problems that we've been having. Primarily, what it does is it drastically reduces our financial and technical overhead and our human overhead.
We can run a central server — Hyperledger can — a coordinator, and we can run a couple of builders, or runners is what they're called, runners, just to make sure that every project has a little bit of capacity to run CI/CD. But then teams themselves would have total control over who can join their project's cluster of machines, and it would allow them to run campaigns, basically saying like, hey, we want to do a big push on a testnet, you know, a big soak test or a scaling test or something, and then get their community to join virtual machines or spare machines or whatever to participate. I really, really like this solution, mostly because it's self-service, and the permission setup in GitLab matches sort of the roles that we have hammered out for people in our community, where we have maintainers, contributors or developers, and then people who are, you know, just using our software and have comments. Again, it was Mike Lodder's great idea. He brought it to me, I don't know, about a month ago, and it looked like exactly what I was hoping to find when I was looking for a solution. So the proposal is up on the wiki, I sent it to the mailing list, and I'll answer questions. Sorry, just a quick question. Unfortunately, I missed where that proposal is. Can you maybe say where exactly on the wiki that is? Oh, I've got it open right now. I got it open. Okay, great, thanks. It's under the security section of the wiki — imagine that — under software delivery, so it's two layers deep. Okay, great, thank you. So I just put it in the chat, Kelly. So, Dave, I have a question, because, you know, we — I'm going to get pissed off here. So we currently have Gerrit and Jenkins. And while I appreciate that, you know, not everybody likes Jenkins and not everybody likes Gerrit, we also have GitHub.
But I don't understand why we're talking about going to a whole new set of tooling, because, oh my frickin' God, the work that's going to be needed to transition from one platform to another is ridiculous. And I'll just push back on you right now, because I don't think it's going to be that hard. Okay, I mean, seriously, when do you think we're going to have time to do this? So on the Indy side, we've already done some of this transition work. So we can — or first, we're not using GitLab as a replacement for GitHub. And that simplifies things quite a bit, meaning, you know, these tools do work together; you don't have to wholesale move everything from one place to the other. The big benefit that we've gotten out of GitLab is it makes it so the build system itself becomes self-service for the developers, rather than requiring anything from the administrator in terms of having to deal with the Groovy build scripts, or in terms of having to manage the runner centrally from inside of the Jenkins server itself. So that's — Yeah, but you can do that with pipelines too. And certainly that's the direction that we're heading in. Yeah, and so that's really the emphasis here: to be able to make it so the developers can self-service more of that work. So instead of having to put all that on the maintainers, or the maintainers of the build part of the system, it ends up spread across the team. And that's really where we've seen the benefit here. Oh, yeah, Chris, maybe I wasn't clear — the GitLab solution is strictly going to be CI/CD. And if you have a system that already works, which I know Fabric does, we wouldn't require you to leave that right away, right? This is a transition. This is kind of like the Confluence thing, right? It took us a year to move from the old to the new; it wouldn't be like suddenly you have all this new work you have to do.
We're just trying to bring in a system that, like Nathan says, gives all of the teams a lot more control and self-service ability over their CI/CD. I know they do now. But we had teams try to use the Jenkins stuff that we have now, and they failed. And I'll point to — not only did they fail, they just decided they didn't want to go through all the pain. I mean, Soramitsu, Sovrin — I even think Bitwise has their own — and Monax have all built their own, because the solution that we are offering them doesn't meet the requirements and is very difficult to work with. That's my only real counterargument. I'm always open to have a discussion about, you know, maybe there's something else we missed. Is there a better idea? Now is the time for us to hash it out, really. I just don't want to come across here as like, you know, get on board or get out of the way, because that's not what's going on here. I'm just suggesting that the requirements that I've gathered from talking to all our teams could probably best be met by us standing up a single GitLab instance and a couple of runners, and letting the teams manage it themselves. One of the benefits to Hyperledger is it drastically reduces our overhead. Dave, Dave, a couple of runners is not going to handle even Fabric. It wouldn't even handle modules. Yeah, so it would be something that the teams would have to set up. Yeah, it's something that the teams themselves would have to set up, which is already what our teams have, right? All of the main companies behind our projects, the main vendors, have already set up their own CI/CD assets. So they already have machines in place. I don't know the situation. Oh, so now you're saying that the CI/CD should happen behind the scenes? No, no, no, no, no. We're only talking about machines that are available right now. It's already happening behind the scenes. I have no visibility into Soramitsu's CI/CD pipeline, and that's not Soramitsu's fault.
I don't mean to pick on them. I have no visibility into Monax's CI/CD, right? I'm not picking on anybody here. I'm just stating that I don't get a view of any of that. I could be wrong. Maybe Silas would jump in here and call me wrong. But the point being is that the couple of runners is really just to make sure that there's a bare minimum of capacity, so smaller teams like Cello, which don't have any real assets or a primary vendor behind them, would have some limited capability for CI/CD. And we can talk about how many machines that is. But the truth is this will drastically reduce Hyperledger's overhead, right? We're paying on the order of — I don't know the exact number, but it's like tens of thousands of dollars per month — to run what we have already. And that's not our primary concern. Our primary concern, actually, is for it all to be self-service, and to bring all of the teams under a system that's easier to use. But that's certainly a huge benefit to the organization as a whole. So again, Chris, I do want to have this conversation. It sounds to me like you and I should probably have a meeting. Awesome. We should have another meeting that's open to everybody, specifically on this, where we talk about potential ways forward. How can we mitigate the fact that we have so many of our teams having to build their own systems that are not public, that the broader community doesn't have a view of? Dave, Dave, Dave, Dave, hold on, hold on. So, Soramitsu had something in place. They didn't build it someplace else. They had it already in place. They never transitioned. Indy, I think, is a different case. And I know that they've been actively trying to do it. And I understand that they don't like it. That's fine. But the reality of it is that, you know, when we set up Hyperledger in the first place, you know, we worked with LF IT to get in place technology that LF IT supported. And that was Jenkins and Gerrit, right?
I mean, that's the tooling that tons of projects at the Linux Foundation use. Now, you know, you can argue whether it's good or bad or indifferent. I understand that. But I just don't see when Fabric is going to have the bandwidth to go through and reconstitute all of its stuff on a platform that I'm not sure we would necessarily prefer. If I had to guess, my guess is that the Fabric architects would probably prefer that we use something like Concourse, because then you have full visibility into the pipeline — because we don't just have, you know, a build and a test, we have a full pipeline of stuff that goes on in Fabric. And it's important to be able to understand, you know: has it made it through the unit tests, has it made it through the system tests, has it made it through the integration tests, the performance tests, the chaos tests, right? Are those tests run in a Docker image? I'm sorry? Are those tests run in a Docker image? Everything that we have is containerized. But again, increasingly we're going to more and more, you know, sort of sophisticated testing, and that's going to involve spinning up Kubernetes, you know, through Helm or various other approaches, and running a series of tests against that. Right now, as Ry knows very well, everything is, you know, done through, you know, virtual machines that we stand up with a specific image, and then we populate it with all of the containers. And we're basically just running, I don't know, like Swarm or something under the covers. But — so GitLab knows how to drive Kubernetes. And your test pipeline could just be: here's our kubectl. Where are we getting the Kube clusters from? Sorry, what? Where are we getting the Kube clusters from? Well, so the machines themselves are crowdsourced from the projects, right? And this is an opportunity for other companies to sponsor machines for us.
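To make the multi-stage pipeline idea concrete: a pipeline with the kind of visibility Chris describes (unit, integration, system tests as distinct stages), where the system-test job drives an existing Kubernetes cluster via kubectl, could be sketched in a GitLab `.gitlab-ci.yml` roughly as below. Every name here — stages, images, labels, make targets — is a made-up illustration, not anything an actual Hyperledger project runs:

```yaml
# Hypothetical .gitlab-ci.yml sketch. Each stage shows pass/fail
# separately in the GitLab UI, and the last job is just "here's our
# kubectl" against a pre-existing cluster. All names are illustrative.
stages:
  - unit
  - integration
  - system

unit-tests:
  stage: unit
  image: golang:1.12            # illustrative build image
  script:
    - make unit-test

integration-tests:
  stage: integration
  image: docker:stable
  services:
    - docker:dind               # containerized integration tests
  script:
    - make integration-test

system-tests:
  stage: system
  image: lachlanevenson/k8s-kubectl   # illustrative kubectl image
  script:
    - kubectl apply -f deploy/test-network.yaml
    - kubectl wait --for=condition=ready pod -l app=test-network --timeout=300s
    - make system-test
```

Because each job reports per stage, you can see at a glance whether a change "made it through the unit tests, the system tests, the integration tests," which is the visibility concern raised above.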
We can also get them from CNCF — you know, they still have that program where they're handing out free resources to open source teams. I mean, the Soramitsu group, the Iroha team, just got a bunch of machines from CNCF to set up a CI/CD pipeline on. The rest of it is just the coordinator, right? And that would be done by GitLab. Chris, I'm not telling you you can't continue using Gerrit or Jenkins for the time being. What I'm concerned with is — and you know, I want to take what you're saying to heart and to consider it — but how do you turn to the rest of this Hyperledger community and tell them that they have to stay with a system, when they've spent months building their own systems outside of Hyperledger because they couldn't make it work for them? So could I come in now and just offer a few perspectives on this? This is Silas from Monax. So yeah, we use GitLab, so there is an element of bias there. We did set it up after the Burrow project had come in. And part of that is reasons that maybe are specific to Monax, and part of it was the feature set that GitLab offers. I think — so I responded on the Confluence to Morgan, who was raising some of the same issues about why system N+1 is being added, kind of like Chris is here, which I think is a good point. It's been a while since I used Jenkins. I did use it quite a bit. I get the issues with migration. So I'll leave that. But in terms of what GitLab does do well, I think there's two areas where it's pretty good. So one is for a kind of GitOps-focused thing. So the integration with pull requests and pushes — I've generally found it less flaky than when I was using Jenkins, and it integrates better there. The GitLab CI file is kind of more of a first-class citizen than the Jenkinsfile ever was the last time I used it. So you kind of get a very self-contained build.
With Jenkins, my experience was — and this links to the admin intervention — you end up with a lot of plugins. There's a lot of ambient, global-level configuration. So you kind of need to care about what instance you're connecting to. Now, maybe I'm out of date on that, but that was my impression, and kind of what led us towards building what we've got now. The other thing is on the executor side. So pooling in runners that can be tagged with their own capabilities works nicely. There's built-in support for Kubernetes clusters. So we deploy some environments, like a staging and a stress test, automatically. We have some others that are still tied to a particular commit, but they're a manual trigger. And then we have some webhooks that can do various things there. So that side of it works pretty well. Two points. So one thing — I get that there's not unlimited resources for this. I don't know where the shared runners would be running. Are they in a Kubernetes cluster themselves? Are they just machines? This would be a lot more useful for us — we could port what we have currently running on our own GitLab — if there was a Kubernetes cluster along with this. So if we had a couple of shared runners in there, but we could also have a long-running Kubernetes cluster. We don't want to have to set up a cluster each time we run. So that is, if not a fatal flaw in the current proposal, kind of an issue for us moving over to it. The other thing — and this relates to what I was saying I think it does very nicely, and what just felt somewhat smoother than Jenkins — is the fact that the GitLab stuff is based around using its own Git repos. So I mean, we're not proposing we move the main repos from GitHub, but I guess, to get the best out of it, we're going to end up doing things like pushing particular commits that relate to CI pushes. And that could be made to work. I wonder how much of what makes GitLab this kind of cohesive package we're going to be throwing away if we aren't using the Git hosting. But my big issue would be with the Kubernetes clusters. The problem with the Kubernetes clusters is that it's very hard to make them self-service. And it's very hard to administer them in terms of, like, this team gets, you know, X amount of resources and this other team gets X amount of resources, and kind of policing that, because with Kubernetes' autoscaling features and stuff, it's really easy to run up a $50,000, you know, infrastructure bill. Like, I could just see somebody going, huh, I wonder what happens when I spin up a thousand nodes with, you know, pick one of our platforms. And then we get a huge bill from Amazon. And that's really what I want to avoid. I want the teams to have a go. Plus RBAC — the quota-ing is fairly coarse, but I think you could achieve that with namespaces. Okay, great. I mean, I would love to work with CNCF. We can add that capability to this proposal. I'm not against making the shared runners a Kubernetes cluster. I think that would be good, actually. But again, I want to make sure that we're trying to keep a strong cap on the amount of human time and amount of money we spend on anything that Hyperledger runs. And I ultimately want this to be self-service. I don't want to have to answer a bunch of emails saying, hey, I need more of this, more of that, right? That's definitely a good goal. So I mean, I don't think we're going to resolve this today, Dave, but I think, you know, we need to get input from all of the various project maintainers. This isn't just a TSC decision. Because, you know, frankly, I'm hearing, you know, how we have all these budget limitations and so forth, and you're basically saying that we're going to be paying for yet another CI/CD system on top of the one we already have, which is really doubling the bill. Well, it is increasing the bill, and we haven't even talked yet about where these things are going to run.
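The namespace-plus-quota idea raised in this exchange — giving each team its own namespace with hard resource caps so an autoscaling mishap can't run up the cloud bill — would look something like the following in Kubernetes. The team name and limit values are invented purely for illustration:

```yaml
# Hypothetical sketch: a per-team namespace with a hard ResourceQuota.
# Pods in team-cello cannot collectively exceed these limits, so a
# runaway "spin up a thousand nodes" experiment is rejected up front.
apiVersion: v1
kind: Namespace
metadata:
  name: team-cello              # illustrative team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-cello-quota
  namespace: team-cello
spec:
  hard:
    requests.cpu: "16"          # at most 16 CPUs requested in total
    requests.memory: 64Gi       # at most 64 GiB requested in total
    pods: "50"                  # hard cap on pod count
```

RBAC RoleBindings scoped to the same namespace would then keep each team's users inside their own quota'd space, which is the combination ("plus RBAC") suggested above.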
And you're sort of expecting that everybody's going to contribute stuff. Well, I don't know. I mean, maybe, but like you said, that stuff costs money, right? We're paying for, I don't know what it is, DigitalOcean or whatever the hell, what is it, Ry? It's VEXXHOST. VEXXHOST, thank you. You know, so we're already paying for a cloud, and now we're going to pay for another one. I'm just... Chris, the implication is obvious that we're going to be scaling back VEXXHOST, right? That is the plan. We will be turning it off. But again, basically you're saying that we have to transition, and I just don't see where we get the resources to pull that off without dropping huge swaths of feature development on the floor. Yeah, I mean, that's a fair criticism, and obviously, like you said, we should have that conversation. The fabric project is huge by comparison to something like Cello or Explorer. So, yeah, for them it would be fairly straightforward, but holy crap, we've got more tests than you can imagine, and moving all of that over is not going to be some, you know, just-wave-our-magic-wand-at-it problem. There are a number of people asking about motivation over in chat, and I don't think you've seen that since you're on a phone. Yeah, so I'm sorry. The motivation is that there's like one team, the fabric team, that has successfully used the existing CI/CD platform, and we have four others, maybe more, I don't know exactly, but basically everybody else who has made a run at CI/CD, looked at what we have, tried Jenkins, you know, did JJBs and tried to make a Jenkins-based setup, failed to some extent, or ran into enough roadblocks that it was just easier for them to set up their own on their own systems. So from my perspective, my mission is to make sure that our software delivery is solid. Okay, and I don't have visibility into the CI/CD platforms that everybody's setting up.
I mean, the best example would be what Sovrin did, right? They stood up GitLab, and it's publicly accessible, and I have an account on there, and I can go in and see their builds. But, and the other motivation being, we're paying ridiculous amounts of money for our existing CI/CD system, and it's not self-service. Any time there needs to be a change to it, or something's wrong, a missing plugin or whatever, it comes through us. Now, I realize that's our job, to at least field some of that, but it seems like it's very difficult to manage, very expensive, not very user-friendly, and new projects coming in struggle with it. I mean, there are just lots of problems with our existing system. Now, I have gone out on a limb and proposed we use GitLab. That may not be the ideal solution. As Chris has pointed out, there are a lot of downsides to it, so I want to have this conversation, and I'm just making a proposal to get that conversation going. I just want us all to recognize that our existing system is far from ideal, that there are a lot of problems, and that it's time we address it and try to find a solution. I've been looking at this problem for months, trying to find something that works, and I hadn't found anything until I came across GitLab at Mike's suggestion, and it was like, holy cow, you know, we can crowdsource a lot of this stuff. Going back to, I mean, thinking about what commonality exists between, say, what Chris and fabric are needing and what I've just said, what we would really need almost more would be a persistent Kubernetes cluster. Now, is that something that everyone could use? I mean, the way that I would get our CI onto that, I wouldn't even need a GitLab setup outside of Kubernetes. We have all the Helm scripts; all of that stuff I could lift directly into a shared Kubernetes cluster straight away. Equally, you ought to be able to run Jenkins in it.
I mean, or even if you don't, you can use that cluster rather than having to manually stand up Jenkins. That seems like it would have immediate utility. I could run GitLab in the cluster, like we do already, but just make it publicly accessible for the builds and all that stuff that Sovrin has done. So, Silas, if we do a Kubernetes cluster, it is either/or. Kubernetes clusters are so expensive to provide at the level of service we would need to even get to where you're talking. We would have to turn VEXXHOST off, like, tomorrow, to spin up the Kubernetes clusters. So what I'm proposing is we keep our existing VEXXHOST stuff, you know, the existing CI/CD platform. We stand up a fairly small instance of GitLab and a couple of small runners, and then allow the teams to start using that and crowdsourcing the resources. I mean, if you use the Docker runners on GitLab, people with spare computers at home can contribute resources. There's no reason these have to be servers anywhere. Like, yesterday, just to run through this, to verify that this isn't a crazy idea, I had a spare machine under my desk, and, you know, I ran through it, and in less than an hour I had it making builds as part of the Sovrin cluster. It's very straightforward. And the only reason it took an hour was because I had to go through Mike to make sure that I had proper account access. So, I mean, once you get the join token, it takes less than five minutes to stand up a runner and contribute some computing resources to a cluster. So we can do VEXXHOST and GitLab. Because GitLab, with a persistent GitLab server and a couple of runners, is going to be less than a thousand dollars a month, probably less than five hundred dollars a month, if we do it on, say, Amazon or DigitalOcean or something. And then the rest of the resources would come from the teams, and we'd have a way to move forward for the teams that don't have a way to move forward today.
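The five-minute, join-token registration of a spare machine as a Docker runner would look roughly like the following command. The URL and token are placeholders you would get from a GitLab admin, and the image and tags are illustrative:

```shell
# Hypothetical registration of a spare machine as a Docker-executor
# runner; the URL and registration token come from the GitLab
# instance's admin or project CI settings.
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.org/ \
  --registration-token "<join-token>" \
  --executor docker \
  --docker-image ubuntu:18.04 \
  --tag-list "docker,community"
```

Once registered, the machine polls the GitLab server for jobs matching its tags, which is what makes the "crowdsourced compute" model sketched above possible without opening inbound access to contributors' home networks.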
But if you need a Kubernetes cluster tomorrow, we're going to have to seriously talk about where the resources go, because we can't do both the Kubernetes cluster and VEXXHOST. Does that make sense? That's why we're not talking about Kubernetes clusters, because originally that was the direction we were going to go in: let's just have an EKS on Amazon or whatever. But for timing-sensitive tests, random resources like a machine out on someone's desk won't work. I mean, they might work for localized tests. But no, I mean, I understand the position. Yeah, I appreciate the feedback, everybody. One of the other motivators was also the test sets for the cloud interoperability. People were asking about motivators; that's another one. So, aside from the initial conversion, if we decide to go forward with this, how much of this is pushing more work back into the projects? In other words, how much of the CI/CD does Hyperledger run now for a small project versus what will be required if we go this way? Well, that's the thing. We don't. We don't actually run any CI/CD for small projects, because they haven't set it up. They haven't been able to. No, no, no, but even for fabric, the Linux Foundation is not running the fabric CI/CD. What they do is they are running Jenkins and Gerrit and Nexus and whatever else. Those are, you know, the tools that the LFIT are providing. And when they need configuration changes, just like when we were asking for configuration changes with JIRA and so forth, the LFIT have to make them, because they've locked down all the administration of those particular tools, right? But the actual CI/CD is controlled and managed by the fabric team, at least in terms of what fabric is doing, not the LFIT, although the LFIT does help us in configuring, like you said, the missing plugins and making sure that the base images and the VMs have all the right bits and pieces in them, right?
But that's not running CI. CI is run by the fabric team. We decide what tests are run, how they're run, and so forth. And the only changes that have to be made by the LFIT are when we have to change the configuration of Jenkins itself. Now, we did start with JJBs, and I think that was a mistake. We're transitioning to Jenkins pipelines, which means that basically the JJBs just say run this pipeline, and then the pipeline is scripted by the development team. That's how it should be, right? And that's basically what you're saying we're going to try and do with GitLab and so forth, and it would be the same as with Travis, for instance, right? Really not much different. But my point is this: there's still a humongous amount of already-built stuff, and moving that over to a completely different build environment is going to be an enormous undertaking. And I don't see when that would ever happen. I mean, LFIT is actually looking at changing some of their access and processes for upgrades, and so some of that is going to be changing regardless as they go through that process. Because, as we all know, the system is pretty old and it's been around for quite some time. And right now they are trying to figure out what their transition points are going to be. In fact, I believe, Ry, am I correct that that's on the roadmap for June? I don't know the date. Okay. So, Chris, where do the computing resources come from for running all of the fabric tests? VEXXHOST. Right. So the entire community is subsidizing the running of servers that really only fabric is using. Yeah, I mean, I think at the end of the day this is ultimately a budget question, right? I think if this wasn't an either/or, then we'd say, you know, great, let's have the N+1 tool that services the other projects and doesn't place any additional burden on the fabric project. But clearly part of the motivation here was a sort of equality, right?
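The "JJBs just say run this pipeline" pattern described here can be sketched as a minimal jenkins-job-builder definition that delegates everything to a Jenkinsfile kept in the project repository. The job name, repo URL, and branch below are illustrative, not the fabric project's actual configuration:

```yaml
# Hypothetical JJB stub of the pattern described: the job definition
# only points Jenkins at the repo; the pipeline itself lives in a
# Jenkinsfile owned and scripted by the development team.
- job:
    name: example-verify
    project-type: pipeline
    pipeline-scm:
      scm:
        - git:
            url: https://github.com/example-org/example-project
            branches:
              - '*/master'
      script-path: Jenkinsfile
```

With this split, LFIT-administered Jenkins configuration changes become rare, because day-to-day changes to the build live in the team-controlled Jenkinsfile rather than in the job definition.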
So, you know, clearly fabric has their CI/CD funded by the Linux Foundation, whereas other projects are unable to use that system and are having to foot the bill themselves. So I think ultimately what we need to do is go talk to the maintainers and get their feedback. It sounds like both Burrow and Indy would benefit from this. I'm not sure we have an understanding of exactly how this would help some of the other projects. So it seems like a good first step is to sort of gauge community sentiment among those, and then at that point I think it's really sort of an allocation of resources. I do agree with Chris that saying, hey, even though fabric's been using Jenkins for this long, this new project X doesn't like that tool, and so now fabric has to switch, doesn't seem like the right direction either. Right. But I agree that we have to figure out how to support the entire community. Yeah. And what I'm taking away from this is that we shouldn't take this lightly, and I need to spend a lot more time fully understanding the fabric pipeline and figuring out what the transition would look like from what fabric has to a GitLab-based system. I won't say whether there's an easy way, because it's not going to be easy. And then we'll have some discussion around maybe potential other solutions. Like, I'm all ears: if there's something better than GitLab, let's talk about it. I mean, we first looked at the EKS cluster, the Kubernetes cluster in Amazon, right, or any cloud provider for that matter, and the main reason that wouldn't fly was the amount of cost and the amount of administrative overhead. Now, yeah, I basically need to do my homework, right. I came into this a little bit ignorant about fabric, I guess.
And the scope of what it would take to transition them over to this system. And thanks, Chris, for bringing that up and pushing back. I really appreciate it. That's why we have these discussions. So, yeah, I've got homework to do. This isn't going to be final; it isn't like I'm calling for a vote and we're going to do it tomorrow. But we know that where we're at now doesn't work, and we need to figure out collectively which direction we're going to go in. So I'm just offering GitLab as an opening position, and we'll go from there. Dave, could you also add to the homework a look at whether it would be financially feasible to run a low-powered Kubernetes cluster? If we need to have companies donating node pools, that's something that's doable. But we don't spend a huge amount of compute resource running ours. I mean, I know that's just for one project, but we run quite a lot of stuff on it. For me, I think that would probably be more useful than GitLab. I also posted in the TSC room that GitLab's top tier is free for open source. So we could potentially use public GitLab for that coordination layer, and then maybe it even comes with runners, I'm not sure. Yeah, I didn't ask them if we could host on their stuff. But I was just planning on running it on our infrastructure. And since this was limited in scope to just CI/CD, the Community Edition has the full set of features we need. I mean, it integrates with our Active Directory and LDAP stuff, so we can use our LFIDs to log in already. And it has the CI/CD stuff that we're looking for. So I really wasn't planning on using any of the other GitLab features. It has really great support for external projects: you just give it the URL to where the Git repo is, and it does a really good job of keeping an eye on that Git repo and running jobs. Beyond that, they also have a chat integration.
So there's potential we could hook it into Rocket.Chat so that we can drive the CI/CD from Rocket.Chat and get responses back there. It also integrates very well with JIRA, which then by proxy integrates with our Confluence. So we could have CI/CD... So, we're at the top of the hour. I think a couple of action items to wrap up would be good. One, for the representatives of the various Hyperledger projects: if you have someone on your team that is sort of leading the CI/CD activities, have them get in touch with Dave. It sounds like, you know, Silas, for Burrow you've got some distinct requirements that would be useful to capture; obviously fabric has their needs, and there will be others. So one action item would be to connect with Dave so that he can start collecting that information. And then the other piece, maybe moving forward as a backlog item, Solona: we could have a discussion at some point in the TSC on where we could use budget allocation on the technical resourcing side, whether that be things like the contributor summit or CI/CD. I think we need to have a clear and coherent position, if something's going to be brought to the board, around where the TSC feels there could be additional funding. So what you're saying is we're launching the CI/CD working group today. So, we're at time here; I've got to drop. I've got another call. Okay, all right. Yeah, thanks, everyone. Appreciate it. And we'll talk next week. All right. Thank you.