All right, welcome everybody. It is October 5th and this is the Hyperledger Technical Oversight Committee call. Everyone is aware of the two things that we must abide by: the antitrust policy notice that is currently displayed on the screen, and the code of conduct that is linked in our agenda.

For announcements today, we have the standard developer newsletter that goes out each Friday. If you have anything that you would like to include in that newsletter, please leave a comment on the wiki page that is linked in the agenda. The second announcement is a workshop scheduled for October 12th called "How to Create a Currency Management Application and Deploy It on a Hyperledger Fabric Network." If you are interested in attending, please register for that workshop at the link in the agenda. Any other announcements that anybody has or would like to make today?

For quarterly reports, we do have the Besu report that is out there. I think we're starting to get a number of people who have reviewed it, but are there any questions or comments on the Besu report? No? Okay. For the Caliper report, I did put a message on the issue that we create to let people know that the reports are due, and I haven't heard anything back on that particular issue.

Thanks, Tracy. I know there is an active email thread going on, which David started with the performance working group. Just curious if he has heard anything from the current Caliper maintainers. That was for David, right? Right. David Boswell, have you heard anything from the Caliper maintainers in your ongoing emails with the performance working group and the Caliper maintainers?

I did actually ping them yesterday. I haven't heard back from that yet, but a couple of months ago we had some back and forth and there was interest in doing some stuff early next year. It sounds like people are going to be busy for the rest of this year, but there was talk of doing a meetup or, excuse me, a workshop early next year. So I had recently heard from them, but not within the last month or two. Gotcha. Thank you.

All right. So we will continue to try and reach out to Caliper on different channels. Arun, I don't know if there's a particular channel that you think would work well, but maybe you could reach out and see if you can get in touch with the existing Caliper maintainers.

All right. For upcoming reports, next week we have the Cacti and the Fabric reports that are due, so we'll look to see those coming in next week.

So then for discussion items today, we have two. The first is to take a look at the TOC election schedule; it is a requirement that the TOC approve these timelines as they come in. And the second topic will be a task force discussion.

So with that, this timeline should look fairly familiar for folks on this call. Obviously, you all ran last year and went through this process. The dates are just updated to reflect the new year, as well as some learnings we had last year around timing and the amount of time that's necessary. I think the first change is that instead of giving an entire month for the TOC nominees to nominate themselves, we're just giving two weeks. It seemed like what ended up happening last year is that people waited until the end of the month before they filed.
And so we had talked last year about just shrinking this, and we did do that for the dates. We will be starting the timeline October 16th and running it through the end of October. For the next section, we have the election process, which will run from November 1st through November 14th; we'll again use Helios voting for that. And the last thing we have is the appointment process by the governing board. If you recall, we have six people who are elected by the maintainers of the different Hyperledger projects, and the remaining five are appointed by the governing board. Their timeline is November 15th through December 4th; I think we gave them slightly longer just because of the holidays that fall in there in the United States. And then the last thing is that after we have the 11 TOC members, they will vote for chair and vice chair, and that runs from December 4th through December 15th. Anything else that you would like to add, since you were the one who put this together? No comment. Okay. Any questions from the TOC members?

Okay. So, Ryan, do we need to vote for this? Is that the right step? Sure, yeah, I think we do have to approve it. So if someone would like to propose the motion. Right. And I think that was Dave, was it? I think so. I went off mute to do it, but somebody beat me. No, that was Sean saying, can we get a second? I second. All right.

Okay. On the matter before the TOC, which is the election timeline: Stephen Curran, how do you vote? Approved. Okay. Peter? Yes. Marcus? Approved. Yes. Bobby? Approved. Yes. Approved. No. Tracy? Yes, I approve. And I think, Ryan, you were breaking up, but I think you did say yes. Didn't hear, so yes. Okay. The matter before the TOC is approved. All right. Thanks, everybody. So we will start the process then, in just under two weeks.

Okay. And then the second item on our agenda is the task force proposal for security artifact signing. So I think, Arun, we will hand this off to you. Arun, would you like to drive? Arun, you're on mute. Can you guys hear me? We can hear you now. Yeah. Okay.

Thanks, Tracy. So, everyone, this is the second time we are meeting on this topic of security artifact signing. In the previous discussion, we went through some of the principles behind the Sigstore implementation, and we also saw a sample response of what it produces and some of the issues that we need to focus on. In today's discussion, it would be nice if we can expand on those items and see if you have any ideas. But before getting started, maybe we can have a refresher of what we discussed last time, since it's been a while. We'll quickly refresh and then start on the topic. If I may share the screen. You've got it.

Right. So, a quick refresher, or a summary of what we discussed on the previous call: why do we need signing for artifacts, and what kinds of artifacts can be signed through Sigstore. Then there are the different components that Sigstore itself has. For instance, there is a component called Fulcio, which acts as a CA for us and issues a short-lived certificate. Now, there were discussions on why we need a short-lived certificate.
The reason being, we want to generate a signing key which we don't want to maintain for a long period; however, we do want to establish credibility for that signing key. Since we are all familiar with blockchain technology, one way we could have done that is to generate a short-lived key and put the public key information required for proving the signature on a blockchain. This is a very similar concept: a certificate is generated which has a specific validity, and that validity information is captured.

Then there is the component called Cosign. This is a CLI component; it allows us to include the commands needed for signature verification, or for producing the signature itself, in our GitHub Actions CI pipelines. The other component is Rekor. Let's say it's the equivalent of a blockchain ledger, but it's not really a blockchain: it's a publicly visible transaction log, auditable information. Again, there were debates on who is currently maintaining the public instance; I believe we all agreed that at least in the initial phase it's okay for us to go with what's available through the public installation. The other component we discussed is the Policy Controller. This is for those who want to verify the authenticity of the binaries, the artifacts that we have produced. This particular component is available for Kubernetes, but those who are not on Kubernetes, or who would just like to verify, can always use Cosign to verify the signature on the artifacts.

We also briefly discussed the advantages. In terms of authentication, there is OIDC support, which allows us to have an organization-wide identity. For instance, the Hyperledger Foundation as an administrator, or the community architects, could maintain an identity that is shared across the different repositories producing artifacts, and that identity can be used in the signing processes.

The other things, or challenges, that we discussed: when a signature is produced, where is it kept, what's the responsibility for maintenance, what information do we need to store, and how is the final artifact stored? For instance, in what I tested from my personal experiments with container image production, when we signed a container image it worked well, in the sense that we did not have to explicitly remember anything or store additional information related to the process. But when I tested it with a blob file, which is eventually what some binary we produce would be, the generated artifact had to be stored somewhere for me to go and verify it later. So that's one open question, and personally I don't know the answer, whether we need any special tooling or anything to be taken care of in order to use this for blobs. If you have any information, that would help. So that's the gist of what we discussed. Any thoughts from anybody before we dig deep into some of these aspects?
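As a concrete sketch of that open question, here is roughly what keyless blob signing looks like with the Cosign v2 CLI; the file name and signer identity below are hypothetical, and the bundle file is exactly the extra thing that has to be stored and published:

```sh
# Keyless-sign a release binary. Cosign opens an OIDC login flow and
# writes everything a verifier later needs (the short-lived certificate,
# the signature, and the Rekor log entry) into a single bundle file.
cosign sign-blob --bundle example-tool.bundle example-tool.tar.gz

# The bundle has to be published somewhere, e.g. attached to the
# GitHub release next to the binary; the verifier pins the expected
# signer identity and OIDC issuer.
cosign verify-blob \
  --bundle example-tool.bundle \
  --certificate-identity "user@example.org" \
  --certificate-oidc-issuer "https://github.com/login/oauth" \
  example-tool.tar.gz
```

For a container image, by contrast, the registry stores this material alongside the image automatically, which matches the difference Arun describes.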
Arun, can you tell me what it means by short-lived certificate? What's the implication of that? What does short-lived mean? Right. So, and Hart or others, I know you're involved, feel free to pitch in.

So my understanding of the short-lived certificate, and the reasoning behind it, is this. Let's say we would like to sign our artifact. The challenges we face are: how do we store our signing key, and what if that signing key is compromised? And if it gets replaced with a new signing key, how do we keep a trail in our records saying, here is the previous key; since this artifact was produced in, say, 2022, use this public key for verification? Here, the idea is that we generate a signing key, and the public key associated with it is put inside a certificate, which is generated by Fulcio. The reason for making it short-lived is so that we don't need to store the key anywhere; it can live in runtime memory while the CI pipeline jobs run. The certificate itself is stored in Rekor, so that the tooling, Cosign or the Policy Controller, can pull that information from Rekor and understand which certificate to use to verify the signature. So that's my understanding of why it is short-lived. It is also a safeguard, so that we don't end up with a key that is valid longer than we require for the signing process.

So it's a one-time-use key, essentially. You generate a certificate, you hand out the certificate, you use it once to sign the thing, and then it's done. Okay. I believe there is an option for us to control how long the key, how long the certificate, can be valid, the default being probably 10 minutes. Yeah.

Yeah, there are a lot of steps there that I don't understand how they would work, but hopefully I'll learn more as we go. Well, like what? I mean... Well, okay, so I've signed a binary. I get a binary; I want to authenticate it, so somehow I have to resolve it. These are all the same things I think of when I go to identity. I've got to resolve some identifier associated with the binary to find the key associated with it. Then I have to get the public key to verify that the signature matches. I've got to know where the signature is, and then I've got to figure out why that signature is bound to the issuer of the binary. What's the binding, and how do I figure that out? I don't see how all those pieces fit together in this. I'm assuming they do; I just don't see how.

Yeah. I mean, you have to provide the credentials along with the artifacts so that people can check. The only thing new, really, is these ephemeral keys that you use to sign, which free you from having to store the private key forever. Instead, they replace that with this transparency log, against which you can check that this was signed, and by which identity. Whoever did that had control of that identity at that time, and they don't need the key to show that, because it's in the log. And the log, as Arun alluded to, is really a ledger. The thing is not decentralized; that's why it's not a blockchain, right? Right. But it uses the same kind of mechanism internally to make it tamper-resistant.
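To make that flow concrete (ephemeral key, Fulcio certificate, Rekor entry), a minimal keyless sign and verify of a container image with Cosign v2 looks roughly like this; the image name and signer identity are placeholders:

```sh
# Sign: Cosign generates a throwaway keypair in memory, obtains a
# short-lived certificate from Fulcio binding the public key to the
# OIDC identity, pushes the signature and certificate to the registry,
# and records the signing event in the Rekor transparency log.
cosign sign --yes ghcr.io/example-org/example-image:v1.0.0

# Verify: no private key is needed. The flags pin which identity and
# which OIDC issuer must appear in the certificate, and Cosign also
# checks the Rekor log entry.
cosign verify \
  --certificate-identity "user@example.org" \
  --certificate-oidc-issuer "https://github.com/login/oauth" \
  ghcr.io/example-org/example-image:v1.0.0
```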
I see Arun has raised his hand. So, Stephen, adding on to the explanation: in my personal experiment, I was able to get the required information for verification with GHCR. I did not need anything other than the identity that I used for signing; most of the other information it was able to pull in and fetch. Okay. But for the blob, that's where I had to explicitly remember and store that information, or at least remember what information to pull from Rekor. Rekor had all the information needed for verification; I just needed to point to that entry in Rekor.

Hart? Yeah, I was just going to say, Stephen, you're exactly right. You're not going to have an identity system built on a ledger here. And so you can go to town criticizing it as much as you want, because it is not as advanced as, say, an identity system on a ledger like the ones you have been building. But it is sort of what we have right now. And I will say, if you have points on the architecture, feel free to join the Sigstore group.

Yeah, it's not necessarily that I'm criticizing; I just don't see how the pieces all stick together, and that's probably because of my background. Your complaints are totally valid; they're known to us too. They just haven't had the engineering work put into this, and people are happy for now with, you know, a centralized ledger for this. Yeah. I mean, as Arnaud and Arun explained, the short-lived certificates are very desirable, and I think they did a good job of explaining that. But yes, if you want to build the next implementation of Sigstore, you should get in touch with them.

Yeah. But I think it's worth playing around a little with Cosign; it's very easy, and you'll get a feeling of what it's like. I mean, the fact is you can sign anything you want. There is this thing called signing a blob, which can be anything. But then the question becomes, okay, how do you find that credential information to check? Right. Exactly. And I'm going through and thinking, yeah, hash it, and now you've got an identifier. Yeah, exactly. So depending on the type of artifact, the method varies. If you have a container image, you can associate metadata along with the image in the registry, and they use that. And like I said, if it's just a blob of data that you put on, say, a GitHub release, then it's just another attachment next to the binary you release.

Yeah. And I think that's part of what I was thinking of: for each of these things, there's got to be some method for the verifier to know how to find the signature, and then how to find the key associated with it. It could be beside it; it could be in it. Those are the different things I'm thinking of. It could be that you take a binary, and if you're finding a binary somewhere on the internet, you could hash it and thereby find a unique identifier for it. So there's a variety of ways. For example, does each type of artifact have a standard way of saying, here's where the verifier would find the signature for this? And then, here's how they would take that signature, trace it back to the public key that signed it, and then back to the identity that did it?
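On the first half of Stephen's question, where the verifier finds the signature, the container case at least has a fixed convention, and Cosign can print the location it uses in the registry (image name hypothetical):

```sh
# Ask Cosign where the signature for an image lives. By convention it
# is stored in the same registry as a separate OCI artifact tagged
# "sha256-<image digest>.sig", right next to the image, so any
# verifier can derive its location from the image digest alone.
cosign triangulate ghcr.io/example-org/example-image:v1.0.0
```

For blobs there is no such convention, which is why the bundle has to be published explicitly.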
So the way the identity tracing back works is that it's done through a CA. It's just that link between the artifact itself, its key, and then the public key that is used to verify it. Yeah. And the other thing is that when you use Cosign to sign, it actually prompts you. The default is to use your GitHub ID, and it will actually connect to it, because it uses OpenID Connect, right? So if you do the experiment, it's kind of interesting: your browser will prompt you for your login if you're not logged in already. So they do check at that time that you're in control of that identity, and that's the identity attached to the signature, in the certificate attached to the artifacts. And like I said, depending on the type of artifact, the method to find the credential varies. For containers, it is standard, because it depends on the registry; they've defined a way to do that, and everyone uses it the same way. If it's just a blob, well, then it depends on you. The URL I'm looking for is sigstore/cosign; that's on GitHub, sigstore/cosign. Yeah, or you can go to sigstore.dev. Okay. Cool. Thanks. Thanks so much, Stephen.

So I think that's one of the bigger questions we will need to find an answer to. I like the idea of utilizing GitHub releases and attaching additional files for verification purposes, and maybe in the notes, or in the documentation, we can say: here's how somebody can go and verify, and these are the verification files attached alongside the release. This may work for the artifacts that we produce within GitHub releases, but I'm not sure about, let's say, Fabric releasing a Java SDK that gets stored in a Maven repository. How do we produce verification information for that? Having said that, Sigstore does have a plugin for Maven for verification purposes; I haven't experimented with it. I'm not sure if I'm audible. Now you are. Okay.

So, yeah. The other thought I had, in order to overcome this, was at the organization level. Since we discussed that the community architects would, in a way, maintain an identity required for all the pipelines, and since we have also standardized how the pipelines are run, mostly using GitHub Actions, we could have an organization-wide policy for somebody to verify artifacts.
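For the Kubernetes side of that organization-wide idea, the Policy Controller from the earlier refresher expresses such rules declaratively. A rough sketch of a ClusterImagePolicy, written from memory and with the org, repository, and workflow names as placeholders:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-org-signatures
spec:
  images:
    # Apply the policy to every image pulled from this organization.
    - glob: "ghcr.io/example-org/**"
  authorities:
    - keyless:
        identities:
          # Only admit images whose Fulcio certificate was issued to a
          # release workflow in this org via GitHub Actions' OIDC issuer.
          - issuer: https://token.actions.githubusercontent.com
            subjectRegExp: "https://github.com/example-org/.*/.github/workflows/release.yaml@.*"
```

Non-Kubernetes consumers would enforce the same identity check with the cosign verify flags shown earlier.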
Any thoughts from others? Or does any project currently follow some other process for signing artifacts? Any thoughts from you? I'm not sure how to take the silence, so I'll assume, yeah. I'm in agreement with you. And, you know, I don't think any projects are doing any artifact signing at this point; this is something we need to be doing. As Stephen points out, it's not the most optimized solution, but hopefully it is easy, and what we're looking for here is something painless and simple for projects to set up. Yeah, I mean, I guess everybody's probably in agreement. The question is, how do we get there? What are the steps that need to be taken? Is this something that you're asking the community architects to do? Is this something that you're planning to put together a sample GitHub Action for? I guess I just don't know what the next steps are. Right. So I would thank Tracy for calling that out.

I think that makes logical sense. Now that we are in agreement that we are okay to start using Sigstore, we need to figure out how we adopt it across all our projects, how we simplify the effort, and whether any gaps are indeed present for our adoption, right? Here is what we can do, and feel free to comment or chime in. Maybe we could pick up action items for setting up sample GitHub Actions workflows, like the sketch after this discussion. We'll have to divide this up: let's say one project picks it up for their container image generation; another project can pick it up if the main artifact they produce is, for instance, a binary. So they can take another action item. Similarly, we need to figure out, wherever we are releasing, whether there is an implicit way of verification. That will let us come up with a list of places which may not have that option, and then it gives us a picture of how to provide the verification information for those. I can think of the Cargo package manager for us, and maybe Maven; I'm not sure if they have an implicit option to store this metadata. GitHub Container Registry does, right? So, any thoughts from others? If we were to pick up an action item, each project could do an experiment and see if they face an issue. We'll create a playbook of the commands to run; it's probably pretty easy experimenting with Sigstore itself, but we could also create a playbook and I can share it across. We need volunteers across projects to check if this works. And how about we list which artifact registries we currently use?

Docker Hub is also one. Interesting, which project continues to use Docker Hub? We still use it in Fabric. In fact, I thought we hadn't settled on whether we could go to GHCR because of the limitations on the uploads. Limitations in what way? When we talked about this during the project best practices discussion, I thought we didn't conclude. I think we concluded that GHCR was the right direction, but that there might be issues around the size and frequency of the uploads. I think we thought that GitHub wasn't enforcing the limits at this time, but that they might in the future, and that was what was at least holding me back on the Fabric side. We're already set up for billing for this, so I would say go ahead and switch to GHCR, and if the bill goes to more than $1,500 a month, then I will say something. But right now the billing for storage has been on the order of less than $1 a month. So give it a whirl. Okay, sounds good.

We're also using PyPI and npm, which I assume would fit. Also crates, yeah, Rust crates. I assume this covers the bulk of what we have. Does anybody generate something that gets put on GitHub directly, maybe a CLI or something? Anybody? Yes, David. Yeah, we do attach binary release artifacts to GitHub. I know, since I worked on it long ago, the Sawtooth project used to have a Debian release. Does any other... I know Fabric also had native Debian releases, maybe in the initial versions. We are also missing JFrog; I know a couple of projects, Fabric I think among them, are using JFrog. Hey, feel free to add more items to this list, but I think this gives us a good picture. If we can experiment across all of these, we need volunteers who can try it out.
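For the sample-workflow action item, a minimal GitHub Actions release job doing keyless signing could look something like the sketch below. The workflow and image names are made up; the key detail is the id-token permission, which lets the job authenticate to Fulcio with GitHub's OIDC token:

```yaml
name: release
on:
  push:
    tags: ["v*"]

jobs:
  build-sign-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # push the image to GHCR
      id-token: write   # mint the OIDC token needed for keyless signing
    steps:
      - uses: actions/checkout@v4
      - uses: sigstore/cosign-installer@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/example-org/example-image:${{ github.ref_name }}
      - name: Sign the pushed image
        run: cosign sign --yes ghcr.io/example-org/example-image:${{ github.ref_name }}
```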
So maybe the first action item could be creating a playbook with the list of commands we would execute, so that we know exactly what we are trying to do when multiple people are experimenting across these registries. Maybe I can take that action item on myself; that should be fine. Or if others want to volunteer, please let me know. And we need volunteers across these registries. Any takers, or should we start designing?

So, I don't know what you mean exactly by designing. I mean, I believe most projects, if not all, use GitHub Actions now. Basically, you need to change your workflow for the release so that you sign the binaries as an extra step. So are you suggesting that we create a GitHub Action, and when we sign, store the information needed for verification in the GitHub releases, since all of us are using GitHub Actions? No matter which of these registries is used, or where the artifact is pushed, we store the verification info under releases. Is that what you're suggesting? Well, as a first step, I think that's pretty much what I would do, yeah. That sounds like a logical next step for us. Hart? Yeah, I just agree with Arnaud. Perfect. So until we do that, I at least don't have additional topics to discuss in the task force. Anybody else have any other comments?

Just one thing I want to share. There's one thing that's a bit... it depends how you feel about it, but in my experience, and I was playing with this stuff a little over a year ago when I first started: you start using Cosign just to fool around and experiment with it, and you may not realize that it actually creates an entry in the public transparency log, and it's there forever. Cosign will actually say something like, hey, beware, we're going to upload some identity information, and there's just no way to delete that. So if you look up the Rekor public instance, you could find the entries that I registered just playing around. And it really doesn't matter; nobody's going to blame you for it. But after a while of fooling around with it and doing experiments, I felt self-conscious about it. So, you know, you can run a local Rekor instance and redirect the registration to that transparency log instead of using the public instance. You use your own, and that way, at the end, you can just delete it, and you haven't littered the public registry with all your little experiments. Cosign will let you specify that: by default it goes to the public registry, but you can override it with an argument and say, no, connect to that server instead. If you're using GitHub Actions, you're going to have to make sure this is publicly available, which may not be easy. But that's what it would take. So I just wanted to share that.

I see where you're coming from. Hart? No, I'm going to have to go look into what you did now. But no, I think it's fine to play with it. Yeah, it's nothing really exciting, I'm afraid; it's like signing artifact one that has an echo foo in it, or something. A file with echo foo, that kind of stuff. It's not very exciting, but it's there now. You can actually use your own private repo and play with it. That's what I did.
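For anyone who wants to try what Arnaud describes, the override is a single flag. A rough sketch, assuming a rekor-server running locally on what I understand to be its usual port:

```sh
# Send the transparency-log entry to a private Rekor instance instead
# of the public log at https://rekor.sigstore.dev, so experiments do
# not leave permanent entries in the public transparency log.
cosign sign-blob \
  --rekor-url http://localhost:3000 \
  --bundle scratch.bundle \
  scratch.txt
```

When the experiment is over, the local instance and its log can simply be deleted.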
Okay, thank you, Arun, for taking us through the task force. Are there any other topics that anybody would like to discuss before we close out the TOC meeting today? No? Okay. Well, I hope everybody has a great week, and we will see you again next week. I think, Bobby, you're up; you said you wanted the October 12th slot. Is that still accurate? Yes, it is. We will make sure that we have time for that next week. Have a great week, everyone. Thanks, Tracy. Thanks, everybody. Thank you. Bye.