Welcome to the July 17, 2023 AnonCreds Specification Working Group meeting. On the agenda: PRs to review, and then I want to talk about AnonCreds and the W3C verifiable credential format. BC Gov may be pushing forward on getting more code behind that, so I wanted to get feedback from the community, to make sure that if we are producing that, we get to the right thing. After that, open discussion; we can talk about anything people want to raise. We are recording, so I'll post the recording after the meeting. Reminder: this is a Linux Foundation Hyperledger meeting, so the Linux Foundation antitrust policy is in effect, as is the Hyperledger code of conduct. Anyone new to the meeting and new to AnonCreds, feel free to grab the mic and introduce yourself. I don't think I recognize Mateo, but welcome, Mateo. Right. We've got a few PRs to review, so I wanted to get those done; I took a look through them today and saw some things I wanted to talk about, so let's get into the PRs we have. We've got four of them, as noted. This one was put in quite a while ago, and we've had a couple of rounds of back and forth on it. It has to do with adding a link secret. Whoops, let me get these to the right spot. Okay. I don't know, Mike or Richard, if you've had a chance to look at these, but it would be good. It looks like not much has changed in most of these. Oh, I see a minor fix there in one of the variables. This is where I don't know enough about it, but there's a slight change there in content, and this content is added. Mike, would you be able to review this, or have you reviewed it, and is it accurate? I have not reviewed it; this is my first time seeing it, but it does look familiar. It looks like he's taking exactly what's happening in the code and putting it in the spec. Yes, that's what we're looking for.
Again, this is a lot of what Richard is doing, but we do want to make sure that what goes in is correct. I can follow things up to these parts, and I've got a few comments on some of the other PRs from that perspective, but I don't know enough to interpret this. I guess with these PDFs we can't point to where in the spec we should be looking to find the definition. That's one of the things I noticed you doing as well, Richard: pointing to the PDF as opposed to the specific section within the PDF. Even if it can't be linked directly inside the PDF, which I don't think it can as the PDF is now, we should at least be pointing to where to go. So Mike, maybe you take a look at these. I also wondered why this didn't get rendered properly. Yeah, and the line above it too. You know what I think it is: the markdown renderer is pretty finicky; you can't have spaces between the double dollar signs. Yeah, I think that's what's going on; it sees the spaces. One of the things we could do is simply accept the PR and make the tweaks to get those fixed. That would be easy enough if the rest of it is more or less right. Yeah, I think that's all he did, so those sections are relatively easy to check. I assume you can't do this on the fly? Can't do what? Evaluate whether he's got these things correct or not. I've got the version up on my screen somewhere else, so I'm just taking a look. So where he put the r_cap values: those aren't used here. I think they're only used for the key correctness proof; otherwise they aren't used anywhere else. So where it says r_caps in that section, those are only used for the setup part. Oh, so it is used elsewhere? It's used, but not every time you do a proof; it's only used for the correctness part. So we should probably rip that part out.
Okay. Yeah, the unfortunate thing is you can't really comment right here very easily, so maybe we could make a list. We could come back to this, though. Okay. Yeah, so if we could add it there, that'd be good. What do you want to say? That r_caps belongs only in the key correctness proof, so it should be removed from here? Yep. I'm trying to decide why he's saying it's necessary to check. Okay, so this is issuance. All right, this is the primary credential. This is the primary credential? Yeah. This is the blinded master secret correctness proof. Okay, let's see. Why do we need to find the inverse of u? That doesn't make sense to me; I don't see that in the math anywhere, like line 320. I don't know why that's needed. Oh, I see what he's doing. Okay, so this reads a little weird. Probably what needs to happen here is a clarifying sentence that says: before issuing begins, we need to verify the blind signing request proof by doing the following. Then I would understand what's going on here. Say that again. Before? Before issuing, we need to verify the blind signing request proof. Yeah, as follows, using the following steps, because I was just wondering what's going on here. Okay, I see what you're saying. And not "blinded link secret correctness proof"; it's just a blind signing request proof. Maybe in the future you might have more. Okay. I think in the code it's called the blinded link secret correctness proof. What did you want to call it? I'd like to call it the blind signing request proof, because when we go to AnonCreds v2 that's what it is, so we don't have to change the terminology. Blind signing request proof. Yeah. Okay. If you've got attributes that you want signed blindly, you have to do a proof that you truly know what those values are, that you didn't just get them some other way. That's the whole point of it. Okay, this is more for another ticket.
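The check discussed above, including the inverse-of-u step that came up in the review, follows the usual sigma-protocol shape for proving knowledge of blinded values. The following is a toy, hedged sketch of that verification, not the AnonCreds implementation: the parameters are insecurely small, there is a single blinded attribute, and the names (S, R, n, u, v', m) follow the general CL-signature convention rather than the actual code.

```python
# Toy sketch of verifying a blind signing request (blinded secrets) proof.
# Illustrative only: tiny insecure parameters, one blinded attribute.
import hashlib
import secrets

n = 3233 * 7919  # toy modulus; a real CL setup uses a large RSA modulus
S, R = 5, 7      # toy generators

def fs_challenge(*vals: int) -> int:
    """Fiat-Shamir challenge over the transcript values."""
    h = hashlib.sha256(b"|".join(str(v).encode() for v in vals)).digest()
    return int.from_bytes(h, "big") % (1 << 32)

def blind(v_prime: int, m: int, nonce: int):
    """Prover: commit to the blinded values and build the correctness proof."""
    u = pow(S, v_prime, n) * pow(R, m, n) % n
    r_v, r_m = secrets.randbelow(1 << 64), secrets.randbelow(1 << 64)
    u_tilde = pow(S, r_v, n) * pow(R, r_m, n) % n
    c = fs_challenge(u, u_tilde, nonce)
    return u, (c, r_v + c * v_prime, r_m + c * m)

def verify(u: int, proof, nonce: int) -> bool:
    """Issuer: recompute u_tilde; the u^(-c) term is where the inverse of u
    (the step questioned in the PR) comes in."""
    c, v_hat, m_hat = proof
    u_inv = pow(u, -1, n)
    u_tilde = pow(u_inv, c, n) * pow(S, v_hat, n) * pow(R, m_hat, n) % n
    return c == fs_challenge(u, u_tilde, nonce)
```

An honest proof verifies because u^(-c) cancels the c-multiples of v' and m hidden inside v_hat and m_hat, leaving exactly the prover's commitment u_tilde.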
So I can likely put that in. Sure. The rest of this looks okay, other than that, once this gets in. Does the math look right? Let me see; I'm looking right now. That looks fine. That looks fine. I don't understand what line 325 is. He's just expanding it out? Yeah, he's just expanding it. Okay, that's fine. Okay, good. So in theory we could accept this and just clean it up. Yeah, in a future PR. Okay, I'm going to go ahead and accept it; that way we don't have to worry about it when he gets involved again. Okay. Tails file generation. Again, I'll start with just this view of it, since Richard's on the call. Whoops, I always do that. There we go. Okay, this is tails file generation, which is right here. So: change the list of primes to points on the curve. The biggest thing I saw here, based on the bug that was found in the generation, is that it sounds like there's one extra point included, and that was actually the bug that's in the implementation. Is that right, that there's one extra? Mike, are you familiar? I'm sort of familiar with what it's doing, but I'm looking at the spec on my screen just to compare what I've got here, to see if it's exact, and I wonder if it was just an off-by-one error. Let's see. What I understand the bug to be is that the first and, for lack of a better term, the middle points are the same. Oh, I know what it is. It is an off-by-one. So here's the bug. When you generate points, say I'm going to support L credentials; the tails file represents that many. You're supposed to create points at index 1 all the way up to L, and then L+2 all the way to 2L. L+1 is special because it's tied to the private key.
By including a point at L+1, it allows you to forge anything; that one is not supposed to be known. Okay, so the spec definitely doesn't include that detail. Yeah, so we could add it. Okay, so we need to add that for the reader. Are you taking notes here? Yeah, I've noted it. Okay, is that enough detail for you to be able to include it? Yeah, I just have to write that we are removing the middle point from the tails file, or else we have to show the math as well. Okay. So should I also show the math, or just say that the middle point of the tails file is removed because it's associated with the private key? Yeah, just say it's the L+1 index. Basically you're supposed to create points from 1 to 2L, but L+1 was special. For whatever reason; five years ago when I talked to Jan Camenisch, he had a decent reason, but now I don't remember what it was. I think it was because he wanted it to be less predictable: if you always pick the first one, it makes it easy to guess, whereas if it's somewhere in the middle, it isn't. Oh, I see. But he did want to include it? Yeah, it has to be there for everything to work out, but he said he just didn't want it in the first few indexes, because that was easy to guess. But now we're leaving it out entirely? Out of the tails file? Yeah, it was never supposed to be in there in the first place. Oh, I see. But he didn't want it as the first one that's left out. That's right. Okay, and I'd leave out the reason why we leave it out. Just say we leave it out. Okay: 1 to L, then L+2 to 2L. That's all in the PDF. Okay. And then the other thing was... okay, this is good. What's this link? Oh, that's still a to-do anyway. I think this... oh, you've removed it already. Thank you.
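The index scheme described above can be sketched in a few lines. This is a toy model under stated assumptions: it uses a small prime field with a toy generator rather than a pairing-friendly curve, and the point formula g^(gamma^i) is the general shape of the CKS tails construction, not the literal implementation. The key property is that index L+1 is deliberately omitted, since that point is tied to the private key gamma.

```python
# Sketch of tails-file index generation: for L credentials, publish points
# for i in 1..L and L+2..2L, skipping i = L+1, which would enable forgery.
p = 2**61 - 1      # toy prime field modulus (assumption; real code uses BN254)
g = 3              # toy generator
gamma = 0xC0FFEE   # toy stand-in for the registry private key

def tails_indices(L: int) -> list:
    """Indices 1..L and L+2..2L; L+1 is deliberately omitted."""
    return list(range(1, L + 1)) + list(range(L + 2, 2 * L + 1))

def tails_points(L: int) -> dict:
    """Map index i -> g^(gamma^i) in the toy group (exponent reduced by
    Fermat's little theorem since p is prime)."""
    return {i: pow(g, pow(gamma, i, p - 1), p) for i in tails_indices(L)}
```

Note the resulting file has 2L-1 entries, one fewer than a naive 1..2L enumeration, which matches the size correction discussed next.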
And so the size, two times the size of the revocation registry rather than two times plus one, is now correct, because the plus-one point is dropped. Yeah. Okay. Richard, can you make those changes and we'll get this one merged? Yeah, I'll update it today. Oh, you know what I also wanted to ask you about, Mike; I can add this as another item. What data would be needed to create some test vectors? I was thinking, if we had JSON that basically gave the inputs to this and then an output tails file, base64-encoded, as a test vector: a test vector to check that you've properly generated a tails file. Yeah. So what inputs would be needed? The main thing you need is the curve, and just gamma, which is one of the private keys. Okay. So you'd just say: here's a gamma as a test vector; obviously don't use this in production. Exactly. And just the curve, and that's it. And what do you mean by the curve? What would that include? Well, for what we're doing right now in the Ursa implementation, which I'm just going to call the AnonCreds implementation, it's using the BN254 curve, but any pairing-friendly curve would work. Okay. So you're saying you just state what curve type it is; that's what you mean by curve. And then the gamma being the private key for that. And then you can put in the base64 of a tails file. Oh, and the number of credentials. Yeah. And then the base64 that you can compare your results to. Okay. Yep. I'm going to put in a ticket that will have a couple of vectors in there that people can use for testing. Right, because the curve tells you the generator points, the curve modulus, everything you need. So the only values you really need are the gamma and the curve. That's it. Okay. Excellent.
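A test vector along the lines just discussed might look like the following sketch. The field names here are assumptions for illustration, not a published format; the inputs (curve name, gamma, number of credentials) and the base64-encoded expected output are the pieces named in the conversation.

```python
# Hypothetical shape for a tails-file test vector: inputs plus the expected
# base64 output to compare a generated tails file against.
import base64
import json

def make_test_vector(curve: str, gamma_hex: str, max_cred_num: int,
                     tails_bytes: bytes) -> str:
    """Serialize a test vector as JSON (field names are illustrative)."""
    vector = {
        "curve": curve,                # e.g. "BN254"; any pairing-friendly curve
        "gamma": gamma_hex,            # private key; for test use only
        "max_cred_num": max_cred_num,  # L, the revocation registry size
        "tails_b64": base64.b64encode(tails_bytes).decode("ascii"),
    }
    return json.dumps(vector, indent=2)

def check_against_vector(vector_json: str, generated: bytes) -> bool:
    """Compare a freshly generated tails file against the expected bytes."""
    vector = json.loads(vector_json)
    return base64.b64decode(vector["tails_b64"]) == generated
```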
And I imagine there are some of those in the existing implementation, I would think, so I should look there. There might be, yeah. Okay. Okay, good. I don't remember. Okay, that one's close. Good. Let me get back to the pull requests. This one: the only thing I was wondering is, is this specific to AnonCreds, or is this just a copy of what's in BBS+? A few of these are copies, and I found that there are some that are in AnonCreds but not in BBS; whatever I could find, I added. Okay, but I'm not sure that I haven't missed something, so I suspect we can put this in and adjust it later. Mike, do you see anything, does anyone see anything, that would be wrong to put in here? And if there's anything obviously missing, we can add it. I don't see anything wrong; I just don't know if we need all of it. In particular, do we need range, like the fourth one down? Maybe we do. It's hard to say. I mean, it doesn't hurt to leave it in for now. Yeah, let's leave it in for now. I think we're good with this one. No objections? Merging the first one; Richard, nice work. Okay, last one. This is the credential definition process. Mike, do you want to look at it here, or is there fine? Okay. Richard, this is the one where it would be nice to include a "here's where to look"; again, just put a section reference in there, like "section two or three", or "you find this in section X". Anything more? The rest of it looks pretty good. Good, I think the rest is fine. Excellent. Okay. Should this, just as a matter of interest, a random thought from me, link to anoncreds-rs, or should it link to CL signatures? Is there a CL signatures repo? There is: anoncreds-clsignatures-rs is the new replacement for Ursa. I like that link. Oh, because... okay, so you're extracting the primitives out from the actual protocol.
Yeah, in Ursa they were all meshed into one, which I hated. So yeah, this is part of the CL signatures work. Okay, so should this... for some reason I'm having trouble clicking on a link; you know how tough that is. Nice. I love leaving out the age. Yeah, so this presumably calls into CL signatures, right? Yeah. To me, eventually it could be renamed to "sign", which would be very helpful, instead of "new credential def". So if you could change that one to a link directly into the CL signatures repo, that would probably be a little better. Okay, so I'll link to anoncreds-clsignatures-rs, to the signing and the creation of the public and private credential keys, those functions. Okay. Yeah. Okay, good. Excellent. All right, good stuff, and we'll keep going with updates over the next while. I wanted to get into the next topic. As I said, BC Gov is thinking of putting out work, possibly via a Code With Us, to support W3C credentials with AnonCreds. John Jordan came up with the term "flex creds", which basically riffs off of what Manu Sporny and Neil John talked about, where we put an AnonCreds signature on what amounts to a data integrity proof. We can have multiple signatures on a single credential, with one of those signatures being AnonCreds, so that we have flexible credentials that can be used in multiple scenarios. That's the goal: we can issue and consume AnonCreds VCs in W3C format, such that we can fade out the AnonCreds format and only use the W3C format; that's where we're going long term. With this idea of flex creds, signing with multiple signatures, DHS, for example, and other government organizations that must be able to support NIST signatures can use these credentials. When they use the NIST signature they lose the privacy-preserving features, but they can use it as the fallback, use AnonCreds as primary, and get the privacy-preserving features of AnonCreds.
Eventually we'd do the same with JWTs, so that again we can have multiple signatures on a credential, but in JWT format rather than the data integrity proof. And for the data integrity proof, for those familiar with JSON-LD: we don't have a way in AnonCreds to say "this is the context I want to use". So the context would be predefined, and we would use what's called the @vocab feature. @vocab basically says: here's the JSON-LD context I'm going to use, and anything that is not defined in that context gets a generic IRI, basically, for any undefined attribute. So for the attributes within the credential itself we use @vocab, and then we wouldn't have to have a specific JSON-LD context for the attributes in the schema itself. Yeah, Steve? Yeah, just a real quick question on your first bullet on the last slide: using W3C format and fading out the AnonCreds format. The W3C format right now allows you to use CL signatures and ZKPs. Is that the direction you're heading, then? Because that's optional right now with W3C, and I was hoping it would become less optional. What we're doing is allowing an AnonCreds signature, sorry, an AnonCreds proof, to be attached into the proof of a W3C-format JSON-LD data integrity proof. And with that, it gets processed and used completely as AnonCreds: blinded link secret, no subject identifier, all of the same things you get in AnonCreds, but the wrapping, the data structure, is W3C-format compliant. Okay, awesome. I just wanted to clarify as we got started. I think this is excellent; keep going. Thank you. And this is the work that Andrew Whitehead did a while ago to do the transformations. We've shared that with the JSON-LD folks; in fact, Patrick St-Louis did an actual demo of this using a W3C wallet, exchanging the credentials, and it all worked.
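The @vocab approach described above can be sketched as a predefined JSON-LD context where any term not explicitly defined falls back to a generic IRI prefix, so each AnonCreds schema does not need a bespoke context. The context values below are placeholders, not published URLs.

```python
# Hedged sketch of the @vocab fallback: undefined attribute names expand to
# <vocab IRI + term> instead of failing JSON-LD expansion.
import json

ANONCREDS_CONTEXT = {
    "@context": {
        "@version": 1.1,
        # Any term not defined elsewhere in the context maps under this IRI.
        "@vocab": "https://example.org/anoncreds/attributes#",  # placeholder
    }
}

def wrap_attributes(attrs: dict) -> dict:
    """Attach the generic context to a flat set of credential attributes."""
    doc = dict(ANONCREDS_CONTEXT)
    doc["credentialSubject"] = attrs
    return doc
```

With this in place, a schema attribute like "name" expands to https://example.org/anoncreds/attributes#name during JSON-LD processing without being declared anywhere.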
So this is to complete the transition from proof of concept into real use. The first work, before coding, would be to formalize the transformations that happen, because really that's all it is: we're doing a transformation of the data elements into different places, simply moving JSON elements around. Aligning with the Data Integrity proof standard is a question I really have to ask Manu and Dave Longley: when they saw this, they said it looks really good, and there are a couple of things they'd like us to align with the data integrity proof, so I just have to find out what those are. That one is mostly asking some questions and reading through the Data Integrity spec. For those not familiar, a data integrity proof is really the basis of a W3C JSON-LD credential. The data integrity proof is the actual proof format, and the W3C VC is really just a profile that says: you're going to use a data integrity proof, but you're also going to have these specific things that are required, or only used, in that proof. You can use a data integrity proof for anything; for it to be a verifiable credential, it must satisfy certain constraints, but that's it. Start with @vocab. The issuance date is a required field in W3C; in AnonCreds it might be within the schema, so there might be an issue date within the schema, or it might have to be added because it's not part of the schema. Obviously, if it's added, it's not signed with AnonCreds, so that's an interesting attribute, and we want to define the handling of it: it might not be signed. The way we'd do this... I should have this up, so let me just open it. My thought is that the VC is exactly that, just a data format. You don't have to store it that way; you can store the credential however you want, and then it just becomes a transport format.
That's basically what we're doing. So this is what it looks like in AnonCreds format; this is a credential. If I come over here and look at this one, this is what it looks like now in AnonCreds format. It's a single "name" attribute, "Alice Jones", with its encoding, and then here are the various elements all around it that are in the AnonCreds credential. Okay. And here you can see what it looks like. This is where the actual data is stored, and this is essentially a base64 of all of those other attributes that were in it. A few things go here. This is an issuance date that is outside of what is signed; AnonCreds simply signs these elements, plus it adds the link secret. The date is outside of what would be signed by AnonCreds. That's why I'm saying it's not signed as part of this thing; it's not part of this signature. You just add it in there, and when you're translating formats you just paste it in. There are ways to do that, but it's a bidirectional transformation: whichever one you start with, you have to be able to loop back and produce it again. That's why I'm saying we've got to define how we handle it. Does that make sense? There are things you can do; I'm going to formalize "okay, here's what we're going to do". That's probably the trickiest feature, because the issuance date is a required field in W3C, the only required field we don't naturally have. If I come back to this thing: normally you would have a subject in here, or an ID for the subject, but it's not necessary, and so we actually don't have it. Obviously, in a schema you could put the subject in, and you could put an issuance date in here, so that's one way to handle it. Does that make sense? That's what I do in AnonCreds v2, and I just use RFC 3339. RFC 3339 is ISO dates? Yeah, RFC 3339. That's "Date and Time on the Internet: Timestamps". Yeah, that's what I was thinking.
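The RFC 3339 approach mentioned above is simple to pin down: RFC 3339 is a profile of ISO 8601, so a single UTC timestamp string can satisfy both. A minimal sketch:

```python
# Encode an issuance date as an RFC 3339 / ISO 8601 UTC timestamp,
# e.g. "2023-07-17T15:00:00Z".
from datetime import datetime, timezone
from typing import Optional

def issuance_date(now: Optional[datetime] = None) -> str:
    """Format the given (or current) UTC time as an RFC 3339 string."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%dT%H:%M:%SZ")
```

Whether this string lives inside the schema (and is therefore signed by AnonCreds) or is added outside the signature during transformation is exactly the handling question raised above.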
Yeah, that's exactly what we talked about last week: that we would use that and then encode it. It also corresponds to ISO 8601, so you can say it's a combination of both. Yeah, that's it. Okay. I didn't recognize the number 8601; 3339 I recognize. Then: make sure that all of these are supported, revocation in both issuance and presentation, and revealed, unrevealed, and self-attested attributes in the verifiable presentation format. Then we're going to implement the actual transformation. Andrew has implemented all of these things, so there's code to actually do this; it's just a question of exactly what it does. He's written the code to do the transformations. Whoops, that's the wrong repo. That's what I thought. This looks a little easier than that. Basically he's got a to_w3c and a from_w3c, which basically just manipulate the data to move it in and out of that signature field, so we wind up with the actual data elements. It's a pretty simple transformation, the to and the from. So: just documenting that, so we have it. Then: document how to handle multi-signature VCs. This is where, again, coming back to this format, there are multiple proofs in here. There would be a comma here, with a NIST signature type and another signature field. Yeah, in one implementation we did, Stephen, we just put types in the proof field, and it was an array of objects. So then it looked similar to this: we'd say type, CL sig, encoding, whatever signature value, and so on, so we have multiple values in there. Yeah, and that's exactly the plan for doing that with NIST signatures and so on. Longer-term possibilities: as I mentioned, doing the same thing but using JWTs. And then this one's a little obscure.
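The to_w3c/from_w3c transformation and the proof-as-array idea can be sketched together. This is a hedged illustration, not Andrew's implementation: the field names in SIG_FIELDS and the proof type string "AnonCredsProof2023" are assumptions for illustration. The essential points are that the transformation only moves JSON elements around, that proof is an array so other signatures (e.g. a NIST-suite signature) can sit alongside, and that the transformation must round-trip, since whichever format you start with, you have to be able to loop back.

```python
# Minimal sketch of a bidirectional AnonCreds <-> W3C-style transformation.
import base64
import json

SIG_FIELDS = ("signature", "signature_correctness_proof", "rev_reg_id")

def to_w3c(anoncreds_cred: dict) -> dict:
    """Move the AnonCreds signature elements into a proof-array entry."""
    cred = {k: v for k, v in anoncreds_cred.items() if k not in SIG_FIELDS}
    sig = {k: anoncreds_cred[k] for k in SIG_FIELDS if k in anoncreds_cred}
    packed = base64.urlsafe_b64encode(json.dumps(sig).encode()).decode()
    # proof is a list, so a second entry (e.g. a NIST-suite signature) can
    # be appended without disturbing the AnonCreds proof.
    cred["proof"] = [{"type": "AnonCredsProof2023", "proofValue": packed}]
    return cred

def from_w3c(w3c_cred: dict) -> dict:
    """Round-trip back: unpack the AnonCreds proof entry into flat fields."""
    cred = {k: v for k, v in w3c_cred.items() if k != "proof"}
    for proof in w3c_cred["proof"]:
        if proof["type"] == "AnonCredsProof2023":
            packed = base64.urlsafe_b64decode(proof["proofValue"])
            cred.update(json.loads(packed))
    return cred
```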
I don't have to go into this one, but for those who understand JSON-LD and what a data integrity proof does: it's basically signing the entire context. It signs the context of the credential plus the data values within it; that's essentially what's being signed. AnonCreds, obviously, signs the data values, well, the encoded data values, but it doesn't sign the JSON-LD, because it has no concept of JSON-LD. There are ways we could possibly do that that I've come up with or thought of, and we may or may not want to go further with that. So that's formalizing what we're going to do. Then the next one is coding it: the ability to receive a credential signed with AnonCreds but in W3C format. Basically, transform it into AnonCreds and process it. The holder would want to retain both forms in case there are multiple signatures on it: they might want to hold on to the AnonCreds format, they might want to hold on to the W3C format, so they might have both. For this, I'm not entirely sure where the line is between anoncreds-rs, the AnonCreds implementation, and storage. Where is the line between, say, ACA-Py, AnonCreds, and storage, or Aries Framework JavaScript, AnonCreds, and storage? So this is just a clarification to be done: how much of it is done in the AnonCreds library itself, and how much is left to the holder software? On receiving a verifiable presentation in W3C format: at minimum, transform it, process it, and return verified or not verified. And there's a question of whether to return the data in both the W3C format it was received in and the AnonCreds format, so that it can be saved in either format. Then, once we're able to handle receiving them, we obviously want to generate them: an option in the generate call for the VC to say, "I want to put the data in AnonCreds or W3C format."
You may have received the data not just in AnonCreds format but in W3C format already, perhaps with a proof already on it. The idea there would be: you receive a W3C credential with a proof, but you want an AnonCreds signature attached to it, so an additional proof is added. In that case the operation being done is: take the data out of the format it's in now, generate it in AnonCreds matching the schema and the credential definition being used, transform it to W3C format, potentially adding the proof to the signed set, and hand it back. And then there are some notes on the issue date field: whether it's part of the schema or not included in the schema, and what to do about it. The same thing with the verifiable presentation format; obviously there would never be multiple signatures on a presentation in this case, so it's just a matter of generating the AnonCreds presentation, transforming it to the VP, and returning it with the issue date set to the current time. That one's easy. And then finally, a demonstration of it by coding it in ACA-Py: adding the ability to do multiple signatures on a single credential. So what does the API look like in ACA-Py? What does the API look like to say, "I want this in W3C format"? My hope is that by doing that we get all the features we need; if I define this as a task for a developer, can we get all the way through it? I don't know if people have any feedback on that plan, but as I say, the idea would be to define this as a package and say "this is what we want done" as a coding exercise. We'll probably do this part ahead of time, so the decisions are made ahead of time, but the rest would be a code project that we would publish, possibly with funding attached, to say "hey, implement this". So I don't know if people have feedback or interest. All right. As I say, BC Gov is thinking of posting this, so we will definitely let people know when it's posted.
If that gets done, I encourage folks to think about implementing it, and if anyone wants to contribute or wants to help define the direction of this, your feedback is welcome. All right. The last thing: are there any other topics people want to talk about on this call? Will do, Steve; I'll post the slides and share them. Thank you so much for a constructive meeting; thank you all. If anyone has any questions or comments or anything, let's go back and forth on Discord. Richard is going to keep working with Mike on the additional parts, and I have some to-dos in the specification to work on, so we should have a few more things to talk about on the next call in the next couple of weeks. And the plan is to do some more on AnonCreds v2 as well; next week is the later call, and we'll do AnonCreds v2. All right. Thanks all. Have a great week. Thanks. Thank you everyone. Bye.