Okay, you're all set. Great, thank you, David. All right, welcome everyone to the Identity Special Interest Group for September 21st. Thanks for joining us today. My name is Char Howland. I'm a co-moderator of this group with Vipin Bharathan and Tim Spring. Today we will go over some working group status updates and hear a presentation from Eric Scouten, who is a senior engineering manager at Adobe's Content Authenticity Initiative. We'll hear about binding strongly vetted identity to C2PA manifests, so we're looking forward to that. This is a Linux Foundation call, and so we are following the antitrust policy. And as a Hyperledger call, we're following the Hyperledger code of conduct; both are linked and written here. We're recording and live streaming this meeting, and those links will be posted here later today. I want to take a moment to do introductions as well. Would anybody like to introduce themselves and talk about what you're working on? That would be wonderful. Eric, feel free to introduce yourself now or before your presentation; either is great. Hi, I'm Eric Scouten. I'm an engineering manager at Adobe on the Content Authenticity Initiative, specializing in our online hosted services and also in our open source work, and that's part of what I'll be talking about today. Great, we're so glad you're here. Thank you. Would anybody else like to introduce themselves? And if everybody would like to put their name on the attendees list, that would be wonderful as well. A few quick announcements: Animo recently made an announcement about Aries Framework JavaScript, so there's a link here if you'd like to find out more about those projects. As well, this special interest group has a YouTube playlist with previous calls from both of the groups that merged to create this one; there's the link there for that.
Are there any other announcements that anybody has before we jump into working group updates? All right, we'll just go over quick highlights of working group updates, since there are so many groups. But if you have specific updates from working groups that you're involved in, please speak up and let us know what you're working on and what is going on in those groups. In the Indy Contributors working group, we debriefed the second Indy summit, which we held the week before last, talking about funding and technical options for the future of Indy; that summit was a great success. The Aries working group has mainly been discussing the move from unqualified DIDs to did:peer for credential protocols, among a few other things. Aries Bifold had a two-month hiatus in meetings, so now they are back; they discussed new releases and Overlays Capture Architecture. In the Aries Cloud Agent Python user group, we talked about the 0.10.2 release. I gave a presentation yesterday on my PR to ACA-Py for implementing SD-JWTs, so it was great to have a discussion on that. We also talked about the AnonCreds-in-ACA-Py update PRs and getting those merged in. The Aries Framework JavaScript meetings have been talking about the upcoming 0.5.0 release; they've been reviewing AnonCreds PRs, talking about AnonCreds 2.0, as well as the AnonCreds Rust implementation update. For the Trust over IP Foundation, for a number of these working groups I wasn't able to find more recent meeting notes, so if you know of more recent meetings from any of these groups, please jump in and give any updates that you might have. The Ecosystem Foundry working group did meet recently and talked about the public review and Continuum Loop. Concepts and Terminology is also still working on the Terminology Engine v2. And then for the Decentralized Identity Foundation:
There was no September meeting this month for the DIDComm spec working group, since that was a holiday in the US. The DIDComm users group has been meeting; it looks like they had a September 18th meeting with a few different presentations. The IoT special interest group has been meeting recently; I think they've been doing some planning on potential work items. So that's mainly what we have in terms of working group updates, but I want to give a few minutes if there are others on this call who are involved in working groups that they would like to report on, to give a bit more detail than that very fast overview. Well, unless there are any more working group updates that anybody wants to jump in with, I'll go ahead and turn it over to Eric for your presentation. Thank you so much again for being here. Of course, thank you. Sure, let me share my screen. I'm going to put myself in presentation mode. Okay. Unfortunately I can't see most of you at this point, but if there are questions, just speak up and feel free to interrupt whenever that makes sense. Hang on, and I will share the screen. We can see your Keynote now. There you go. Yeah, do you see my presentation? We're seeing your... yeah, okay, that's what I was trying to not have happen. Let me see if I can move that the other way. No, it really wants to do it that way. Okay, never mind. I have two screens on my computer and it's very opinionated about which one it wants to use, and I don't agree with it. Anyhow, technical challenges aside: hi, I'm Eric Scouten. As I mentioned before, I work for Adobe on the Content Authenticity Initiative, and we are exploring some ways to bind identity, strongly vetted identity, to publicly visible content. That's what this is about.
So I'll start off: I have some prepared slides, but this is also kind of an informal working update. Some of you saw me give a version of this talk in April at Internet Identity Workshop, and this is an update based on some work we've done since that time. So, my agenda: the grand plan when I set up for this conversation was to have a demo of this actually all working. I got as far as a unit test that passes some initial happy path earlier this week, but that's not quite a demo. We'll talk about what the Content Authenticity Initiative is and what C2PA is, which I'll get to in a moment. I'll talk about how signing and content authenticity works today, and then I'll talk about the proposal that I'm making to extend the data model to support self-sovereign identity and strongly vetted identity. This is really an opportunity for me to get this in front of people who are familiar with the space and will look at things potentially skeptically. So any questions you may have along the way, please do share them with me; it will all be taken as friendly conversation. The whole point of the Content Authenticity Initiative was that Adobe and some other companies got together about four years ago and realized that there are some technical solutions to the problem of misinformation. Some of the ways we can address that are through education: informing end users and consumers of content about how they can learn about the veracity of the content that they're obtaining. Detection was discussed; we work with some of the smartest people in fake-image detection, and their advice to us was: don't enter this cat-and-mouse game. Really the answer is in using cryptography to attribute and make strong assertions about attribution around content, and that's the approach that we have taken with this.
So the Content Authenticity Initiative is now a community of more than 1,000 companies, individuals, educational institutions, and non-government organizations who all share this mission of promoting the idea that there should be one standard of content authenticity and provenance across the internet. It is also the name of the team at Adobe that I'm on, which provides open source implementations of this technology. We also work with our product team partners; Photoshop and Firefly are a couple of major ones that are in the public eye these days, with many more to come. And my sub-team within Content Authenticity at Adobe also operates the hosted services that support this. The other organization that I want to talk about is the Coalition for Content Provenance and Authenticity, often abbreviated as C2PA. That is a technical standards organization, a Joint Development Foundation project under the Linux Foundation, that describes in exacting detail the binary description of how we do this provenance chain and how we associate it with specific kinds of assets. Any questions thus far? If not, then the really high-level overview of what content authenticity is about is that it allows content creators to make tamper-evident, digitally signed assertions about what they've created. And it allows content consumers to evaluate those statements and use them to make their own trust decisions. It is not fact checking; it is not fake-image detection, as I mentioned before; it is not politically opinionated. It is similar to some existing metadata standards, EXIF and XMP, which we participated in creating. But the major distinction, and something that just wasn't thought about 20 or 30 years ago when these earlier standards were created, is that it comes with this tamper-evident binding to the content that it describes. So I'll dive in at sort of a high-level overview of how it works.
We start with the idea of an asset, and that's any piece of media: a video, audio, a still image, a PDF document, and so on. C2PA is intentionally agnostic with regard to what kind of media it is, other than specifying the exact placement of the C2PA data structures within, for instance, a JPEG. Within an asset, there's what we call a manifest store, which is really the description of the history of that asset as we understand it and as the content creators have described it. For those of you who are developers, you can think of this as a git commit history for the asset that it describes. One of the important things that we do is that we have this notion of ingredients, which are other manifests that were created by prior content creators. So the most recent piece of work that is presented to you might have been something that incorporated a handful of photos or videos or other media, and that act of combining is the creation that the most recent artist has done. That collection, that history overall, is called the C2PA manifest store; that's the blue box that I've shown here. Within that are individual manifests, and I'll now start from the inside of a manifest and work my way back up to the manifest store. A manifest itself contains, as I said earlier, a set of opt-in statements that we call assertions, and those could be things like the capture device, who created it, and what actions they took in that act of creation. It does require a binding hash over the content that was created, so you can't just attach a manifest to an arbitrary piece of content. And then it can also reference other content that was incorporated. Notice I've put little flags on a couple of items here; I'm going to do that throughout the next few slides.
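As a toy sketch of that hard binding (illustrative names and plain JSON-ish dicts, not the real JUMBF/CBOR serialization the C2PA spec defines):

```python
import hashlib

def make_manifest(asset_bytes: bytes, assertions: list) -> dict:
    """Toy manifest: opt-in assertions plus a hash binding to the asset,
    so the manifest cannot simply be attached to different content."""
    return {
        "assertions": assertions,
        "content_binding": hashlib.sha256(asset_bytes).hexdigest(),
    }

def binding_is_valid(manifest: dict, asset_bytes: bytes) -> bool:
    # Recompute the hash over the asset and compare to the stored binding.
    return manifest["content_binding"] == hashlib.sha256(asset_bytes).hexdigest()

photo = b"\xff\xd8 ...jpeg bytes..."
m = make_manifest(photo, [{"label": "c2pa.actions",
                           "data": {"actions": [{"action": "c2pa.created"}]}}])
assert binding_is_valid(m, photo)             # untouched asset validates
assert not binding_is_valid(m, photo + b"!")  # any byte change breaks the binding
```

The point is only the shape of the idea: the manifest describes the asset and is cryptographically pinned to its exact bytes.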
The reason for that is that there are some things specific to the discussion of SSI identity that I want to call special attention to: identity, the content creator, and redaction are going to be important a little bit later in this discussion. So we start to work our way up the chain here, up the tree. A claim is a data structure that lists the assertions and contains information about who created the claim; this is typically the tool vendor, so, you know, Adobe, or the service that I operate, which signs on behalf of Adobe product integrations. It also contains a list of assertions from prior claims that might be redacted. For instance, if you're an editor at a news organization and you need to obscure the identity of the photographer who created something that you're going to publish, for that photographer's safety, you can redact the identity assertion for that photographer in your subsequent publishing; you just have to leave a trail that says that you did that. And finally, a claim signature is a COSE signature over that claim data structure. That basically says that if somebody attempts to tamper with the content of the claim, that will cause the signature to fail. There are things in the claim that refer to other pieces of the content by hash links, so any attempt to tamper with the assertions or the content of the underlying asset will also cause validation to fail. And then a manifest is a JUMBF data structure that ties all of those pieces together. The manifest, or the collection of these manifests, can be either embedded directly in the asset it describes, or it can be referenced by an HTTPS hash link, or both. So that's a quick summary of the data model.
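A toy version of that claim-plus-signature layering, with HMAC standing in for the real COSE/X.509 signature and made-up field names:

```python
import hashlib
import hmac
import json

KEY = b"tool-vendor-signing-key"  # stand-in for an X.509-backed private key

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sign_claim(assertions: list) -> dict:
    # The claim lists its assertions by hash link, then is signed as a whole.
    claim = {"assertion_hashes": [sha256_hex(a) for a in assertions]}
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(KEY, claim_bytes, hashlib.sha256).hexdigest()}

def validate(manifest: dict, assertions: list) -> bool:
    claim_bytes = json.dumps(manifest["claim"], sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"], hmac.new(KEY, claim_bytes, hashlib.sha256).hexdigest())
    # Tampering with any assertion changes its hash and breaks validation.
    links_ok = manifest["claim"]["assertion_hashes"] == [sha256_hex(a) for a in assertions]
    return sig_ok and links_ok

assertions = [b'{"label":"stds.schema-org.CreativeWork"}', b'{"label":"c2pa.actions"}']
mf = sign_claim(assertions)
assert validate(mf, assertions)
assert not validate(mf, [b'{"label":"forged"}', assertions[1]])
```

Because the signature covers the claim and the claim covers the assertion hashes, tampering anywhere in the chain surfaces at validation time.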
Any questions about what we have in production today? All of this, by the way, has been in production for a year and a half to two years in Photoshop and some other products from Adobe, and in products offered by other companies as well. So where is the manifest store stored? Going back to this picture: it can be embedded in the asset itself, and of course each file format has its own concept of where things belong. For assets that have XMP metadata, there's a way that we define for the XMP to say, here's a hash link to external storage. So if you don't want to put the entire manifest store directly in the asset, if for instance it would make that asset too large, you can embed that hash link reference and store the manifest anywhere you choose on the public internet. There are tradeoffs both ways; in the Adobe offerings, we offer both options. On the signing of claims, I'm going to circle back to that piece in a little bit. In the COSE signatures, we use X.509 certificates. One of the guiding principles of building the C2PA standard was that we didn't want to invent much that was new; we wanted to rely on well-understood things, and X.509 has been around for quite a long time and is well understood. The current practice is that the X.509 certificate is held by the tool vendor, so, again, the service that I talked about that my team operates for signing on behalf of Adobe. We hold the private keys in our own secure mechanism, and the certificate, of course, is embedded within the signature. Our take on this is that that's something that's feasible for larger businesses, much less so for individuals and smaller businesses. But this is now at version 1.3 or 1.4 of the standard, and it's not likely to change. There is definitely a concern, though, about how you allow the content creator to identify themselves, and that's really the focus of the discussion and the questions that I'm raising today.
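Going back to the external-storage option mentioned above, the hash link idea can be sketched like this (hypothetical field names and URL; the real reference lives in the asset's XMP):

```python
import hashlib

def make_remote_reference(manifest_store: bytes, url: str) -> dict:
    # What would live in the asset's metadata instead of the full manifest store.
    return {"url": url, "sha256": hashlib.sha256(manifest_store).hexdigest()}

def verify_fetched(reference: dict, fetched: bytes) -> bool:
    # The hash makes the remote copy tamper evident wherever it is hosted.
    return reference["sha256"] == hashlib.sha256(fetched).hexdigest()

store = b"...serialized manifest store..."
ref = make_remote_reference(store, "https://example.com/manifests/abc123")
assert verify_fetched(ref, store)
assert not verify_fetched(ref, store + b"!")
```

Because the hash travels with the asset, the host of the remote manifest store does not need to be trusted.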
So the question that I've been asked to answer on behalf of Adobe and also the C2PA is: how do we define strongly vetted identity within this ecosystem in such a way that the subjects of such an identity can prove that they were part of the creation, and also disprove that they were part of other acts of creation? So those flags that I mentioned a while ago, let's talk about those. One of the places I put a flag was on the identity assertion. Specifically, we reuse a data model called CreativeWork from schema.org, and that allows verifiable credentials to be embedded directly in it. I hinted at this earlier: that means, for instance, that a subsequent editor can redact that assertion if there is a privacy or safety concern related to an earlier creator. But it also means that right now what we're doing is placing a verifiable credential that doesn't have any binding to the content. So if I were to hack a version of, or obtain a version of, the open source software, I could build a client that embedded anybody else's VC, if I happened to obtain it, and referenced it correctly in the data structure. And there's really no way to tell the difference between something that was done with good intent and something that was done with hostile intent. So the proposal that we're working on is, one, to deprecate that mechanism of simply including the verifiable credentials, and instead to add a new assertion type, which is a verifiable presentation binding the content creator to the content. The current working draft, if you will, is that there's a new did:c2pa method which references the claim data structure I talked about earlier, via a hash of that claim data structure. So what does that look like in the data model? It probably means the signature process becomes a new two-stage process.
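The two-stage idea can be sketched end to end like this. Everything here is simplified: HMAC stands in for both the creator's VP signature (which in the proposal would be anchored via the did:c2pa hash reference) and the vendor's COSE/X.509 claim signature, and all labels are hypothetical:

```python
import hashlib
import hmac
import json

HOLDER_KEY = b"creator-did-key"   # stand-in for the creator's DID-controlled key
VENDOR_KEY = b"tool-vendor-key"   # stand-in for the vendor's X.509-backed key

def canonical(obj) -> bytes:
    return json.dumps(obj, sort_keys=True).encode()

def sign(key: bytes, data: bytes) -> str:
    return hmac.new(key, data, hashlib.sha256).hexdigest()

# Stage 1: build the claim without the identity assertion; the creator
# signs that partial claim, binding themselves to this specific content.
partial_claim = {"assertions": ["c2pa.actions", "c2pa.hash.data"]}
vp_signature = sign(HOLDER_KEY, canonical(partial_claim))
identity_assertion = {"label": "identity-binding", "vp_signature": vp_signature}

# Stage 2: add the identity assertion, then the tool vendor signs the full claim.
full_claim = {"assertions": partial_claim["assertions"] + [identity_assertion]}
claim_signature = sign(VENDOR_KEY, canonical(full_claim))

# Verification runs the stages in reverse: check the vendor signature, then
# subtract the identity assertion to recover what the creator signed.
assert hmac.compare_digest(claim_signature, sign(VENDOR_KEY, canonical(full_claim)))
recovered = {"assertions": [a for a in full_claim["assertions"]
                            if not (isinstance(a, dict) and a.get("label") == "identity-binding")]}
assert hmac.compare_digest(vp_signature, sign(HOLDER_KEY, canonical(recovered)))
```

Note how "subtracting" the identity assertion must reproduce byte-for-byte what the creator signed, which is why the claim is built in a fixed order.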
The first stage is that we build a version of the claim that doesn't reference this assertion, because we don't have the hashes and so forth of it yet. That's the dotted line labeled A, right? So we create the rest of the claim data structure, the rest of the manifest, have the verifiable credential holder sign a VP request that binds them to that content, and then we create a new assertion that we add to the overall claim data structure. Here we come back and we add that new reference, that new assertion, which is a hash link to the signature that we just talked about a moment ago. And then we proceed with the existing signature mechanism, so the tool vendor then signs off to say that they have seen this claim that includes the verifiable presentation. On the flip side of that, as a consumer of the content, you verify the claim signature with the existing mechanism. Then you take the claim data structure, once you've verified that it was properly signed, and subtract the reference that we saw before, which should yield the claim data structure that the VC holder signed, and make sure that that also checks out against their signature. With that, the questions that are top of mind for us at this point are the ones I've listed on screen, and I would love to hear questions about what we are proposing and what is likely to work or not work. Let me stop sharing, and then we'll be open to discussion here. Let's see, the contact info that I have... Yeah, absolutely, I'll make sure that's on the meeting page. Okay, thank you. I'll just copy the discussion topics out there; I want to be able to see people and see the discussion as it happens. Yeah, that sounds great. What questions or ideas do people have? Yeah, go for it, Sam. I have two questions that are totally unrelated to each other. Okay.
One is: what has adoption been like within hardware devices? Cameras obviously are one good application of this. What's the uptake been? Are there vendors that have embraced this and are providing assertions along with the photos taken? I'm reluctant to name names because I don't remember who has made their public announcements and who has not, but I can say that several of the major vendors that you know of have interacted with us and are planning cameras that will be in market in the next year. So, the big several. Yeah. Okay, that sounds sufficiently big, so thank you. The other one is about the content creator assertions on the content themselves: some VC technologies can use DIDs and some can't. Are you embracing DIDs as part of that? I may have missed it if you said it. That's actually one of the questions that I wanted to raise. When I did an inventory of did methods, I came up with something like 170 of them, and I don't see any way for us to support them all. There are a few that come top of mind that we'll probably address, but I would like to hear suggestions of which ones are likely to be widely adopted. One thing I will say: at Adobe, we are doing some work with ID verification vendors. I can't describe in great detail what we do, but we intend to allow users to verify their own identity and wrap that in some way, yet to be determined, in a VC. But also, taking off my Adobe hat and putting on my C2PA hat, we want to be sure that people who hold their own identity are also able to participate in this ecosystem, that's for sure. So, since you asked and I happen to be unmuted: I don't think anyone expects support of all did methods. Clearly, a much smaller list is the list currently supported by the Universal Resolver.
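Supporting a shortlist of did methods rather than all ~170 mostly comes down to dispatching on the method segment of the DID. A minimal sketch (the shortlist here is hypothetical, not a recommendation):

```python
def did_method(did: str) -> str:
    # A DID has the form did:<method>:<method-specific-id>.
    parts = did.split(":")
    if len(parts) < 3 or parts[0] != "did":
        raise ValueError(f"not a DID: {did}")
    return parts[1]

# Hypothetical shortlist a verifier might support instead of every method.
SUPPORTED = {"key", "web", "peer"}

def resolvable(did: str) -> bool:
    return did_method(did) in SUPPORTED

assert did_method("did:web:example.com") == "web"
assert resolvable("did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK")
assert not resolvable("did:example:123456")
```

Since software, not humans, handles these identifiers, a verifier can grow or shrink that set without changing anything user-facing.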
That's work that originated in DIF, and it has a smaller list; there's a software package that has resolvers for lots of different did methods. And you can kind of tell which ones are serious by which ones cared to be included. But the other thought that I have is that as long as you make a couple of good choices to support, the rest will sort itself out, in the sense that I think there are folks expecting to be able to choose one did method and make things work. That's unlikely to be the case; I think different ecosystems are likely to have different did methods for various reasons. And the other thing is that since these are not things that we expect humans to actually interact with, software can bridge the difference: I need to support several did methods, and then I know that if someone wants a DID that's going to be used inside of content attribution or whatever, they can just use one of the, say, three did methods out of that set. But props for considering DIDs. I know it's a little bit of a youngish thing that moves a little differently than some of the other specs have, but it's incredibly powerful from a future-forward perspective. That's my take on it as well, yeah, so props. Also, I wanted to put in a plug: I think I've met you before, and I will be there again next month. Yes, that would be great. I have one other question; I just didn't want to hog the mic. Okay. You mentioned being able to prove that they were not involved in the creation of content. Proving a negative is super hard; is there any more you can say about the approach there? Proving a negative is perhaps an overstatement. Maybe the better phrase would be plausible deniability: unless you see my signature as a part of this act of creation, don't believe that it's me.
Right, so this is not necessarily proving that they were not involved, but giving them a mechanism to prove when they are, and then you can tell. So if I see a video online, for example, from a favorite artist, say I'm a huge Weird Al fan, then if Weird Al provides this attribution, these assertions, on his stuff, and I see a Weird Al video somewhere else, I could verify whether that's actually a Weird Al-produced video or not. Yes, that's the intent. The analogy that I often make about this is similar to the one about HTTP versus HTTPS. I look back at some content I wrote 10 or 15 years ago on the internet, and I was surprised at the number of raw HTTP sites that I linked to at the time. Today, the lack of HTTPS is essentially a negative trust signal. Our hope is that in time the presence of C2PA metadata on the various forms of media that you consume becomes a similar trust signal: the content creator intended to assert that they were part of this, and in that way can stand apart from content that is impersonating them. I had a question as well: how do you think this work interacts with AI? I'm thinking about how AI trains on content created by artists who are then not compensated or credited; I'm curious about that interaction. Right, a couple of different things. At Adobe, we've taken the policy that anything we create with generative AI must have this attribution, so you can tell the difference between human-created and AI-created artwork. And then we've also provided, through the C2PA standard, a way to mark your content as do-not-train.
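For reference, a training/data-mining preference assertion has roughly this shape. The entry labels below follow the C2PA spec's training and data mining assertion but should be checked against the exact spec version you target; treat this as illustrative:

```python
# Illustrative do-not-train assertion, expressed as a plain dict rather
# than the spec's serialized form. Labels should be verified against the
# C2PA spec version in use.
do_not_train = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

assert all(v["use"] == "notAllowed" for v in do_not_train["data"]["entries"].values())
```

As discussed next, honoring these entries is up to whoever runs the training.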
So if you're concerned about your content being part of a training model, that's a way to opt out of it, assuming that whoever is doing the training is willing to observe it. We've talked with several of the leading AI generators, and they're largely on board with that. And then in our own case, we've made some statements about how we train our own AI. I'm not going to repeat those here, because I want to make sure that I get them right, but that's something that we've been very careful about. Right, interesting. Let me stop sharing; it's sort of distracting that the screen is on me. Are you on gallery view or speaker view? I don't know; I don't use them that often. Okay, I think if you're on gallery view, then you should be able to see everybody's video at the same size. I think I may be the only one sharing video. There we go. Thank you. Okay, I loved the talk. My question was, and I haven't looked into it so this could be readily available on the website, but is there a reference implementation that we can look at today? At the beginning you said there was a unit test with a happy path done. For this specific case, no, that is not publicly available yet. The overall implementation of the C2PA standard is available as open source; if you go to opensource.contentauthenticity.org, we have implementations available in Rust and JavaScript, and some early work on bridges to other languages as well. Our core implementation is built in Rust. So the Rust one is the canonical one? Yeah, that's where the dependency tree stops. That's always the question: three implementations, and which one is the one? Rust is the primary one; that's where the real work gets done. Thank you. This will be a PR to the Rust implementation.
It's just not well enough baked yet to put in the public eye. I have another question: is this format strictly for files in the traditional sense, or does it work for streaming content? Yes, there is quite active work on streaming content, in particular on video. I don't know the exact state of that work at this moment, but I know it is an area of active work and interest. Yeah, we want to make sure we're talking to the real Eric from Adobe, right? Right, exactly. I don't know if that's in the 1.3 standard or in a forthcoming 1.4 standard by now, but that's been very actively explored, I should say, not described. So, are there other questions or discussion points for the call? If not, I look forward to seeing you, or maybe some of you, at IIW in a couple of weeks. Yeah, absolutely. Thank you so much for joining us, Eric; that was a great presentation and discussion, so we really appreciate your time. Yes, thank you. Thank you so much. Thanks everyone for joining. Bye bye. Thank you.