Thank you so much, Candace. I'm thrilled to welcome everyone here to a webinar hosted by the Joint Development Foundation, part of the Linux Foundation family of projects. We're a nonprofit organization that's dedicated to hosting standards and specification projects that leverage open source and open standards best practices. We host projects that are as small as community specification efforts all the way up to projects that have grander designs to be part of the international standards development process. We are a PAS submitter, and we're very proud to host the Coalition for Content Provenance and Authenticity. With us today is Leonard Rosenthol, who will be kicking off a presentation. For more information about the JDF, you can go to jointdevelopment.org. Leonard. Great. Thank you, Jory. Thank you, Candace. So appreciate everybody joining us here today to talk about the C2PA. So the Coalition for Content Provenance and Authenticity, or C2PA for short, is, as you heard, a Linux Foundation Joint Development Foundation project. We founded ourselves in 2021, and our mission is to develop technical specifications that can establish content provenance and authenticity at scale. And the reason we're doing this is to give publishers, creators, and consumers the ability to trace the origin of media, its history, and the like. So this is what we've been focused on since then. So let me dive deeper into what we do, what we have built, and where we are going. So many people may have also heard of a partner organization that exists, the CAI, or Content Authenticity Initiative, so it's usually helpful to explain the relationship between the groups. The C2PA itself is the standards body; we like to think of ourselves as the architects. We create the plans, we define the standards, we figure out what the technology is.
And then the CAI we like to think of as the contractors or the builders, in that they're responsible for taking the standards that we build and working with people to create implementations, to educate, and to establish policies, and we do this together. It's not like we just throw it over the fence; just as a good architect and a good contractor work hand in hand, these two organizations do as well. But that's how we see the relationship between ourselves. So the C2PA itself has almost 100 members at this time, separated in the normal JDF fashion between steering committee members, general members, and contributor members. You can see lots of logos that I'm sure you recognize: my own company Adobe, the BBC, Sony, Truepic, Nikon, various hardware manufacturers. You also see organizations like WITNESS that are nonprofits; all sorts of groups and entities, commercial and otherwise, have joined our organization. We've also established formal liaison relationships with a number of existing standards bodies: ISO, the International Organization for Standardization, and I'll talk some more about that a little later towards the end of the presentation; the IPTC, the standards body responsible for metadata for images and video; ETSI, the European Telecommunications Standards Institute, which is responsible for European standards, where we specifically focus on their digital signature standards so we can align and comply with EU regulations; and also the PDF Association, which is where a lot of PDF standardization efforts take place. So it's a very wide membership, and they bring in lots of ideas and help us reach many different constituencies. So as our name implies, we certainly focus on provenance, that's in our name. We also focus on education and policy, and I'll talk about the policy side a little later in the presentation.
The one area we do not touch, and we do not touch it on purpose, is detection. We believe very strongly that detection is an arms race, and it's a race you cannot win. The better you get at detection, the better someone will get at faking things and getting around your detection, and so we believe very strongly in provenance. You don't have to guess what is fake, what's not real, what's not truthful; instead, the information is provided along with the content, or the asset. The other reason that we believe very strongly in provenance is that edits are good. Detection assumes that once something is created, it's no longer modified. But that's not the reality of the world in which we live, or have lived for quite a long time. Whether that's tools like Adobe Photoshop, for example, to do editing; even your camera itself has built-in editing tools. Usually the photo that comes off of your device is not the same one that was actually recorded by the sensors on that device. In the case of documents, documents are combined, they're digitally signed, they're annotated. All of those are perfectly reasonable operations that need to take place on content, and so those need to be reflected as well. All of that is part of the provenance. And in this case, provenance represents the who, the what, the where, the when, the how, and the why. All of these aspects come together, representing the asset from its moment of creation all the way through to when the user consumes that content. When we started the C2PA, we had a series of design goals that we established for ourselves. The ones I usually like to bring up in presentations like this represent a small selection, but they're the key ones, I think, as it relates to your understanding of what we've done. The first one was that we did not want to create anything new that we didn't have to. Don't reinvent the wheel.
Our goal was to build on prior battle-tested technologies and techniques, and we've done that. We really have not invented anything new, but instead have taken a lot of pre-existing technologies and put them together in unique and creative ways. And that's really made it easy for people to implement our solution, because they were able to use and leverage existing libraries and existing tool sets that were already out there; the newness was just how the pieces went together. We'll talk a little bit more about that. We do not require cloud storage, distributed ledgers, or blockchains in our solution, but we absolutely and positively allow for them. A number of our members have built solutions that connect what you will hear called content credentials, our technology, with the cloud and with distributed ledgers and blockchains, because they have very good use cases that combine the pieces together. So we're happy to have that, but it is not a requirement at all of our core technology; you can use it entirely offline, entirely disconnected. This is important because we wanted to ensure that our technology could be used anywhere in the world, even in places where there was not an internet connection, where people were using decades-old technologies and devices. It all needed to be supported. And this is why we didn't put any modern technology or infrastructure requirements in place; even an internet connection is not a requirement of our solutions. We have built a solution that establishes this audit trail, this trail of provenance, across multiple tools, as I mentioned, from creation through modification and consumption. And this has to work in any asset format, any type of content, whether that be images, videos, audio, documents, 3D, AR, VR; we are media-type agnostic, we are format agnostic. And I'll talk a little bit more about the various ways in which those pieces go together.
But this is all part of our design goals that we established up front and continue to live and function by as we do our work. So, as mentioned, we have a specification; we are currently at version 2.0, which we just released at the end of 2023. We published the first version in 2021, less than 12 months after we founded the C2PA. We don't have one document; we are currently at a total of eight documents. We have two documents that are normative in nature: our core specification around content credentials, and a document that goes into detail around an area called attestation. I'm not going to spend any time on that one today, but I wanted to mention it. We then also have six informative documents that address very specific areas. They range from our introductory documents, our explainer and our guidance document, to a complete document on user experience guidance, because, as you'll see, the user experience that goes along with this, what consumers see and what consumers are going to expect, is just as important as the underlying technology, and so we've established user interface and user experience guidelines. We also have a threats and harms task force that has produced two documents on security considerations and harms modeling. We believe very strongly that you can't release a technology without understanding all of its security and harms implications, and so we have documents around that as well. So what is this thing, this specification I've been talking about? It is a model for storing and accessing cryptographically verifiable and tamper-evident information, whose trustworthiness can be assessed based on a defined trust model. That sounds great, but what does it actually mean in practice? What are we trying to do? I think this is the best way to understand it. I mentioned that we refer to our technology as content credentials. On the left, you can see a picture describing what's in a content credential.
And so a content credential is the user term that we use to refer to the technical term of a C2PA manifest. Our C2PA manifest is the data structure; it is a binary blob of information. It's actually structured according to an international standard, ISO/IEC 19566-5, known colloquially as JUMBF, the binary blob standard. So it's a pre-existing standard that we were able to leverage to put all of our pieces together. That's the C2PA manifest. Inside of that manifest there are three key components. First and foremost is what we call the assertion store. This is a collection of assertions, and each assertion represents a piece of information being declared about the asset in some way. So in this particular example, that's what's on the left; on the right is the user experience that I mentioned before, which you might see based on a manifest. The arrows then point the way between the pieces. Here on the left you can see I have an identity assertion, and it points over to the right, telling me that this particular asset was produced by John Smith. So this is an identity assertion. This particular asset also has an actions assertion, telling me there was a whole series of edits and activities that went along with this asset after it was created: colors were changed, it was combined with some other assets, and its size or position was adjusted. Because, as you just heard, there were additional assets that were brought in; we call those ingredients. So there are references to all of the ingredients that were used to construct this particular asset. It's tiny and hard to see here, but this is a picture of the pyramids in snow with a polar bear walking around. And you can see that it was composed of two other assets, one the snowy pyramids and one the polar bear.
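The manifest structure just described, an assertion store, a claim, and a signature, can be sketched in miniature. This is an illustrative model only: the field names, labels, and the "Edit Suite" generator name are simplified stand-ins, not the spec's actual JUMBF/CBOR encoding.

```python
# Minimal sketch of the three parts of a C2PA manifest described above.
# Field names are illustrative, not the normative JUMBF/CBOR encoding.
manifest = {
    "assertion_store": [
        # identity assertion: "produced by John Smith"
        {"label": "stds.schema-org.CreativeWork",
         "data": {"author": [{"@type": "Person", "name": "John Smith"}]}},
        # actions assertion: the edits applied to the asset after creation
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.color_adjustments"},
                              {"action": "c2pa.placed"},
                              {"action": "c2pa.resized"}]}},
    ],
    "claim": {"claim_generator": "Edit Suite"},  # who assembled the credential
    "signature": None,                           # digital signature would go here
}

def assertion_labels(m):
    """Return the label of every assertion in the manifest's store."""
    return [a["label"] for a in m["assertion_store"]]

print(assertion_labels(manifest))
# → ['stds.schema-org.CreativeWork', 'c2pa.actions']
```

A viewer building the right-hand user experience would walk this store, rendering each assertion as a human-readable line such as "Produced by John Smith."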
And so we have all this information telling us about this, and of course we can also dive into that snowy pyramids ingredient to see that it in turn started with the real pyramid shot, and somebody then added snow on top of that. So we get this full chain, this full history of the provenance of this asset. Those are all of the assertions that were included. Then you can see that a claim was made. A claim is made by a specific piece of software or hardware called a claim generator; in this case it was Edit Suite. It put all of this together, it was responsible for it, hence the term claim generator. And then all of this information is digitally signed by the Edit Suite, and we have a trusted timestamp for that, which was back in September of 2021 in this particular instance. So we have standard digital signatures, and we have a whole lot of other cryptography that goes along here. Every one of these individual assertions is cryptographically hashed; those hashes are then hashed again by the claim, and then hashed again by the signature. If you're familiar with the technology, we use something called a Merkle tree approach. And we're leveraging these pre-existing technologies; we're not reinventing signatures or hashing, none of these things. We're leveraging existing components and putting them together in this unique fashion to get you a content credential. One of the other things that we have, beyond the assertions we just talked about, is a series of assertions focused specifically on generative AI, which is, as I'm sure many of you know, very big these days and certainly making a lot of news. We established three different types of things specific to generative AI, or in some cases even AI in general. First and foremost, we want to establish that a given asset has either been created by or modified by a generative AI system.
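That chain of hashes can be illustrated with Python's standard hashlib. This is a toy sketch of the idea, not the spec's actual Merkle-tree or signature structures: each assertion is hashed, the claim covers the list of assertion hashes, and the signature would then cover the claim, so tampering with any assertion changes every digest above it.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    """Standard SHA-256 digest of a byte string."""
    return hashlib.sha256(data).digest()

# Toy assertions; in practice these are structured JUMBF/CBOR payloads.
assertions = [b'{"author": "John Smith"}', b'{"action": "c2pa.resized"}']
claim = b"".join(sha256(a) for a in assertions)  # claim references each assertion hash
claim_hash = sha256(claim)                       # this is what gets digitally signed

# Alter one assertion and the claim hash no longer matches, so the
# signature over the original claim hash fails to verify.
tampered = [assertions[0], b'{"action": "c2pa.cropped"}']
tampered_hash = sha256(b"".join(sha256(a) for a in tampered))
print(claim_hash != tampered_hash)  # → True: the modification is detectable
```

This is the tamper-evidence property: nothing stops someone from editing bytes, but any edit breaks the hash chain that the signature attests to.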
And potentially even the region of interest that was modified. So it could be completely created from whole cloth: for example, you went into something like DALL-E from OpenAI, or the like, and you typed in a prompt, and it produced an image for you; that would be the whole image created by generative AI. Alternatively, you could have gone into a tool like, say, Photoshop, and used generative fill, where you took one section and said, you know, add a piece here; or you went into a video and modified a couple of frames. Or with a document, for example, you're in something like Microsoft Word and you use Microsoft's Copilot to improve the text of a particular section of your document. In each one of these cases, we want the identification that generative AI was used to be part of that provenance. So that's all in there. You can absolutely do that either for the entire asset or for individual regions within the asset. You can also enable the ability to include what we call the recipe. You heard me talk about ingredients; of course, you need to know how to use those ingredients, and that's the recipe. The recipe is where things such as the prompt that was used, the AI model that was used, and other types of information can be incorporated as various assertions, so that downstream you not only know that generative AI was used but how it was used. And that is, of course, very important provenance to be included. On the other side, we wanted to enable creators to label their content as "do not train," so that if you're a creator or a publisher of a piece of content, you have the ability to incorporate into that asset an assertion that says whether or not it can be trained on, and if so, what types of AI could be used, whether it's generative AI or other types of AI, and whether it can be used during training or inference or both. We have a very flexible system for doing this.
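A "do not train" assertion can be pictured roughly like this. It's modeled on the training-and-data-mining assertion described above, but the entry names and "use" values here are simplified assumptions for illustration, not the normative schema.

```python
# Illustrative "do not train" assertion; entry names and "use" values are
# simplified stand-ins for the actual C2PA training-and-mining schema.
training_assertion = {
    "label": "c2pa.training-mining",
    "data": {"entries": {
        "c2pa.ai_generative_training": {"use": "notAllowed"},
        "c2pa.ai_inference":           {"use": "notAllowed"},
        "c2pa.data_mining":            {"use": "allowed"},
    }},
}

def use_allowed(assertion, entry_name):
    """Check whether the creator has permitted a given kind of AI use."""
    entry = assertion["data"]["entries"].get(entry_name, {})
    return entry.get("use") == "allowed"

print(use_allowed(training_assertion, "c2pa.ai_generative_training"))  # → False
print(use_allowed(training_assertion, "c2pa.data_mining"))             # → True
```

The point of the per-entry structure is the flexibility Leonard mentions: a creator can forbid generative training while still permitting inference or data mining, rather than making one all-or-nothing declaration.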
We modeled our work on some previous work by the European Union. The EU has a standard around text and data mining, and our model is fully compatible with that TDM standard, the TDM legislation out of the EU. This is not only leveraging existing standards but ensuring that our technology aligns with pre-existing regulations and legislation so that it can be used in various parts of the world. I mentioned before that we are compatible with a lot of different types of media, and we focus especially on the area of embedding. While you can establish a C2PA manifest for anything, whether that's a plain piece of text or an AI model, we also want it to be embedded, and that's the normal way in which C2PA manifests, these content credentials, get carried along with assets. As you can see here, we support numerous types of images, videos, audio, and documents; even individual font files can have manifests associated with them and embedded into them. In addition to being embedded, as I mentioned before, these can live in file systems, they can live in clouds, they can live out on blockchains or distributed ledgers. We've established numerous standardized mechanisms by which the manifest can refer to the content and the content can refer to the manifest. So even if they're not embedded inside of each other or packaged together, we've utilized, again, standard mechanisms such as URLs and URIs, HTTP headers, specifically something called a Link header, and more, to connect the pieces when they are disconnected from each other. And I'll talk a bit about that in a minute. So you can think about provenance, again, going, as we said, from creation through editing through publishing, et cetera. Each time, another manifest gets added to the mix so that we're able to continue to see this provenance throughout the chain. And as in that example I showed you, that can even branch from a simple linear list out to a tree.
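When the manifest lives apart from the asset, one of the mechanisms just mentioned is an HTTP Link header pointing to it. The sketch below shows the idea; the URL and the exact relation string are assumptions for illustration, not values taken from the specification.

```python
import re

# Hypothetical Link header a server might send alongside an asset, pointing
# at a remotely stored content credential. URL and rel value are illustrative.
link_header = '<https://example.com/credentials/abc123.c2pa>; rel="c2pa-manifest"'

def manifest_url(header: str):
    """Extract the manifest URL from a Link header with rel="c2pa-manifest"."""
    match = re.match(r'<([^>]+)>\s*;\s*rel="c2pa-manifest"', header)
    return match.group(1) if match else None

print(manifest_url(link_header))
# → https://example.com/credentials/abc123.c2pa
```

The same pattern works in reverse: the manifest can carry a URI back to the asset, so either side of a disconnected pair can locate the other.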
So you could have this tree of provenance, because each of the ingredients that you use could itself have provenance associated with it, and so you could have this deep tree of provenance that can be explored to understand where something came from. I mentioned that we utilize pre-existing cryptographic features, so I want to talk a little bit about those. We use standard cryptographic hashes, and we support a number of them, including the SHA family of hashes, to enable individual components to be identified and checked for tampering. We call those hard bindings. We also support mechanisms for soft bindings; these are used in similarity lookups. You may be familiar with the term perceptual hashing; we support those kinds of capabilities as well. Our digital signatures are based on the same X.509 certificate model that PDF and the web use. That lock icon that you see in your browser utilizes the exact same cryptographic technology that we do. If you digitally sign a PDF, you're again using this exact same technology. So it's well established, it's been around for over 30 years, and we've put it to use in the same way. And the idea of trust lists and certificate authorities, again, all well-established technology, is put to use to now apply to any of your asset types: not only to the web, not only to transport, not only to PDFs, but to your images and your videos and your audio and even your fonts or other things in which you would like to have established that provenance in a tamper-evident way. And these are just part of what we call trust signals. The reason that we think about this idea of trust signals is that trust isn't binary. First off, it cannot be determined by a machine. A machine cannot tell you whether or not you can or should trust something; only a human can make that decision. And a human makes that decision by looking at the series of signals incorporated into an asset.
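A hard binding, in the sense just described, is simply a cryptographic hash of the asset's bytes recorded in the manifest. A minimal sketch using SHA-256, with stand-in asset data:

```python
import hashlib

# A hard binding records a cryptographic hash of the asset's bytes in the
# manifest; verification recomputes the hash and compares.
def make_hard_binding(asset_bytes: bytes) -> str:
    return hashlib.sha256(asset_bytes).hexdigest()

def verify_hard_binding(asset_bytes: bytes, recorded: str) -> bool:
    return hashlib.sha256(asset_bytes).hexdigest() == recorded

original = b"\x89PNG\r\n...image bytes..."  # stand-in for real asset data
binding = make_hard_binding(original)

print(verify_hard_binding(original, binding))            # → True
print(verify_hard_binding(original + b"edit", binding))  # → False: tamper evident
```

By contrast, a soft binding such as a perceptual hash is designed to survive transformations and support the similarity lookups mentioned above, which is why both kinds of binding exist side by side.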
You know, where did it come from? Has it been modified? Who was involved in that process? When did it take place? All of those signals that we talked about help you as a human to make that decision. And the fact of the matter is that we recognize that two humans, given the exact same set of signals, do not necessarily come to the same result. So consider a piece of content with completely verifiable provenance showing that it was from CNN here in the United States, shown to two individuals, and nobody questions that provenance. There's a group of people who would, without question, trust it, because they trust CNN as a news source. And then there's a group of people over here who would not trust it at all, because they do not trust CNN as a news source. Same asset, same trustable provenance, same establishable provenance, but people will interpret the signals differently. And that's why trust must be determined by a human. So our goal is to provide all of those signals as part of the content credential and as part of our provenance, and for you as the consumer to then make that final decision based on all the information that we can provide you. As I mentioned before, our goal is to connect that provenance, that credential, with the asset; we want it embedded in the asset as it goes through its lifespan. Unfortunately, there are existing tools today that will remove the provenance because they don't know what it is. For example, if you upload an image with a content credential in it to a site like Instagram, Instagram will remove it before anyone ever sees it, not because they're bad, not out of malice, but because their goal has been to keep file size down; file size was more important to their consumers than establishing provenance. Now, Meta, the owner of Instagram and other sites, has come out recently as a huge supporter of C2PA and content credentials.
And one of the things that they're going to be doing as part of that support is no longer stripping that information, because now they recognize how important that provenance is to them and to their customers. So that will take care of them, but it will take time until everyone gets to that same point. It's also possible, and we understand that bad actors unfortunately exist, that they too can remove the provenance. This is the one downside we have faced by not creating, nor would we have wanted to create, our own file formats. For us to create a new image format and a new video format and a new audio format simply wouldn't make any sense. What made sense was for us to establish this in the context of all of these pre-existing formats and the ecosystems in which they exist. So in order to address this, we actually take a three-legged approach. Provenance is key; that manifest, that credential, is the core leg of our stool. But we also recognize that in order to make that provenance available even if it's potentially stripped, we need other techniques, and that's where things like watermarking and fingerprinting come into play. One of our big efforts for 2024 is not to define new watermarking technology; there are lots of great existing standards out there already for watermarking of various media types. SMPTE, for example, maintains the standard for video watermarking, and there are companies that have established themselves in the areas of image watermarking, document watermarking, and the like. What we're doing is working with all of these companies and other bodies to establish an interoperable means for existing watermarking technology to bind with and connect with those content credentials. And that is what will be the piece to connect provenance with watermarks and with fingerprinting, so that we have our three-legged stool in place across a variety of mechanisms and pre-existing workflows.
Which, again, as I mentioned, is a big part of what we've tried to accomplish. We've been out since 2021, and because of that there are lots of implementations. You'll find commercial implementations from Adobe, from Microsoft, from Nikon and Sony, from Digimarc. We also have implementations not only in software but in hardware: Leica, the camera manufacturer, released the first camera in 2023 that has built-in provenance; every picture that comes off of that camera has a content credential embedded into it. Qualcomm, the chip manufacturer, now ships it in their mobile chips, so that any smartphones or other devices made using the Snapdragon chip from Qualcomm again have that provenance built into them in a secure environment. We also see this in open source solutions such as ExifTool, and coming soon to other solutions. In the open source realm, there are implementations out there in a variety of languages that will enable folks to jump on this immediately for their own implementations and their own workflows. And we're seeing this in a number of places. We've been partnering, for example, recently with the Al Jazeera news network, which is utilizing open source technology to establish credentials for all of their news. We think that, especially with everything going on in the world today, that's a great opportunity for folks to be able to consume that content with credentials associated with it. So I mentioned earlier that we're also involved in establishing policy and other standardization efforts, so I thought I'd raise a few of those here. First and foremost, you may be aware that a lot of countries around the world today are in the process of establishing, or in some cases have already established, legislation, especially around generative AI: the ability for users and consumers to identify assets that have been created or modified with generative AI. That's a huge item today in a lot of countries.
They also want to make sure that the creators and owners of assets can establish their rights around whether those assets can be used for training AI, and as you saw, these are technologies and features that we've incorporated into C2PA. So we have been helping governments around the world, the US, the UK, the European Union, China, Japan, Singapore, Australia, New Zealand, and many more, to define their legislation around these technologies. And it's not to say that we need them necessarily to mandate C2PA, but we want them to mandate the establishment of provenance-based standards for assets, to establish that the idea of stripping metadata, stripping that provenance, is problematic, and that the utilization of techniques such as watermarking and fingerprinting is relevant. So this is great; you're seeing these come out almost daily sometimes. China and the EU have been on the forefront of this, the US and the UK have recently established things, and as I mentioned, you're seeing it from other countries. One of the things that has come to light, though, is that many of these countries prefer to reference standards that come from formalized standards development organizations, or SDOs. So while the C2PA, under the auspices of the Linux Foundation and the JDF, is the best place for us to continue to develop the standard, and to develop it as quickly as we have been able to, it's also imperative to us that we have a version of the standard that can be referenced as an ISO standard. And there are two things that we have been working on to make that happen. The first one is one that we have been working on for a number of years now in conjunction with the JPEG committee, which is, to give it its formal name, ISO/IEC JTC 1/SC 29/WG 1.
We like to refer to it as the JPEG committee; it's a lot shorter, and people know what we're talking about. They have a standard currently in DIS, draft international standard, status called JPEG Trust; you see the number, ISO/IEC 21617-1. That is, as I said, currently out for DIS ballot. And what does this do? It builds on C2PA, so it is fully compatible with C2PA; it builds on our architecture and infrastructure to focus on the JPEG family of standards. So not only the JPEG 1 that we're all familiar with; JPEG also includes JPEG 2000 and JPEG XL and JPEG XT and things called JLINK and JPEG Snack. There's a whole series of JPEG standards. And this is not only for still imaging; the JPEG family of standards also includes video, it includes multi-frame imaging, it even includes things like 360-degree images, all of which are part of the JPEG family, and to which JPEG Trust wants to connect C2PA in ways that are very tightly bound to the JPEG family of standards. That is what JPEG Trust represents. So if you're working with images, if you're working with the JPEG family of standards, JPEG Trust is great. It is, as I said, in DIS, and we expect publication this summer. A lot of work has gone into it, and there are some great extensions to C2PA. It's not just connecting JPEG and C2PA directly; they've added some new and additional capabilities on top of C2PA that the C2PA is considering bringing back in in the future. There's a lot of really good work in JPEG Trust. So that's a great thing. But we also want to establish C2PA itself as its own ISO standard. And so we have been coordinating with ISO TC 171/SC 2. For those of you not familiar with those terms, I forgot to explain them: TC is a technical committee, SC is a subcommittee. This is the committee that is responsible for, among other things, authenticity of information at ISO; every group, every TC at ISO, has a charter.
And this group's charter includes authenticity of information. So we've been working with them for a while now, and in fact, tomorrow the ballot will close establishing the C2PA as a Class A liaison. That's a special class of liaison with ISO through which we, the C2PA, can submit our standard through a process known as fast-track publication. There are a bunch of other abilities as well, but that's going to be our plan. When the ballot passes tomorrow, and we assume it will pass, I have no reason to believe it will not, a new working group, WG 14, will be created in TC 171/SC 2, and the C2PA will then proceed to deliver to that working group what I lovingly refer to as an ISO-ified version of the specification. For those of you that don't work in ISO or other standards bodies, every standards body has its own way of doing things, its own requirements, even its own language for how standards are written: certain words that you can and cannot use, and when and how you can use them. So, while we've done a fairly good job of being very prescriptive in the C2PA standard, we are not consistent enough to meet ISO's requirements, so we're going to have to do a little bit of work to bring it up to ISO standards. That should not be a problem; we have a number of folks, myself included, who've been through the ISO process many times. So we'll deliver that, and Working Group 14 will meet in May to review that first document, focused not on technical matters but really on these editorial areas. In other words, did we get all of the right ISO bits correct, and what do we have to fix to really make it a proper ISO document? Coming out of that meeting, we will then go on to the DIS process. And we hope, all things going as planned, that C2PA itself will be a publication by the end of 2024.
So that will give us, by the end of this year, two ISO publications that folks can directly reference for their national standards if they're so inclined. In fact, we already know that China, for example, is already moving ahead in this direction in establishing JPEG Trust as their national standard. Now, they didn't know about this other work, so they may end up including both, but they are already on this track, and that's wonderful; we'd like to see that happening. So let me close, and then I'm happy to take any questions that have come up during the webinar, anything that anybody has. As I've mentioned, our goal, and what we have accomplished in the last couple of years and will continue to accomplish, is the establishment of a set of standards that will be used for provenance of images, documents, time-based media such as video and audio, and even streaming content. I didn't mention that, but one of our efforts right now in 2024, we actually started the work previously, is to complete our work on live, real-time streaming content, which needs provenance as well. And then, as mentioned, our goal is to have that standardized at ISO in multiple fashions, so that it can be more easily adopted as a national standard. So, thank you all very much for your time this morning. Let me see what kind of questions we have. So, Manette, let me see. Sure, that's a great question. Manette wants to know about copyright: who owns the artifact? Oh, copyright, sorry, not of the standard, although that's a good question too; I might talk about that next. But who owns generative AI output? That's a question that we are not trying to deal with; that's a very legal question. What we want to do, though, is allow someone to establish copyright if they are so inclined.
So we're not dictating who or what can do that. If it's possible, if someone decides that a piece of generative AI can establish copyright, we don't know that yet, and those are not our decisions to make. But our standard allows humans, organizations, and machines alike to declare that copyright aspect. So just like I had before, and I'll go back here, I can show this in the example: just like we have "produced by John Smith," we could also have here in the UI, if it had been established, "copyrighted by John Smith" or "copyrighted by John's company," or whomever wanted to establish that. So we have the places to establish copyright; we don't say who can or cannot do that. Okay, so that's part of it. The same goes for "created by AI." If you've got this identification of creation by generative AI, then we also record what system did it. I don't have a picture of this, I'm sorry, but you would see "created by generative AI using DALL·E," or using Adobe Firefly, or using Microsoft Copilot, or Google Bard, or whatever system it is. I guess it's now Google Gemini; they changed the name yesterday or the day before. So whatever it is, you'd see that as part of this content credential information. All of that gets recorded in there and then exposed to the end user. So yes, absolutely, all of that is part of the provenance, and I hope that addressed your question. Anybody else have any questions? Anything about the work we've done, or the process we're going to go through for standardization? I'm happy to dive into technical bits if anyone has a detailed technical question they want to ask; the floor is yours. Maybe I did too good a job. Sometimes that's a good thing, when you've answered everybody's questions already.
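To give a concrete picture of the kinds of information just described, the sketch below builds a simplified manifest recording a creator, a copyright declaration, and the generative-AI system that produced an asset. All field names and labels here are illustrative stand-ins chosen for readability, not the exact assertion labels or structure defined in the C2PA specification.

```python
import json

# Hypothetical, simplified manifest: the real C2PA format defines its own
# assertion labels and serializes to CBOR inside a signed claim.
manifest = {
    "claim_generator": "ExampleApp/1.0",  # hypothetical application name
    "assertions": [
        {
            # Who made the asset, and the copyright they chose to declare.
            "label": "creative_work",
            "data": {"author": "John Smith", "copyright": "© John Smith"},
        },
        {
            # How the asset came to be, including generative-AI involvement
            # and which system produced it (e.g. DALL·E, Adobe Firefly).
            "label": "actions",
            "data": {
                "action": "created",
                "digital_source_type": "trainedAlgorithmicMedia",
                "software_agent": "Example Image Generator",
            },
        },
    ],
}

print(json.dumps(manifest, indent=2))
```

A consuming application would read this kind of record and surface it to the end user as "produced by John Smith" or "created by generative AI using Example Image Generator."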
Yeah, you know, so again, this is what we've been working on, and we continue to do it. We meet regularly every week; we are a very fast-moving organization, because this is a very fast-moving area, as I'm sure many of you are aware. To give you some other pieces about our work: I mentioned that we have our technical working group, which I chair. Within our technical working group we have about nine different task forces, and each one focuses on a specific area. We have one focused on AI and ML, one focused on trust lists, our user experience task force, one focused on watermarking, as you heard, one focused on live video, et cetera. This enables each of those groups to have a focused set of individuals working on it and producing results that all come up under the umbrella of the TWG. Okay. "Leonard, do you want to leave a link in the chat for where folks can go review the spec, whenever they have free time?" Absolutely. You can just go to our main site, c2pa.org, and there's a link for specifications there, or you can go directly to the specification site if you want to bypass the main one. But of course our main site's not bad either, so you could certainly start there as a place to get that information. Ah, Keith, let me see. "Seems like a lot of apps need to evolve significantly to adopt and employ this capability. Will there be a set of easily consumed libraries?" Yes. So, good question. We don't think it requires significant changes, based on what's already happened. Given a library, and I'll talk about that in a minute, we've seen people add it to their applications within days or less for an initial implementation. How much work is required beyond that initial implementation really depends on just how much you want to expose to your user and what types of assertions you want to give the customer.
So there are a number of libraries available today, predominantly open source. There is a library in Rust, which also has C/C++ wrappers and Java wrappers; that one is very popular. There's also a library in Python, and a library in JavaScript that can be used either client-side or server-side, for example in a Node-like environment. Those are also heavily used in a number of products. So yes, there are a lot of options out there today. Like I said, if you don't want to use one of those complete libraries, you can pick other pieces and put them together yourself. As I mentioned, we use standard cryptography, so you can get standard cryptographic libraries; we use standard packaging mechanisms, so you can get one of those packaging libraries; and we use formats like JSON and CBOR, so you can get a JSON and a CBOR library. So you could put together your own from other libraries, or you could use one that's already completely and totally pre-built. That's up to you, based on the needs and requirements of your solution. I hope that answers your question. All right, I'm not seeing anything else. Candace, let me turn it back over to you, and thanks again, everybody, for your time today.
Thank you so much, Leonard, for your time today, and thank you, everyone, for joining us. Just a quick reminder that this recording will be up on the Linux Foundation YouTube page later today. We hope you join us for future webinars. Have a wonderful day.
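The "assemble it yourself from standard pieces" approach described in the answer above can be sketched in miniature: hash the asset, build a claim that binds to that hash, serialize it, and sign it. This toy uses JSON and an HMAC from the Python standard library purely to show the moving parts; a real C2PA implementation uses CBOR/COSE serialization and X.509 certificate-based signatures instead.

```python
import hashlib
import hmac
import json

def make_claim(asset_bytes: bytes, author: str) -> dict:
    # Bind the claim to the asset with a cryptographic hash.
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "author": author,
    }

def sign_claim(claim: dict, key: bytes) -> dict:
    # Serialize deterministically, then sign. HMAC stands in here for the
    # certificate-based signature a real implementation would produce.
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(signed: dict, key: bytes) -> bool:
    # Recompute the signature over the claim and compare in constant time.
    payload = json.dumps(signed["claim"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

key = b"demo-key"  # placeholder secret for the sketch
signed = sign_claim(make_claim(b"example image bytes", "John Smith"), key)
print(verify(signed, key))   # True: claim is intact

tampered = dict(signed, claim=dict(signed["claim"], author="Mallory"))
print(verify(tampered, key))  # False: any edit breaks the signature
```

Swapping the JSON serialization for a CBOR library and the HMAC for a certificate-based signing library gives the same overall shape with off-the-shelf components, which is the point being made above.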