Well, thank you for joining the meeting today. Today we have the pleasure of having Ken Ebert and Scott Harris join us from Indicio. They've really done a lot for decentralized identity, and I'm really looking forward to their presentation today. We've had a lot of great meetups recently and a very well attended job fair. And this time we're really going to get a good deep dive into trusted data ecosystems: how they can easily integrate into existing infrastructure, how easy they are to update, and how easily they interoperate with one another. Really, this is going to show you how the open source space continues to create sustainable, continuous improvement and innovation. Anyone who has questions throughout the session today, feel free to put them into the chat; we're also live streaming on YouTube today. And if it's something that we really need to dig into, feel free to raise your hand and I can take you off mute so you can interact with the team directly. So at this point, I'd like to turn the session over to Ken. Ken, if you want to kick us off and walk us through what you're doing to empower trust.

Sure. We're glad to be here from Indicio. I'm currently the CTO, and Scott Harris is our VP of Operations. We enjoy working together, not only inside of our own company but with the community, to help build and grow the ecosystem that is based on some of the open source code that has been developed. Scott, do you want to take it away?

Yeah, thank you, Ken. And thank you, John; that description creates a whole lot of expectations for this presentation. I hope I meet them all, and Ken meets them all. We want to tell you a little bit about our company first. Indicio is in the business of trusted data ecosystems, and that covers a lot of ground. The trusted data ecosystem, as we view it, is the minimum viable setup that enables the exchange of verifiable credentials. We are also set up as a public benefit company committed to advancing these trusted data ecosystems as a public good. And we have identity application teams that build these trusted data networks for our clients. We have clients in a lot of different verticals, and they come to us for a lot of different things, but they usually start with the basics, what we term decentralized identity 101. We spend a lot of time talking to clients who come in with an idea of what this is about, who have their toes in the water or have spent a fair amount of time in it, but still need some clarification and education on how to go about building this in the real world. And we've had great success at Indicio because we've really focused on building things in the real world and putting things to use. So I'm going to do a little bit of a history lesson and start with the whys. Why are we building this for our clients? Why are we here? Where do we start? The trusted data ecosystem, as I said, is what we view as the minimum stand-alone unit of decentralized identity, and it consists of a credential issuer, a credential holder, and a credential verifier. When we put something like this into action, we find that all three participants in the ecosystem start to derive immediate value from it. It eliminates the cost of untrustworthy data, and I'll talk in a minute about what untrustworthy data costs all of the participants.
So the trusted data ecosystem is our starting point: a very simplistic view, but a very valuable view, of how we exchange verifiable credentials. Why do we do it? Well, because we have a lack of trust in the existing data models as things are exchanged. Everybody probably has a LinkedIn account, and everybody signed up for LinkedIn using an email address. And the question is, what did you do to get that email address? If you're like me, you went to Gmail and you signed up, and you didn't prove anything about yourself before you got that Gmail address. And now you use it to validate your entrance into LinkedIn. The exercise that we are engaged in in this model is a lack-of-trust exercise, where we take things that have no backing and no support to them and use them at the next stage and the next stage and the next stage. So each time we sign in, or log in with Google or log in with Facebook or do any of these things, we're simply extending a cascade of mistrust, or lack of trust, or an absence of trust. It's clear that needs to change. It'd be great to know that somebody on LinkedIn was actually who they claimed to be. Our CEO, Heather, likes to use the case that there are, I believe, at least seven of her on LinkedIn, all using her same profile picture. So we have a problem with identifying individuals. The goal of decentralized identity is to take that concept of trust and add in efficiency.

So here's my history lesson for the day. When we go back in time, from 3200 BC up until about 1964, we had an analog system of document exchange, where we would hand-carry things or we would put them in the mail. And we had a means of deciding whether or not to trust that information. We had seals from notaries. We had sealed envelopes. We had registered mail. We had someone walking into the office and pulling something out of their pocket and handing it to you. So it created a lot of trust between parties, but it was incredibly inefficient, of course. And it did have the benefit of being very privacy-protecting, because I could consent to taking something out of my pocket or going to the post office to mail it. What has happened since 1964 until now is what I call a hybrid world of trust, efficiency, and privacy. We've sacrificed a whole lot of trust in the exchange of documents and data and in identifying individuals for the sake of some efficiency. So we do things like upload documents and scan things and email them around. We do gain some efficiency, but in doing that, of course, we have a large gap in the ability to trust what's sent to us or what we receive. And in doing so, we've centralized things and we've given up almost all of the privacy that we had in the past. The goal of the decentralized identity world is to optimize all three of those factors: to gain the maximum amount of trust with the maximum amount of efficiency and still protect the privacy of the users, of the entities that are exchanging the data. And we think we can do that, and we are doing that.

So why does an identity document work in analog life? Well, it works because the person who receives it, this shop owner on the right, has a means to visually or somehow verify the source of the data, and they have a means to validate whether or not that data's been altered.
So in the case of something simple like a driver's license, we can determine whether it's been delaminated, we can read the source of it right on the top, and we can look for holograms and things of that nature that say that it's real. What that amounts to is that it's trustworthy, and that trust comes from two general source buckets. It comes from the integrity of the data: being able to identify whether it's real. In other words, has it arrived as issued, as created by the issuer? Another way of putting it: has it been altered or tampered with? Did I take an X-Acto knife to it? Did I hack into it and alter a date or a name or a photo? So there's the integrity bucket, and there's the authenticity bucket: a means to identify the source of the data and verify that that source is something you trust. Does this driver's license come from where it claims to be from? If it was delaminated and you couldn't see the state name at the top, you might question that. So in the analog world, we have a really great way of checking data integrity and data authenticity. We don't have that in the hybrid digital world, but we can have it in the decentralized identity world.

Right now in the hybrid world, we place trust in what I call representations and attestations of data. And that carries cost and risk, because you have to ask all of the questions: How do I know it's real? How do I know if it's been altered? How do I know if it's coming from where it claims to have come from? Really, 2FA with email is fine, because we all know that email never gets hacked. So there are problems with that. And what this does is it forces a choice between trust and efficiency. If we get a document in this manner, scanned and emailed or uploaded, we have to go through some exercises or processes in order to decide whether to trust the document. And all of those processes are costly, or inefficient, or expensive, or have privacy and GDPR concerns. And we decide, okay, we don't want to go through all those exercises; we need some efficiency. So in order to gain efficiency, we decide to take the risk and we trust those representations. We probably do the work we have to do, but we may not do the work we want to do, and we assume the risk for the sake of efficiency. I look at it this way: efficiency, trust, and risk exist in a triangular relationship right now, where you can choose one, but in doing so you move away from the others, or you move towards a different position on the triangle. Choosing trust right now moves you away from the efficiency side, and if you choose efficiency, then you go down to the bottom where there's more risk. The decentralized identity model using a trusted data ecosystem no longer forces a choice. It becomes a linear scale rather than a triangular relationship, where risk exists on one side and trust and efficiency exist together on the other side. And my slides are going to click the next time, I promise.

So trust exists within a trust model, and this is what largely falls under the heading of governance. It takes two forms. It takes the form of cryptographic trust: that's the code bases and all of the good things that we're gonna talk about here in just a minute that make this function and work. And of course, if all of that works as intended, then we do have cryptographic trust: we have a means of going to a ledger or reading signatures and determining whether some data is authentic and whether or not it has integrity.
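To make those two buckets concrete, here is a minimal sketch in Python using plain Ed25519 signatures from the cryptography package. This is an illustration only, not Indicio's stack: the trusted-issuer set stands in for issuer keys that would really be anchored on a ledger, and the license data is made up.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def raw(pk: Ed25519PublicKey) -> bytes:
    return pk.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# The issuer signs the data once, at creation time.
issuer_key = Ed25519PrivateKey.generate()
license_data = b'{"name": "Alice Example", "state": "UT"}'
signature = issuer_key.sign(license_data)

# Authenticity bucket: sources I have decided to trust (in a real system,
# issuer keys anchored on a ledger; a governance decision, not math).
trusted_issuers = {raw(issuer_key.public_key())}

def verify(data: bytes, sig: bytes, pk: Ed25519PublicKey) -> bool:
    if raw(pk) not in trusted_issuers:
        return False          # unknown source: fails the authenticity check
    try:
        pk.verify(sig, data)  # integrity bucket: unaltered since issuance
        return True
    except InvalidSignature:
        return False          # tampered with, like a delaminated license

print(verify(license_data, signature, issuer_key.public_key()))         # True
print(verify(license_data + b"!", signature, issuer_key.public_key()))  # False
```

The integrity check is pure math, while the authenticity check is a policy decision about which sources to trust, which mirrors the two buckets described above.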
But there's also the philosophical trust: the decision that the receiver of the data, over here on the right, needs to make as to whether or not the data they received comes from a trusted source. That's a governance thing, and it's a thing that we can actually execute in a machine-readable way as well, as long as the shop owner on the right has made a determination of which things and which sources are trustworthy and which are not. And in a trusted data ecosystem, trust accumulates. We have a credential issuer here on the left that's doing some trust exercises: some KYC work, some background checks. If we continue to use the driver's license example, the government agency does some work to determine who we are before they hand that over. Then they hand it over, issue a credential to the credential holder, who then passes that credential, or some attributes of that credential, along to the credential verifier. And because the verifier can trust it, with their cryptographic assurance and their philosophical assurance ("I trust the government"), they've now got some data in their possession. The credential verifier can now use that data they received to turn around and issue a derivative or secondary credential, or a completely separate credential based on the data, because they've now been able to positively identify some attribute of the credential holder. And we can pass it along; ecosystems start to attract each other, the way a constellation might give birth to new stars.

So once one ecosystem is actually deployed in the wild, we have an issuer and a holder and a verifier, and they start to see value, because they gain the efficiencies that I talked about, that triangle becoming a linear scale, gaining trust and gaining efficiency all the while reducing risk. And what we've found is that a small deployment of a trusted data ecosystem starts to attract others to say, hey, I'd like to issue some credentials in this way too for this use case, whether it's healthcare or financial services or travel or any of the many things that we're working on. One issuer attracts others. Once a verifier is able to take some data in, trust it, and execute a business function based on that data, very quickly other verifiers come in and say, hey, we want that efficiency too. We want that trust, we want that fraud protection, and we want to start to participate. So the single minimum viable deployment of a decentralized identity model is this TDE, and it attracts participants.

Scott, I think that one of the key points that you made there is that each of the participants is deriving some kind of value or benefit out of the system. That's what makes the magic happen and attracts the other people: they want some of the same magic. If a holder sees they can pass through the airport in two minutes instead of a half an hour, they see a benefit. If a holder says, I can open a bank account with less friction, I don't have to sit in the lobby of the bank for a half an hour, I can do it online and open a bank account more rapidly, then they see benefit. And each of the people in the ecosystem, by deriving benefit, attracts other people who want that same type of benefit, or who see auxiliary benefits that might not have been initially anticipated with the minimum viable trusted data ecosystem.
Yeah, that's an excellent point and worthy of restating in a number of different ways. A great example in financial services is a government that might issue a credential to a resident within its jurisdiction, such as a driver's license, or even just based on their existence with tax returns or things like that. So now the resident has some trusted information that they can use to open a bank account instantly. And great, okay, so now we've got the minimum viable setup, where we have a government issuer and an individual as a government credential holder who opens a bank account. And now the bank suddenly realizes that, now that we have them in our system, we can issue them our own credential, and they can log into our systems in a credential-based setup rather than a username-and-password setup and expedite all of that. Not only do we expedite it, but we lose all of those inefficiencies that surround usernames and passwords. So even the single verifier suddenly decides that they can move to the other side of the equation. And once these start to exist, as long as they are proven to be interoperable, and Ken will discuss that quite a bit more in a minute, we find that different ecosystems can and do attract each other, a lot like the way a set of molecules might attract each other in a solution. A holder in a travel ecosystem may find themselves able to exchange a credential with a verifier in a financial ecosystem. An issuer in a healthcare ecosystem may give something to a holder that they're gonna use in the travel ecosystem. And very quickly we find a lot of moving arrows and a lot of moving pieces. Our view of things is to set up these individual units, these trusted data ecosystems, so that they bring value even if they exist entirely on their own in isolation; the participants have value. But of course, what we've found is that this is the net effect that happens very quickly. And these are the building blocks that go into that. And I think I'll turn it over to Ken to talk some more about the actual technical stuff that we do.

So the building blocks that we at Indicio use to help our customers around the world are the same building blocks that are available to all of you, because it's an open source ecosystem at its foundation. Starting with the network at the bottom: the network is where the immutable data needs to be stored to establish the decentralized identifiers, or DIDs, of the issuers; the schemas that might be shared amongst the different participants; the credential definitions, basically the equivalent of a public key for each of the fields in the schema that will be signed by an issuer; and any revocation registry information to keep revocations up to date, accumulators and deltas to the tails files and things like that. Those four basic things, written to a ledger, are the foundational support of the whole system.
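To make those one-time "setup to issue" ledger writes concrete, here is a hedged sketch against ACA-Py's standard admin API. The admin URL, schema name, and attribute list are assumptions for illustration; only the schema, the credential definition, and an optional revocation registry ever touch the ledger, never any personal data.

```python
import requests

ADMIN = "http://issuer-agent-admin:8031"  # assumed ACA-Py admin endpoint

# 1. Publish the schema: the shared shape of the credential.
schema = requests.post(f"{ADMIN}/schemas", json={
    "schema_name": "driver_license",
    "schema_version": "1.0",
    "attributes": ["name", "date_of_birth", "license_class"],
}).json()

# 2. Publish a credential definition: the issuer's per-field signing
#    material, optionally with a revocation registry behind it.
cred_def = requests.post(f"{ADMIN}/credential-definitions", json={
    "schema_id": schema["schema_id"],
    "tag": "default",
    "support_revocation": True,
}).json()

print(cred_def["credential_definition_id"])
# From here on, issuance flows issuer-to-holder; no further ledger writes.
```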
Indicio operates its own networks: a TestNet, a DemoNet, and a production network. I think we have one of the largest Hyperledger Indy based networks available, with permissioned write and public read. We are on five different continents with over 25 node operators. The TestNet is typically for rolling out new versions of the software itself, the indy-node and indy-plenum code that's at the very basis and foundation of how a node operates in the network. The DemoNet is a place for people to test out their solutions on top of the network, and the MainNet is for production use cases, where real interactions can take place. There are other Indy networks around the world, in Europe, the United States, and elsewhere, that offer that type of functionality. As you work with your customers, you might discover that some of them require their own private network. Some governments, for instance, require that all of their data stay within their jurisdiction, and so they won't want to operate off of a worldwide network that's professionally maintained; they need their own network, no matter what the available solutions elsewhere might offer.

That forms the basis of the network at the bottom. The validator nodes are the nodes participating in the consensus algorithm to agree on the immutable objects that have been written to the network. The optional observer nodes are not used in practice yet; they exist so that, if the network ever gets constrained in performance, a read-only cache of the data can be made available. Most operations are verifications, not creating a new issuer and registering their particulars on the network. The writes to the network only set you up to be able to issue. Once you're issuing, credentials go directly from the issuer to the holder and don't involve updates to the network itself. But the optional observer nodes allow performance to be boosted beyond the limitations of just a set of validator nodes.

Above that are the agents. That's where the fun and exciting stuff happens. Once you've selected a network to build on top of, you want to build your solution. And you need to address all three parts of a mini ecosystem, a minimum viable ecosystem, in order to be happy and successful. If you don't have any issuers, you don't have any credentials in the system. If you don't have any holders, there's nobody to take the data and do anything useful with it. And if you don't have any verifiers, you're missing the actual derivation of primary value from a credential. Minimum viable ecosystems need to address all three. You don't have to have 12 issuers, you don't have to have a hundred verifiers; you need at least one of each to start the system out. Once you get a system started, then the rest of the building out of that constellation will occur, and more and more issuers and verifiers and holders will be attracted to the system. In our experience, even before we finish the first implementation of a minimum viable product, our customers start to see the value of how this works and other use cases they can apply it to. So it's really critical to get out there and start showing something. Even if it's not in public, you're showing it to the internal company and making sure that value is communicated to all the different departments and all the different partners of that company so that they can see where the value is going.

The agents are what makes the magic happen. There are enterprise agents, built into software on a back end, fully automated; we see some of those. Those we typically base on top of the Aries community open source projects. ACA-Py, Aries Cloud Agent Python, is an excellent open source project: a cloud-based agent that typically allows enterprises or end users to be represented in the ecosystem.
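As a small sketch of how two such agents first meet, here is the standard ACA-Py connection handshake driven over the admin API; the host names and ports are assumptions for illustration.

```python
import requests

ISSUER_ADMIN = "http://issuer-agent-admin:8031"  # assumed admin endpoints
HOLDER_ADMIN = "http://holder-agent-admin:8041"

# The issuer's agent creates an invitation...
invitation = requests.post(
    f"{ISSUER_ADMIN}/connections/create-invitation"
).json()["invitation"]

# ...which travels out of band (QR code, deep link, email). The holder's
# agent accepts it, and DIDComm messaging takes over from there.
conn = requests.post(
    f"{HOLDER_ADMIN}/connections/receive-invitation", json=invitation
).json()
print(conn["connection_id"], conn["state"])
```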
Many of the agents will have a web UI to control them. Instead of a pure enterprise agent on-premise, there may be a cloud agent hosted in the cloud with a web UI to control it. The web UIs also have the capability of hosting the control modules, the controllers, that might be necessary for integration with existing systems. Most of these systems aren't written fresh from the ground up as a revolution; they're typically evolutionary and need to cooperate and participate with existing systems. That's how you'll cause the least disruption to an organization and add value without forcing a costly rip-and-replace strategy. So you wanna pay attention to what the controllers look like: how do you integrate and obtain data to issue it, and how do you integrate verifications into existing processes to make them smoother and more manageable?

Edge agents, typically on a mobile device or a laptop or desktop, allow the end users of a system to have an agent that represents them. Because mobile devices move around and don't have fixed endpoints, it's advantageous to have something like a mediator agent that provides a fixed endpoint and also provides caching and buffering of the messages being sent to an edge agent, to allow that edge agent to participate in an exchange even if it might be offline. The initial invitation can arrive at the mediator; the edge agent picks it up the next time it's connected and continues the conversation, whether that's a message exchange or another protocol such as issuance or verification. The edge agents that we use do use a mediator. There are two that are reasonably well supported: ACA-Py functioning as a mediator agent to provide that secure routing function, and Aries Framework JavaScript, which we have also used to provide similar functionality. Our edge agents are based on, again, Aries, and they operate in React Native: it's the Bifold project, also sometimes called the Aries Framework React Native agent or something like that, something really complicated. We call it Bifold because it's simpler to refer to. That wallet provides the user interface to exchange credentials, to be issued them and to present them in a user-friendly way. Those are all open source projects that allow for the building of the custom or generic agents that will represent the businesses or organizations in your ecosystem.

Many people neglect the agents that might be representing things. IoT devices can participate; we have some clients that do that as well. Typically their needs are simpler. There's less UI interaction with an IoT device, so it can be less complicated, but they can either have an agent hosted on the device itself or interact with a cloud agent that represents them. That's kind of the top-to-bottom of all the different pieces that we use. And you need at least, like I said, one issuer, one holder mechanism, and one verifier.

The interesting thing about the data carried by verifiable credentials is that the PII stays off of a public ledger. It stays under the end user's control. And by doing this, we have privacy by design and compliance protection. The ledger is only used to verify the authenticity of the data: that it was signed by the issuer and passed to the holder, that nothing got altered, and that it was really signed by somebody that you trust.
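Returning to the controller pattern described above: a minimal sketch, assuming a Flask app and an ACA-Py agent started with a --webhook-url pointing at it. The integration steps here are hypothetical placeholders for whatever system of record you already run.

```python
from flask import Flask, request

app = Flask(__name__)

# ACA-Py posts events to <webhook-url>/topic/<topic>/ when started with
# --webhook-url http://localhost:8022/webhooks (assumed address).
@app.route("/webhooks/topic/<topic>/", methods=["POST"])
def handle_event(topic: str):
    event = request.get_json(force=True)
    if topic == "connections" and event.get("state") == "active":
        print("Connection established:", event.get("connection_id"))
    elif topic == "issue_credential" and event.get("state") == "credential_acked":
        # Integration point: update the existing system of record here,
        # e.g. mark the customer as credentialed in a hypothetical CRM.
        print("Credential delivered:", event.get("credential_exchange_id"))
    elif topic == "present_proof" and event.get("state") == "verified":
        print("Proof verified:", event.get("presentation_exchange_id"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8022)
```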
Parking the data in the holder's wallet allows that interaction to occur in a way that's under their consent and control. Typically, data is not stored in credentials today; at an issuer, it might be stored inside of a database, and that's the typical location to find source data. So integrating with a controller, taking that database as the data source, and putting the data into a verifiable credential allows for participation with the holder's consent, and the holder becomes the transport mechanism. Instead of having to integrate directly between every issuer and verifier in an N-way integration effort, you now have a standard for how to give the data to the subject of the credential, the holder, and let them determine who they're going to share it with. One simple integration with a simple protocol allows that to occur on both sides without direct point-to-point integrations. It also preserves the user's privacy.

The technology uses cryptography to accomplish all this magic. A couple of years ago, I heard a number of complaints that the cryptography was black magic. I don't like black magic; I like white magic. White magic is magic that can be understood. I'm not a cryptographer myself, but I wanted to go through every computation that occurred in the Indy ecosystem. So I had the cryptographers walk me through every step, every transformation, the entire line-by-line 143 steps that occur in the actual computation of the cryptography for issuance and then for verification, to see how the math worked and to make sure that I had confidence in it. The documentation is published, and whoever wants to dig that deep can dig right down to the bottom of the cryptography and satisfy themselves that it is indeed viable. It's been evaluated by other cryptographers, people who are smarter than I am about cryptography, but the math makes sense to me, the mechanisms make sense to me, and that's how we get the signatures that make sure the data is transmitted safely.

Again, the issuer uses those cryptographic tools to do the signing, using their DID and the equivalent of a public key that they put on the ledger, so that the verifier can verify that it was signed and processed properly. The credential is given from the issuer to the holder, and because of the ZKP-style zero-knowledge proof mechanisms used in Hyperledger Indy, the end user is able to do something called selective disclosure, or use predicate proofs, to either obscure some fields and reveal others, or to put together a predicate like "birthdate greater than 19 years ago," so that they can present a proof that they meet a certain requirement without having to reveal all the data. That's an important part of the cryptography that the Camenisch-Lysyanskaya signatures, the CL signatures that are used, make possible. There are other signature styles that are compatible with W3C standards that don't allow for selective disclosure. Those are great for cases where you're going to reveal all the data anyway, or where you want a simpler mechanism and don't need the machinery of CL signatures.
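As a sketch of what such a selective-disclosure request looks like on the wire, here is an Indy-format proof request sent through ACA-Py's admin API; the attribute names, dateint encoding, and connection ID are illustrative assumptions. The verifier learns the holder's name and that the age threshold is met, and nothing else.

```python
import time
import requests

VERIFIER_ADMIN = "http://verifier-agent-admin:8051"  # assumed endpoint
CONNECTION_ID = "replace-with-a-connection-id"       # placeholder

# "Born at least 19 years ago", as an integer YYYYMMDD dateint threshold.
nineteen_years_ago = int(time.strftime("%Y%m%d")) - 19_0000

requests.post(f"{VERIFIER_ADMIN}/present-proof/send-request", json={
    "connection_id": CONNECTION_ID,
    "proof_request": {
        "name": "age-check",
        "version": "1.0",
        # Reveal only the name; every other field stays hidden.
        "requested_attributes": {
            "name_attr": {"name": "name"},
        },
        # Prove date_of_birth <= (today - 19 years) without revealing it.
        "requested_predicates": {
            "age_pred": {
                "name": "date_of_birth",
                "p_type": "<=",
                "p_value": nineteen_years_ago,
            },
        },
    },
})
```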
The awesome part for the verifier is that they can cache most of the artifacts they use for verification, so fewer internet accesses are required. So if you're in a place where you have a sketchy internet connection, you can still do an interaction between the holder and the verifier, in an offline manner or a limited-connectivity manner, and still get the presentation done. This diagram is taken from the Cardea project, where a medical issuer gives data to a holder, who then takes it and presents it to a government official or an airline for verification and validation. Governments and medical communities don't always integrate well together, and so this is an awesome way to transport that data without a direct integration and preserve privacy at the same time.

Just a quick thing to add there, when you were speaking about parsing out and selective disclosure: so many of our clients are under extreme GDPR pressure to not retain or obtain any data that is not necessary to the execution of a business service. That's where some of this magic happens. You can take a credential that is quite robust and may contain a lot of PII, and a verifier can request and obtain only the pieces they need to do their piece of the business. That really makes the legal teams happy, and it makes the executive teams happy, that they can take only what they need.

It's important to balance out both the techie parts and the business parts of any solution that you're building. If you don't take both into consideration, you're going to have a nasty surprise one way or the other. Having that coordination between the business people and the legal people and the marketing people and the techie people makes for a happier outcome.

Looking at the code architecture: the fundamental library at the bottom used to be all part of just Indy, and then it was split out as Hyperledger Ursa because it was deemed appropriate that it could be shared among multiple projects. The cryptography library has options for including all sorts of fun, interesting things in your cryptographic suite. It allows, for instance, some Chinese algorithms to be included for nations or places where that's required. Some people look at it and say, well, why did you do that? Why would you have algorithms that aren't universally trusted? It's because the different components of the cryptographic library may need to be altered to meet certain regulatory requirements in certain jurisdictions. For the Hyperledger Indy project, we focused primarily on the CL signatures. There are new signatures being evaluated and added to that cryptographic suite to allow for simpler signatures, such as Ed25519 (I can never get the numbers right). There's a new signature type that allows for a basic JSON-LD style signature that just signs the entire package and turns it over. If you're gonna reveal all the data and there's only five fields anyway, then ZKP-style signatures might not make sense for that type of credential. There's also BBS+ that's being explored. As those new signature types are added to the Hyperledger Ursa library, they can be taken advantage of by the layers above.

On the left, the green bits are the bits that form the ledger itself. There's the consensus algorithm, the indy-plenum code: that's the Byzantine fault tolerant algorithm that keeps the ledger nodes in sync, so that everybody agrees on what was written to the ledger and it can't be tampered with.
The indy-node code is the piece that receives a transaction and actually evaluates and writes it, or reads a transaction from the ledger to produce it back out to anybody who needs to consume it. There are also Hyperledger Indy plugins. There's an experimental token plugin, for instance, that allows tokens to be used on a network for either network fees or other types of value exchange. These plugins can extend the functionality of the network without forcing everybody to take the change by pushing it into the node or plenum code.

On the right are the stacks that sit slightly above the node code to allow applications to be written. The Indy resolver takes DIDs, which are basically addresses: it'll take a DID method and resolve it, figuring out where to go on the ledger to get that particular information. The SDK above that is a form of packaging to make those calls a little easier. The agent code sits above that, and typical agent code, like I said, is Aries Cloud Agent Python, ACA-Py, or the Bifold project built on top of Aries Framework JavaScript; those are the ones we favor. There are other frameworks as well: there's one for Go, there's one for Java, and they're all interoperable because they maintain the same protocols and standards for interacting. The enterprise and mobile apps that sit on the very top are the places where the customization to make it work with your community takes place. If you had to start somewhere, start at the top and work your way down. Don't try to start at the bottom and work your way up. Figure out how you're going to take some existing agents and code and samples and modify them to fit your particular vertical.

The Cardea project that SITA and Indicio worked on and contributed to Linux Foundation Public Health was built with this same technique. It took the cloud agent, Aries Cloud Agent Python, to represent issuers and government agencies and verifiers, and built pieces on top of that: custom UIs to handle the interactions and the maintenance of the credentials that were going to be issued. It took the Bifold project and built on top of it to customize the user interface and user experience, and it also customized that same Bifold project to create a mobile verifier for small businesses to use once a traveler had arrived in country. That complete ecosystem preserves the privacy of health data by having a primary credential, a vaccination, a test result, or a vaccine exemption credential, that is presented to a government agent. The government agent evaluates it according to the machine-readable governance, determines whether it meets the rules or not, and then issues a derivative credential, which is basically a subset of the data that says: here's your name and some identifier to help associate it with you, and whether or not the government has approved you for entry into the country. That's called a trusted traveler credential, and it can be used with the local restaurant, casino, hotel, or sports venue to determine whether or not the government approved you according to its rules. That removes the liability of holding healthcare data from all of those verifiers and also preserves the privacy of the passenger, the traveler. So a pretty cool outcome there as well.
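A hedged sketch of that verifier-becomes-issuer step: after a health proof verifies, the border agent issues the slimmer trusted traveler credential. The endpoint shapes are ACA-Py v1; the governance check, IDs, and attribute names are illustrative assumptions, not the actual Cardea code.

```python
import requests

ADMIN = "http://border-agent-admin:8031"           # assumed admin endpoint
TRAVELER_CRED_DEF_ID = "replace-with-cred-def-id"  # placeholder

def meets_entry_rules(pres_ex: dict) -> bool:
    # Hypothetical machine-readable governance check, stubbed for illustration.
    return True

def on_proof_verified(pres_ex: dict, connection_id: str) -> None:
    """Called by the controller when a health-credential proof verifies."""
    if not meets_entry_rules(pres_ex):
        return
    revealed = pres_ex["presentation"]["requested_proof"]["revealed_attrs"]
    # Issue the derivative credential: a name, an identifier, and the
    # approval -- no health data is carried forward to later verifiers.
    requests.post(f"{ADMIN}/issue-credential/send", json={
        "connection_id": connection_id,
        "cred_def_id": TRAVELER_CRED_DEF_ID,
        "credential_proposal": {
            "@type": "issue-credential/1.0/credential-preview",
            "attributes": [
                {"name": "traveler_name", "value": revealed["name_attr"]["raw"]},
                {"name": "approved_for_entry", "value": "true"},
            ],
        },
    })
```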
So I think at the end here, before we take questions, we wanted to tackle some of the marriage, as Ken mentioned, of technology and business strategy in this sort of trusted ecosystem deployment. Our philosophy at Indicio has always been to start small: start with that minimum viable unit, the trusted data ecosystem, with one issuer, one holder, and one verifier, and understand that when you package it up that way, you can derive immediate value from even that very small deployment, plus the ease with which it integrates with existing systems without having to do all of the complicated integration. So it's a phased, agile approach that we take, and it's proven to be very successful. So when you go pondering how you can use this in your world, in your business: start small. It's a grassroots approach, and it's borne a lot of fruit already.

We've also seen a lot of success moving from something very simple, a prototype, then a small trial, then production, and then scaling. That allows issues to be discovered and resolved early rather than late in the process, and it has resulted in more successful outcomes. Yeah, certainly with a top-down, build-Rome-in-a-day approach, you do a lot more problem solving before you ever derive any value out of the system. So it's an evolutionary approach that we like to take.

And so when we put TDEs, Trusted Data Ecosystems, out with our clients, there's a business side of things that we provide and a technical side, and we've discussed both. You get everything from the UI, UX, and workflow designs, architecture, governance, and marketing, to all of the agents and applications and the network to power it that Ken discussed. So this is the foundation of a Trusted Data Ecosystem; we feel that it can't exist and provide value without all of these components in the same package.

One of the things that trips some people up is that there's a difference between governance of a network (the rules and operations for working with a network) and the rules and operations for the actual issuers and verifiers in a system. We advocate using machine-readable governance for the day-to-day business operations, because it can be uploaded and dynamically pulled into the agents and operated on. That way you can handle things like bad actors: if an issuer is determined to have gone sour or gotten hacked, you can remove them from the governance, or replace their trusted issuer ID with a new one and deactivate basically all of the old credentials, and give them a new chance to start over in the ecosystem, or remove them from the ecosystem if it was malicious action. I think that's one of the interesting things: the governance bits of the day to day, who's trusted, who's not trusted, what are the rules for presentations, are covered in machine-readable governance. The operation of a network has its own governance, and those are typically external rules. There's also usually a written governance for operating a mini ecosystem, describing behavior, trusted behavior and so forth, that then gets translated into that machine-readable form.
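To illustrate the idea (this is a hypothetical file shape, not Indicio's published governance schema), here is what a machine-readable governance document and the check an agent might run against it could look like; removing a bad actor then amounts to publishing a new version of the file.

```python
import json
import urllib.request

GOVERNANCE_URL = "https://example.org/ecosystem-governance.json"  # assumed

# Hypothetical structure:
# {
#   "version": "2024-01-15",
#   "trusted_issuers": {
#     "driver_license":   ["did:indy:example:dmv"],
#     "trusted_traveler": ["did:indy:example:border-agency"]
#   }
# }

def load_governance(url: str = GOVERNANCE_URL) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def issuer_is_trusted(governance: dict, cred_type: str, issuer_did: str) -> bool:
    # Removing a bad actor is just publishing a new version of the file;
    # agents that refresh it stop trusting that DID immediately.
    return issuer_did in governance.get("trusted_issuers", {}).get(cred_type, [])
```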
And, well, that's a small sampling of the different verticals and sectors that we're doing work with. At that point, I think we're nearing the end of our time, so maybe it's Q&A or quiz time. Yeah, definitely quiz time, Scott.

So let's take a look here; we've got some great questions that have come through the chat. I'll go ahead and address those with you first, and then I have a few questions of my own from your wonderful presentation that I'll go through as well. So Michael, I know you had about two or three questions, and I think Ken and Scott did a great job of answering some of those. But maybe if you just wanna come off mute and talk a little bit about your couple of questions there, that would be great.

Yeah, thank you. And I will be respectful of people's time as well. So one of the questions I had, which you partially answered, is: assume that I set up a TDE. Fantastic. For whatever reason, though, my issuer or one of the components goes bad. I go out of business, something happens to me. The credential that I've received from them would still be valid, because as we know, it's just a validation that it was good at the time; the credential itself carries no status data. But then if I come back in a day or a week and say, hi, here's my credential, I assume there's some way to track: oh yeah, on this date, John turned out to be a bad actor, we took him out, and either gave him a new position as a new issuer or he was replaced by Scott. Can you walk me through that?

So there are a couple of interesting things there. With credentials, errors occur normally, even with good issuers, and they need to revoke individual credentials. There's also the case where you need to take all of the credentials out, and you can do that with the machine-readable governance: you just delete the issuer's entry and they're no longer trusted. A more common scenario would be a hack or some kind of event where the issuer themselves can revoke a whole bunch of credentials and then come back online. Or you can say, that agent was corrupted, we don't trust anything that came from that particular agent, and you can re-instantiate them with a different DID and credential definition.

So at that point, you've determined I'm a bad actor. Does anyone have the ability to push out notifications to all those people that, hey, you went to Ken, he's now a bad actor, you need to do something about it? Or is it just something I'm gonna find out on my own the next time I try and use that credential? Well, hopefully your agent can be updated when the machine-readable governance changes, and it can notify you directly. Also, if you have a more reasonable actor who is trying to replace the credentials, they can send a message, because they do have a connection and they know your DID, since they issued to you, and they can notify you. That's a normal business process, I think. There are some weird ones that are more obscure, but the most common business use cases are all well established and can be covered.
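As a sketch of those two remediation levers in practice, here is what the revocation calls look like against ACA-Py's admin API; the registry and credential IDs are placeholders.

```python
import requests

ADMIN = "http://issuer-agent-admin:8031"  # assumed admin endpoint

# Lever 1: revoke a single bad credential and publish the registry
# delta to the ledger immediately.
requests.post(f"{ADMIN}/revocation/revoke", json={
    "rev_reg_id": "replace-with-revocation-registry-id",  # placeholder
    "cred_rev_id": "42",   # the credential's index within that registry
    "publish": True,
})

# Lever 2: after an incident, queue many revocations (publish=False on
# each), then push them all to the ledger in one batch.
requests.post(f"{ADMIN}/revocation/publish-revocations", json={})
```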
All right, thank you. Okay, the next question I'm gonna jump to, unless, Michael, do you have anything else that isn't yet covered? I do, but ask other people, and then if there's still time we'll talk. We'll circle back around to you, that's great. Hey, Dar, do you wanna ask your question about multi-ledger support?

Yes, great. Thank you so much for the information. So I'm familiar with Indicio, and I just wanted to see how we can expand that vision even further. My previous startup company is currently one of your node operators, and its wallet is compatible with your system. For my current startup, we're building something where we're expanding on the possibility of using something like a DID-friendly method, for example the Ceramic network and their XID, and other implementations where we can use multiple kinds of nodes instead of just Indy on the ledger side, and also possibly the use of tokens and fees and so on. So I'd just like to know if you have any thoughts on that.

You brought up an interesting topic that we could probably spend a half an hour on, but let's try to cover some of the basics really quickly. Right now, single-ledger support is kind of the norm: an agent can support one Indy ledger at a time. In the Cardea project, at our next interopathon, we're trying to focus on multiple Indy ledgers at the same time. That, I think, will be pushed along happily by the did:indy method replacing did:sov; that will promote it. There are also efforts to increase the ability to resolve and verify credentials that come from different ledgers and conform to the basic W3C verifiable credentials data model, even though they might be anchored on different types of ledgers, like Ethereum or Bitcoin or whatever other types of ledgers. That's a second approach: building into the Aries agents the ability to resolve those. Issuing multiple types of credentials with different types of signatures is also on the roadmap for the Aries community, so that will help at that interoperability level. There's also some effort to take the work done at Hyperledger for the traditional Indy credential types and use it to support other types of ledgers, ones that track supply chains or transactions or financial dealings, for instance, and allow the identity portion of the Indy network to provide identities for them, for strong identity inside of a different type of blockchain. So multi-blockchain support is on the roadmap; look for it at an agent near you soon. The VDR work is probably the beginnings of support for that, so I expect to see it happen. Awesome, thank you. Okay, perfect, thanks.

Andrew, I'm gonna jump to you next. If you wanna come off mute and talk about Aries Bifold, that'd be great. Yeah, sounds good. Great presentation, thanks very much for all the information. I'm interested in Aries Bifold and using that as a basis to build a mobile agent. And I know that Indicio started it off with your code base, didn't you? So I was wondering if you have any inside knowledge on the open source development: whether you think something like a backup to the cloud would be included, and a way to restore a wallet if a person lost their phone, for example.

Those are both good questions. Inside knowledge isn't really a thing with an open source project, because it's mostly controlled by community meetings. Because we do have a number of maintainers on the project, we do have some say in getting things done, mostly because we contribute the effort. But those are both interesting topics. Backup and restore of wallets is one thing; wallet migration is a second topic, where you can take a wallet from, for instance, the Bifold wallet and import it into a different vendor's wallet or something like that.
So those are two interesting topics, and I think they're both in the cards for the future. Exact dates are something that open source projects don't always nail down, but those are topics of interest and will be addressed, I'm pretty sure. Okay, great, thanks a lot.

Okay, next, I'm gonna jump to Cliff. Why don't you weigh in on the offerings and the reference use cases? If you wanna come off mute and just ask your question, that'd be great. Sure, thanks, John. Can you hear me? Yeah, we can hear you perfectly. Okay, cool. Thanks for the brilliant presentation. Just wanted to know a bit more about the service offering from Indicio, and do we have any successful reference case built somewhere which can be introduced a bit in this session? Thanks. Scott, you wanna handle that one? It works better when I'm not on mute. Yeah, absolutely.

So I have to think carefully, because we do have a number of clients developing use cases that are still top secret in their eyes. The high profile one that we did was with SITA, the global air travel consortium, and the nation of Aruba; that was the origin and source of the Cardea code. That was taking a health credential from a healthcare provider and sending it to the government of Aruba, where they did a verification, checked the health status, and made the determination that a traveler was COVID-free and healthy. And then, as I described in my portion, the government turned around and went from being a verifier to becoming an issuer, and issued the traveler a derivative credential, a trusted traveler credential, that they could share with other verifiers on the island while protecting privacy. So that's the highest profile one that we've done. And then Ken, perhaps you can talk a little bit more about what we're doing with Liquid Avatar, because I know that's been in the news lately.

So Liquid Avatar is primarily focused on the financial services market, and they provide identity credentials. Since the Cardea project was announced, they've also decided to become compatible with it. They used, I think, the enterprise agent that's part of the open source Cardea project code as a basis for their issuer, to support healthcare credentials. But they wrote their very own wallet, a cloud-hosted custodial wallet controlled by a mobile app. So it's a little bit different spin: they did not use the Bifold type of approach; they wanted something hosted in the cloud. It's still compatible with all of the Aries agents and interacts that way. Their focus is on getting credentials into the financial world: establishing strong credentials for end users, free of charge to the end user, and then strong verification, tying that in with existing payment methods. That's the direction they're headed, to try to provide value-added services to that sector of the market. And they've gone live, I think, with their agent. I hope they have, because I think the announcement has gone out, so I think that one's safe to talk about. They've talked about their ecosystem in the press, so I think it's public knowledge. We could probably talk about a few more, but I know we want to be conscious of the time.

Yeah, no problem. I think we've got a lot of great questions here. Carl had one about cellular IoT modules. Carl, if you wanna ask that, that'd be great. Yeah, so hello, everybody.
I'm too shy, I don't have my face on camera. Anyway, I came in midway, so I didn't catch all of your presentation. I'm fascinated; it's great stuff. So, you know, if we're talking about blockchain and we're talking about the integration of all of these things, then that means you need to integrate the IoT world, which is the physical world, with the blockchain, which is on the digital side, with a digital twin. And you need a digital footprint, timestamp, thumbprint, fingerprint, et cetera. This is kind of what we're doing. Your colleagues have actually reached out to me while I'm on your webinar; that was really, really fast, fantastic. I see lots of synergies, but my question is, since we're providing trusted identity, authentication, and data privacy protection as well, I'm just trying to find where the synergies are between what we do and what you do. That's why I asked this question.

So IoT devices vary quite a bit in terms of their capabilities. Some of them are pretty stupid. Other devices don't have any smarts in them at all; they're inanimate objects that have no electronics. For an inanimate object, you absolutely need a digital twin, right? Lower-capability IoT devices might need an agent that's cloud-hosted for them as well. Some of the smarter drones and things like that might have the agent built into the IoT device itself and be able to interact directly, just like a human would with a cell phone. So there's a spectrum of use cases. I think in the end the IoT devices will outnumber us, and they should be a very important part of the consideration of how devices can interact in a similar way to establish trust, and be issuers of verifiable data that might be useful in interactions between the IoT devices themselves. If you imagine a drone delivering a package, it may need to interact with the package itself, which has its digital twin. It may need to interact with airspace control. It may need to interact with buildings or the company receiving the package. So there are a lot of different interactions that can occur, some of which can be fully automated and some of which may require some level of human intervention or approval. IoT devices that are sources of data only, monitors and sensors and those sorts of things, might just need to be issuers of credentials that can be aggregated or relied on as a chain of data that can be trusted and known not to have been tampered with. So there are a number of interesting bits: establishing an identity for an IoT device, and then what do you do with the credentials that are either issued to it, so it can perform certain actions, or the credentials that it issues, which may be consumed by others. A very interesting use case.

Ken, I'm hoping that you and I can connect. One of your colleagues, a very sharp guy, James, has connected with me, but you and I are speaking the same language, because we're dealing with nine IoT module vendors right now and these exact same questions about how you get trusted identity in a decentralized fashion and add identity to various things, for example Tesla vehicles and connected cars, et cetera. So I think you and I are speaking the same language here, and I'm hoping that we can chat down the road. I'd love to send some information to you. Great presentation, you certainly know your stuff, sir.
Yeah, there are other fun topics that come up, related to people and IoT devices, about establishing discovery. How do you find people and organizations and IoT devices? How do you discover and make connections with them? What are the authority mechanisms that are in place? Some hierarchies require authority: the FAA, for instance, has authority over its airspace, and then regional controllers and so forth. Other things are more ad hoc and may be centered around a business or a vertical. So those mechanisms are also really interesting topics that, again, could take quite a bit of time to discuss thoroughly. I think we've also seen some interesting use cases where a device on the dumber end of the spectrum, as you described, can act as a verifier for a human that needs to interact with it. The human would present a credential to that device, and using the DIDComm security, the device knows that's the person who's authorized, or that has that access level granted. So, really interesting stuff with inanimate objects.

Yeah, one other thing I wanted to mention is that we see AI now going out to the edge, right into the module, because when you talk about cloud and you talk about thousands of devices, man, that gets to be expensive. So it's better to slice and dice the data and refine it at the edge before you reach the cloud. That's what we see going on with module vendors: it's pre-cloud, the AI is in the module, at the edge of the sensor collection. This is what we're seeing more and more. Well, that follows the same line of reasoning as decentralized identity, where the burden of computation is not in a centralized system; it's distributed to all the agents that are interacting. The line of thinking is similar. Thank you, sir. You're welcome.

Yeah, thanks, Carl. Great questions. We've hit the top of the hour here, but we have two more questions. So Ken and Scott, if you're open to it, we'll entertain two more questions and then we'll call it a wrap. Sure. So Sam, you've had your hand up for quite a while. If you wanna come off mute and ask your question, that'd be great. I apologize, I did not intend to have my hand up. Okay, perfect.

Okay, well, I'm gonna move on to the question that Erica had asked, and I think it's a very relevant one: if a bad actor gets a record, whether as an issuer or a verifier, is there any way to have that documented so they can be screened from coming back as either a verifier or an issuer?

So the bad actor question is typically dealt with at the governance level. That's one level: the people level, above the organization. How does the ecosystem define itself and its rules? How does it take care of introducing new trusted issuers, verifiers, or holders into the equation, how does it remove people who are behaving poorly, and what kind of remediation is involved? If, for instance, a grievance is filed, how do you determine whether or not it's legitimate? What are the rules for kicking people out and bringing them back in? The machine-readable governance helps make the updates to that type of governance. So, for instance, an issuer's removal from the system is immediately enforceable, or available for evaluation by the algorithms that determine what's a safe action and what's not.
But there are two different levels: the organizational level above, and then the actual machine-readable governance that helps inform the agents, so that decisions are made at the proper level and distributed when a decision is made, so everybody becomes informed and can take the proper actions. Kind of a roundabout answer, but you've got to deal with it on at least those two levels: how do the organizations running the whole network deal with it, and then how do they inform all the participants so that bad actors are no longer recognized as recommended participants? There's nothing in machine-readable governance that prevents an agent from interacting outside of the governance rules. The governance rules are there to help protect the interactions and flag things that might be dangerous, but they shouldn't override a person's ability to do some type of ad hoc interaction in most cases. In some cases, like national security issues, they might restrict the agents to only obey the rules that are in force at the time. But in most instances, like if you're setting up a loyalty program for your customers, you shouldn't prohibit them from interacting in ways that were unanticipated just because you didn't think of them. You should be flagging the things that might be bad behavior, or not a trusted source or a trusted verifier. I was gonna add that I think that's where the machine-readable governance provides an agility to make those ad hoc decisions, because while some of those unanticipated uses may be on the negative side of the spectrum, others may provide some unintended or unrealized value that wasn't there at the beginning of the ecosystem development. So the machine-readable governance really is key to realizing those unrealized values and also to being able to remove the bad ones. Perfect.

Scott and Ken, I really wanna thank both of you for joining the session today. Anyone that has questions, both Ken and Scott have posted their emails in the slide presentation there. And this is gonna go out on the Hyperledger YouTube channel, so check it out if you missed any portion of this presentation. I hope everyone has a wonderful day, and we look forward to the next Hyperledger meetup coming up. Thanks everyone. Thank you. I guess I'll ask my question over email. Okay, have a wonderful day everyone. Take care. Thank you. Thank you.

John Carpenter? Yes. Just so you know, there were two Sams, and the one who spoke was not the one with his hand raised. Okay. Sam, I apologize for not getting to you, but if you wanna quickly ask your question while we have Ken on, feel free to do so. Yeah, that'd be great.

So there were questions about what sort of data you could produce ZKPs over, proofs around. And I was wondering, is there any standard that you know of right now that can take some sort of coordinate reference and identify that the coordinates are within the boundary of a geographic area? Such as a whole bunch of different proofs of, perhaps, you know, GPS coordinates of a residence, and being able to prove that all of those are within the boundary of a particular municipality, and then that the municipalities are within the boundary of a particular county, and so on and so forth. That's a bit awkward using the current technology. I don't know of anybody who's pulled that off yet. It's an interesting problem.
So simple rectangular boundaries could be done with predicate proofs, but when you start doing... what's that? It'd be complex polygons, obviously. Yeah. It'd take a lot of complex polygons in order to pull it off, and you could do it, but I haven't seen anybody do anything beyond rudimentary rectangular coordinates for distance-within type computations. The computations involved in seeing whether a set of lat-long coordinates is inside of a particular location could be done, but I don't think the predicate proofs are up to that particular challenge yet, because you have to tailor it to particular geographies. As long as you know the geography of the thing you're checking against, you could make a very complex predicate proof that would satisfy it, but it would be really, really difficult to codify that for multiple geographies and make it more universally useful.

If you had a data standard being used as the reference for the computation that determines whether or not these points fall within this geographic boundary, and the code involved in that computational process was deterministic, and the code and its source could be verified, would a computational proof potentially be a solution to that problem, rather than traditional ZKPs? Yeah, the approach I've seen most people use for your particular problem is the trusted execution environment methodology. It's basically sending it out to a third party, hopefully in a trusted way, communicating that data securely, trusting that the computation done by the third party is accurate, and then getting a return result. I haven't seen anybody wedge that into a nice, neat ZKP-style predicate proof to achieve your goal. It's an interesting challenge, though.

Yeah, and would you theoretically be able to prove that particular computational code, and the comparison data referenced by that code, say it was being run in a trusted execution environment: would you prove that the source of the code is valid, that the hash of the code from its source is identical to the code that was actually executed in the TEE, and that the proof is then provably computed from that code? Yeah, I've seen people do that by basically assigning a credential to the algorithm, to the code itself, or to the reference data set. But you're still trusting the trusted execution environment to accurately produce the result, and then there's a result that you trust. It's a good line of thinking, and I think it's an appropriate way to solve the problem in the present: a credential issued by the trusted execution environment to say, I performed the computation, and here's my credential that I have a certified set of code running here and a certified data set as a reference that I'm operating against. So that's one mechanism I think you could use to accomplish that. Interesting problem. Yeah, thanks, that made me think of different ways to approach this, thanks. Great, good to meet you. Good to meet you, thanks.

Thank you, John, thanks for hosting, and glad that we could be here and participate. Yeah, you did a wonderful job, Ken, I really appreciate it, and I hope you have a great rest of your day. I'm gonna close out the video now. Okay, thank you so much, bye-bye.
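For the curious, a minimal sketch of the rectangular-boundary idea from that last exchange: encode latitude and longitude as offset, scaled integers in the credential, then request four predicate proofs that box the point in. The verifier learns the coordinates are inside the rectangle without ever seeing them. The attribute names (lat_e4, lon_e4) and the encoding are illustrative assumptions.

```python
def bounding_box_predicates(min_lat, max_lat, min_lon, max_lon, scale=10_000):
    # Offset by 180 degrees so encoded values stay non-negative, which is
    # friendlier to integer predicate proofs; scale keeps 4 decimal places.
    to_int = lambda deg: int(round((deg + 180) * scale))
    return {
        "lat_min": {"name": "lat_e4", "p_type": ">=", "p_value": to_int(min_lat)},
        "lat_max": {"name": "lat_e4", "p_type": "<=", "p_value": to_int(max_lat)},
        "lon_min": {"name": "lon_e4", "p_type": ">=", "p_value": to_int(min_lon)},
        "lon_max": {"name": "lon_e4", "p_type": "<=", "p_value": to_int(max_lon)},
    }

# A proof request asking: is the credential's location inside this box?
proof_request = {
    "name": "inside-municipality",
    "version": "1.0",
    "requested_attributes": {},
    "requested_predicates": bounding_box_predicates(40.69, 40.81, -112.05, -111.80),
}
```

Complex polygons would require many such boxes (or per-geography custom predicates), which matches the answer given above about why this doesn't generalize well yet.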