And we are now streaming on YouTube, and we'll get started in one minute. And I'll turn over sharing to... okay, and do you see it, team? You've got it. Hey, I'll kick off. My name is Trevor Butterworth. I'm going to be very quick and get things started. We're very excited to bring you this workshop today, and thank you to everyone for joining. We have a great group of instructors to talk you through the Hyperledger projects that are currently being used all across the globe to build decentralized identity solutions. The work of the open source community is super important here, because more and more enterprises and governments are requesting open source solutions for identity and verifiable data issues. Today we're being joined by some of the people who've been there from the very beginning to build these open source codebases, so you'll get their insight and experience. I encourage you to use the Discord channel to ask questions, as David has already indicated. You can also ask them in Zoom, but the Zoom chat panel isn't going to be permanent, whereas Discord is forever, so to speak. So thanks for joining today's workshop. You should have received some instructions; if you haven't got them, don't worry, you can follow along as our instructors run the demonstrations. Today's workshop will be recorded and available on the Hyperledger Foundation website in the training section, along with instructions and links to all of today's materials, so feel free to access those at your convenience. We hope this is going to be a great resource for you to get up and start building your own digital identity solutions, and with that, I'm going to hand it off to Scott Harris, who will be leading the first session of today's workshop.

Thank you, Trevor. My apologies for a little bit of screen-sharing tech issues there, so we're just going to look at it this way.
Let me make sure everybody's seeing our title slide here: Build Your Solution. We can see it? Excellent, so we'll just move through it this way. That is James Schulte, who was here last time, director of business development and somebody you can reach out to for questions, amongst others on our staff here. We have a link to the workshop materials that we will share in the chat, and at the end of the session also, so if you would like to review the materials presented, those will be available to you as well. And, Trevor, do you know if our chat is active here on Hyperledger during the workshop? Yes, it is. Excellent. Okay, so feel free to jump in there.

My role today is to talk at a very high level about what decentralized identity is, how we approach some of the concepts, and do a level set on how we think about things and some of the terms and definitions that we use and that you'll hear throughout the day. For those of you who are intimately familiar, we hope it's refreshing and perhaps a slightly different look at things. And if you're just coming into this sector and this industry, we'd like to get you started down the right path, thinking about this in a way that's useful and productive. I am VP of business operations here at Indicio. I do many, many things; one of them is education along these lines, and one is problem solving at the client level, working through UI/UX and workflows and governance challenges and things of that nature. So I talk about this quite a bit; let's jump right in.

This is a very, very old cartoon that describes some of the problems we have with interactions and relationships that are digital, that happen across distance via the internet. The anonymity of the internet can be a great thing; it can be a great way to protect privacy and do things without revealing a whole lot about yourself.
However, the downside to that, if you're looking to execute some business function, is that we often need to know who's on the other end of a transaction, both for the businesses and for the end users transacting with the business. You'd like to know for certain that when you send a payment, or you submit some PII somewhere, you're submitting it to the place that you think you're submitting it to. So anonymity is a double-edged sword, and we need a tool or a technology to solve for both things: something that can positively identify the participants in a transaction or verify some data points, but also protect privacy where applicable and where needed, to help protect everyone's individual data rights. That's what this technology is really intended, and evolved, to do.

We have identity systems to establish trust between parties. If you have a LinkedIn profile, you might have signed up for it using a Gmail address, and you signed up for that Gmail address doing absolutely nothing to prove who you were. And therefore we are in a system where, despite all the 2FAs and 3FAs and things of that nature, the original root of our entry into a system such as this is not supported by really anything whatsoever, so it's very easy to pretend to be someone you're not. And we intend to solve for that. Let me get myself down there.

So what is identity anyway? We'll try to talk about the second part of our technology: decentralized identity. Typically, when you ask someone what identity is, they will give you these sorts of answers: a person's name, or a date of birth, perhaps something about their life or their educational credentials, or something like that. So biographic and demographic data is typically how folks define it, and that is a very, very useful set of information for transacting things digitally.
But it's not the be-all and end-all of what decentralized identity is. We've started to refer to this type of identity, the biographic and demographic, as identity with a lowercase i, and certainly the bulk of the use cases that have been built out on this technology to date have dealt with these very things: biographic and demographic data points. Where do these data points come from, the lowercase-i identity? Well, they can come from anywhere. Not just your typical driver's license or passport type of things, or your employer even; they can come from interactions you have anywhere you've established some footprint or digital trail of yourself. Any data point that can be used to prove something about yourself can constitute part of your identity, as long as that data point is valuable to the party that's receiving it. If they've determined that receiving this data point will help them establish trust between you and that other entity, then it can be used to prove something about yourself. So we start to see that your identity can become, well, almost anything that you and some other party agree that it is or needs to be. And then we expand our view just a little bit further. We have identity with a capital I, which is any data point that can be valuable to someone who receives it and can be shared with consent, and that's a subset of potential identity, which is any conceivable data point. So when we widen our view far enough, we see that identity, as it's used in our decentralized identity moniker, can really be absolutely anything that you want it to be, if that data point is valuable and useful to someone who receives it.
Then it can constitute part of an identity, and the key, of course, is validating or verifying that data point for certain features in order to be able to trust it. So rather than things being rooted in no verification, no validation, and no trust, like they were with your email addresses, we can use this technology to prove the validity of these data points. When we look at identity as being everything, we can widen our view a little further, away from individuals and out to things like businesses or governments or operational entities. Devices and IoT things and physical objects have identities as well: think about a vehicle with its identification number or license number or registration and maintenance records, or a business with all of their financial reports and business records and employment records. All of those are simply subsets of this potential identity circle. Any data point that might be valuable to someone who receives it can constitute an identity, and once we use the technology we'll talk about today to package it up as a verifiable credential and send it on its way, it can smooth business processes and operations to the point where it becomes very, very valuable.

So we'll talk for a second about decentralization. In our pieces-of-pie diagram of someone's identity, I showed all of those things earlier as touching each other, and that's a bit like the world we live in today, where you might use an establishment of trust, let's say a Google account, to log into something else, or a Facebook account to log into something else. All of these things touch each other, or you may submit some information to an entity who then goes, sort of behind your back, and checks it against some other entity's database, and they're talking to each other.
Probably with your consent, maybe, maybe not, but that's a question for the legal experts. The problem with that centralization is that your data is being handled around and about you, but not always with your explicit participation. In that sense, it's difficult to track consent compliance, and it's difficult to have transparency, and in the GDPR world we live in today, even if you're in the US, there are problems with that, certainly for businesses. The other big problem for business here is that the integrations of systems required for this centralized model are very, very costly and difficult, and they sometimes compromise privacy. And of course there are security issues: when someone gets into one system, it may be linked to dozens of others, and they gain access. So centralization has a lot of issues, and that can be solved for with some appropriate technology.

What we're striving for with a decentralized model is essentially a digital model of the analog world, wherein you might be given some set of data about you. In this case we'll look at a driver's license, because it's an easy-to-conceive-of example. You're given a set of data, and you hold on to it, and when it's time to show it to someone, you take it out of your pocket, so to speak, show it, and put it back in your pocket, and it's still yours to hold and carry around, rather than things being shared and moved around for you.

So we'll take a second for a history lesson here and look at trust in the past and trust now, because we've talked about trust systems and identification systems in the analog world.
From the time of the first recorded writing to about 1964, we had physical objects, and physical objects have a means of determining whether they're trustworthy. If you think about a driver's license or a passport, a birth certificate, even a registered or notarized letter, they've got some means whereby, when someone receives them, they can look at the thing and decide whether it's trustworthy. They can look for eraser marks, they can look for whiteout, they can look for alterations and things of that nature, and they can look at the special paper or holograms to say, hey, this thing is real, and therefore I can trust it and move you forward in the process I'm trying to move you through.

Then, about 1964, the first fax is sent, and now we've got someone taking a real object, putting it on a screen, and sending it digitally across space and time. It's received on the other end in an output format that is not the real thing; it's a facsimile of the real thing. And yet the receiver treats it as if it's real, and in doing so, they assume a lot of risk, more risk than they assumed before, when they saw the real object. And if they don't want to assume that risk, they've got to do a lot of work to mitigate it: go and actually talk to the source of that original document and find out if it's real. So in the hybrid world, we've actually reduced the amount of trust in an exchange of information, or identity credentials if you will. It is more efficient, certainly, at least in some aspects of the exchange, but it definitely sacrifices privacy and control for the individual who holds those things. In the decentralized world, we are able to maximize trust, efficiency, and privacy by using the tools and technology that you'll hear about a little later on today.

So why does it work in analog life?
Well, much in the way I described: we have trusted issuers who give you a physical credential, you hold it, and the verifier can visually check whether the thing is real or whether it's been altered, and then make a decision about whether or not to trust that document. If you upload a JPEG or a PDF to apply for a mortgage or open a bank account or travel, or any of those things, the entity that receives it is just getting a JPEG that you could certainly have created in Photoshop, and now they've got to do a lot of work to decide whether it's trustworthy. So we're going to solve for that. That's what I just described here: all that work and all of those questions that arise when you upload something or email it or otherwise transmit it digitally in the methods we have in use today.

So what is trust in data? Well, trust comes in three parts. One is authenticity: if you receive some data, is it authentic? In other words, where did it come from, and does it come from the place it claims to be from? We can use the ledger to support claims of authenticity and verify that something did come from the place it claims to have come from; in other words, you didn't make it in your basement. We also have data integrity, which is about whether or not the data has been altered. If you have a driver's license, for example, the physical thing has some lamination, and if it's delaminated and has cut marks in it, that's probably evidence that it's been tampered with in one way or another. The PDF upload of your driver's license doesn't have easily identifiable integrity marks in that way, so we can provide data integrity with this technology as well: evidence that something has arrived intact, as issued. The third part is usefulness. This is more of a governance and decision-making and business-logic question; however, it's really the essential part. So, for someone who receives some credential data:
They need to have a contractual and conceptual understanding about the issuer of that credential, and understand whether or not they want to trust it. An example: if I transmit a driver's license via a credential like this, and you can confirm that it came from the licensing organization, then it can be very useful for renting a car. If I give you a driver's license that came from the local coffee shop, who could potentially issue one using this technology, you need to understand that it came from the coffee shop and not the driver's licensing entity, and therefore not hand me the keys to the rental car. So that's one of the things we really focus on here: usefulness and trust decisions. Those are governance things, and we'll talk a bit about how we execute them from the tech point of view.

The trust model consists of two parts. There's cryptographic trust; much of what you'll hear about today is cryptographic trust, how we use the ledger and the Hyperledger tools to support claims of authenticity and integrity. That's pretty straightforward: most organizations will trust it if you can adequately explain and articulate how it works; they'll say, okay, I do trust the tech. And then the philosophical trust is what I just described: I will essentially give you the keys to the car for a real driver's license, but I won't give them to you for a driver's license that came from Joe's Basement Driver's License Organization. And it's certainly possible for Joe to go and issue driver's licenses. So we need a mechanism for the rental car agency to decide whether Joe is trustworthy enough to justify giving you the keys.
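The cryptographic side of that trust model, checking authenticity and integrity, can be sketched in a few lines of Python. Real deployments use public-key signatures (for example Ed25519, provided by Hyperledger Ursa) so the verifier never holds a secret; this stdlib-only sketch uses an HMAC purely to illustrate the sign-then-verify pattern and how any tampering makes verification fail. All names and values here are illustrative.

```python
import hmac
import hashlib
import json

def sign(secret: bytes, claims: dict) -> str:
    """Issuer: produce a tamper-evident tag over the credential claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, claims: dict, tag: str) -> bool:
    """Verifier: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(secret, claims), tag)

secret = b"issuer-signing-secret"   # stands in for the issuer's private key
claims = {"name": "Alice", "licensed": True}

tag = sign(secret, claims)
assert verify(secret, claims, tag)        # authentic and intact

claims["licensed"] = False                # any alteration...
assert not verify(secret, claims, tag)    # ...is detected
```

Note that in a real verifiable-credential stack the verifier checks an asymmetric signature against the issuer's public key anchored on the ledger, so no secret is shared; the check-recompute-compare shape, though, is the same.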
We'll talk quite a bit about credentials in this format. It's a very simple way to look at it, but at Indicio we approach all of our problem solving this way: we look at issuers of credentials, holders of credentials, and verifiers of credentials, and then, of course, their relationship to the ledger and the networks that support these. There'll be a lot more on that later today. The key to this, though, is that we write things to the ledger to support data authenticity and integrity, but we only use these ledger-based items or objects to support those claims and those proofs. The data itself goes from the issuer, who normally has it, directly to the credential holder, who has a legal right to have it, and then, with consent, to the verifier. So again, an important point here: no personal data, no PII, no credential data, whether it's PII or not, ever hits the ledger itself. And of course there's the question, once you receive it, to trust or not to trust. I'm going to skip through these use case examples a little bit in the interest of time; I've talked a little about them already.

The vocabulary we'll use today is one of agents. We'll have issuer agents and verifier agents that can be stood up in a number of different ways, as cloud-hosted or even on-premises instances, often with a web UI associated with them. We'll talk about edge agents, which are mobile devices, laptops, things that come and go, things that go into airplane mode or lose battery or just get turned off. Typically we use edge agents as holders' devices, for people to hold credentials, or sometimes as mobile verifier agents. All of these talk to the ledger in their own way. And then we'll have mediator agents as well, which act basically as an escrow to hold credential exchanges and messaging between agents when those edge devices are offline.
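The issuer-to-holder-to-verifier flow just described, with nothing but public verification material on the ledger, can be simulated in a toy Python model. Everything here (the dictionary keys, the fake DIDs) is illustrative, not any real Aries API; the point is simply that the credential data travels peer to peer while the ledger holds only the issuer's public artifacts.

```python
# Toy model of the issuer -> holder -> verifier flow.
# Illustrative only; real stacks use DIDs, signatures, and agents.

ledger = {}   # public verifiable data registry: schemas and issuer keys only

# Issuer anchors public artifacts on the ledger (no PII).
ledger["license-schema:v1"] = {"attrs": ["name", "licensed"]}
ledger["did:ex:dmv#key-1"] = {"public_key": "ed25519-public-key-bytes"}

# Issuer sends the credential DATA directly to the holder's wallet.
holder_wallet = {
    "schema_id": "license-schema:v1",
    "issuer": "did:ex:dmv#key-1",
    "claims": {"name": "Alice", "licensed": True},
}

# Holder presents to the verifier with consent; the verifier consults
# the ledger only for the issuer's public key and the schema.
presentation = holder_wallet
assert presentation["issuer"] in ledger      # issuer is anchored
assert presentation["schema_id"] in ledger   # schema is anchored

# The ledger itself never saw the personal data.
assert "Alice" not in str(ledger)
```

The design point the sketch makes is the one from the talk: the ledger supports the proofs, but the personal data only ever moves between the parties themselves.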
I've used these words before, but just a quick level set: issuers are those that create and package a verifiable credential and issue it to someone, or something, that's going to hold it. And again, a reminder that the someone, even though it's pictured as a person up here, might be a business entity or a government. It could even be a custodial holder, whereby you're the business that holds a credential that actually belongs to a car your business owns, or something of that nature. And then we'll talk about verifiers: verifiers are those that receive the credential data, or the credential itself, check against the ledger for its authenticity and integrity, and then make trust decisions. We use the typical Bob and Alice names that you'll see over and over, so in most of our conventions today, Bob is an issuer and Alice is a verifier. The thing that's often overlooked is that these roles are not mutually exclusive; we need not have issuers, holders, and verifiers as three separate parties. Many of the use cases we've built at Indicio have been closed-ecosystem use cases, where an issuer is also the verifier of their own credential. The credential can be used by a holder to gain access; it can be used in identity and access management, physical plant access, proof that you're entitled to go get a pay stub or some service from your own employer. It can be a situation where, let's say, a bank issues an account-holder credential to you, and then you use it to get back into the bank to access your account. So just keep in mind that the roles, although we'll talk about them mostly as three separate things, are not always mutually exclusive. And then, of course, to bring it back to where we started, the code architecture: we have Hyperledger Indy, Aries, and Ursa at work here for you today, and it's not my role to talk about how those play out; that will be coming up shortly.
But this is why we're doing this here with Hyperledger: they're a great partner, and we have a lot of great resources to help you navigate all this. So that is my level set on where all this comes from, and I'm now going to turn it over to Sam Curren for some info on community.

Thanks, Scott, appreciate that. Let me get the share button to behave. All right. Before we get to the technical stuff, Indy and Aries and how it all ties together, I'm going to talk about community engagement, about engaging in the community. Hyperledger is an open source foundation, part of the Linux Foundation, and we are a community that gathers together and works together to make these things work, so I'll talk a little bit about that. I am Sam Curren. I'm an architect within Indicio, and I work with the Aries working group very heavily, and also the DIDComm working group. There's my email address, and I am TelegramSam on all the socials, so if you want to connect with me, please reach out; I'd love to hear from you.

Why would you engage? There's a handful of topics here, and I promise I'm not going to read all my slides. The one I really want to highlight is that when you engage with the community, the community can be more aware of the types of use cases you're trying to serve, and that can help guide and include your use case in the types of projects that will be well supported. That's not the only reason, of course, but it's a really useful one. Communities often benefit from people coming in and sharing how they're using the technology; that's a learning experience for the developers, and it can help the project improve and be substantially better. And there are other reasons, of course, in contributing and everything else.
The other thing I really want to highlight here is that we tend to think about open source in terms of developers, and that's true; developers are really useful. But there's more to be done: user experience design, documentation, project management, people to organize efforts. There are all sorts of folks needed in all of the projects we'll be talking about today. So we need you, even if you don't fit a strict developer mold; we hope that you will come and join us in our efforts.

There are three-plus projects we're going to be talking about today. Hyperledger Aries, of course, is the main focus, but it's supported by Hyperledger Indy as a verifiable data registry underneath. Hyperledger Ursa provides the cryptographic portions of what we're talking about today, and, just barely, we have the new Hyperledger AnonCreds project; its logo came in just a couple of days ago after being approved. AnonCreds is a way of expanding the verifiable credential format Indy has used so that it works not just on Indy but in other environments as well, which expands its potential for use. So these three-plus, now four, projects form a cluster of the functionality we're going to be talking about here today. All of these projects are great, focus on different aspects of what's going on, and are all part of the Hyperledger Foundation.

The Decentralized Identity Foundation is another organization with good stuff going on. They have interoperability working groups; I mentioned the DIDComm working group that I'm a part of; also identifiers and discovery, and claims and credentials, etc. They can be found at identity.foundation, and they're doing lots of good work there. Trust over IP is another organization that has been involved in a handful of different aspects.
The Trust over IP layer model is also an incredibly valuable way to think about the different types of both cryptographic and human trust involved, so they're another useful organization to be aware of. The W3C Credentials Community Group has focused on DIDs; the DID Core spec is an output of that work, which we rely on for the aspects of the puzzle it provides. And the W3C verifiable credential format is a popular format for credentials; there's an existing one, and there are also discussions about the next version going on in that group as part of their work.

So, there are all sorts of different working groups all over. I highlighted, of course, not just Hyperledger but three of these other organizations, and there are a few others, and all of them have multiple working groups. And each of those working groups communicates. I'm going to talk a little bit, generally, about how these groups tend to work, to help you get familiar with them and find it easier to get involved.

Working group communication usually has several different aspects. There's usually a chat, via Slack or Rocket.Chat or IRC, so that you can ask a question at any time. There are email lists, which are often better for longer-form discussions. And then working group calls are very popular for joining together, asking questions, and learning. That's often a really good place for learning, because it's very organic: people can ask questions and discuss things. So the calls are very valuable as you're getting involved.

Calls, being a big aspect of it, have a couple of common elements. There's usually an antitrust policy present in calls, and a code of conduct.
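For reference, a minimal credential in the W3C Verifiable Credentials data model looks roughly like the following. The credential type, DIDs, and values are illustrative placeholders, and the `proofValue` is elided; the real structure is defined in the VC Data Model specification.

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "DriverLicenseCredential"],
  "issuer": "did:example:dmv",
  "issuanceDate": "2022-11-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:alice",
    "licensed": true
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "verificationMethod": "did:example:dmv#key-1",
    "proofValue": "..."
  }
}
```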
The reason these exist is to help us collaborate with each other without causing any legal issues, and of course to be excellent to each other and make sure that all are welcome and represented. These policies typically contain information about how to resolve issues that may come up, and in many calls they're highlighted, or linked to in the agendas, to make them very easy to find. Luckily, these types of issues are really rare in the community, so it's a very healthy place to be, but the policies exist on purpose and are available for use whenever necessary.

Meeting agendas are a powerful thing that help drive calls, and they also help you know what's going on if you happen to miss one. Topics are almost always welcome in any of these calls: if there's something you would like to bring up, to ask, to share, or to present, or even if you don't have the answers but you want to bring up a scenario and learn more about it, those are almost always welcome. Just contact the chairs or the coordinators of the meetings, and they can help get you on the agenda, maybe not the current one, but one coming up soon.

There are various forms of interaction in meetings. Sometimes there's a formal queue, where you use a command in chat like q+ to add yourself to the queue. Sometimes they'll use the Zoom raise-hand feature to manage that, and sometimes you just speak up. The larger the meeting, the more formal the interaction tends to be; it's different if there are 11 of you there than if there are 150. So that will vary depending on the size of the meeting and the culture of the group organizing it. They'll almost always mention the preferred form of interaction, and you can always ask in the chat and someone will let you know.
The chat itself is almost always active in meetings, and it's a great way to ask a question or make a comment on something that's going on. That can be really useful if you're unsure of something and don't really want to verbally speak up; it's a great way to ask, and you'll find that people are pretty friendly about answering.

Calendars will help you know when the meetings happen. They're generally run by the organization at the top, so Hyperledger has a calendar that highlights all of its different meetings; the Decentralized Identity Foundation runs one separately; etc. Links to the calendars can often be found in the repository or the agenda, and those are often on a wiki. So once you find one of these things, it's relatively easy to look around a little bit and find all the information you'll need.

One thing I'll mention on calendars, to be aware of: we're in daylight saving time shift season around the world. Most meetings are anchored in one time zone, and they will follow the daylight saving shifting of the time zone they're anchored in. Let's say that's US Pacific: the meeting's shift will happen when Pacific time shifts. So if you're somewhere else around the world, which either does daylight saving time or not, be aware of that. Generally, if you use a calendaring system, Google Calendar handles this relatively well: if you copy a calendar event from a source calendar onto your own, it usually brings the home time zone along with it, and it will automatically update as the daylight saving changes happen around the world. About twice a year we have a little bit of a kerfuffle there as people get used to that. So yeah, lots of stuff going on like that. Excellent.

So, meeting recordings.
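That "anchored in one time zone" behavior can be checked with Python's zoneinfo module (stdlib since 3.9). A meeting pinned to 9:00 in America/Los_Angeles drifts by an hour relative to UTC, and so relative to anyone on a different daylight saving schedule, when the US shift happens:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")

# The same 9:00 local meeting, in winter (PST) and summer (PDT).
winter = datetime(2023, 1, 16, 9, 0, tzinfo=pacific)
summer = datetime(2023, 7, 17, 9, 0, tzinfo=pacific)

print(winter.utcoffset())   # -1 day, 16:00:00  (UTC-8, PST)
print(summer.utcoffset())   # -1 day, 17:00:00  (UTC-7, PDT)

# The meeting moves by an hour in UTC terms, so attendees whose
# own DST change happens on a different date (or not at all)
# see it jump on their local clock.
assert summer.utcoffset() - winter.utcoffset() == timedelta(hours=1)
```

Copying the event with its home time zone intact, as the talk suggests, lets the calendaring software apply exactly this conversion for you.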
I can never make all the meetings that I want to. Meetings are often recorded, and it's worth being aware that by being in the meeting and contributing, you consent to being recorded and to having the things you type in chat represented. So be aware of that. Recordings are a great way to catch up, and they're really useful; most video players will let you play the video or audio back faster than real time, which can be really helpful when you're trying to keep up with meetings you doubled up on, or if you were on vacation and want to catch up later. Recordings are usually posted on the agenda, sometimes elsewhere; DIF, for example, has a channel that all the recordings get posted into. Please ask, and people will tell you where the recordings are. Occasionally I forget to post recordings; if you ask about a missing one, they can usually get it uploaded for you, and that way you can follow up and stay aware of what's going on.

Most of these working groups use GitHub as a way to organize things, and there are a few common elements. GitHub has a lot in it, but there are some common elements I want to highlight that will help you know how to use it. The README file that displays when you first look at a repository often contains the meeting info, and also links to, or contains, rendered copies of the content. The source files are often in Markdown, but they can be rendered and organized somewhere else, so if you read the README first, it will often point you to a rendered copy of the files in the repository. That's particularly true for spec work: it will often point to the rendered version of the spec, which is usually easier to read than navigating the files in the GitHub repository itself.
In the Issues tab you'll find open issues, and these are a really useful way to orient yourself with the community and get involved as a contributor. You can read open issues, look at recently closed issues, and comment on issues that are important to you. If there's something not being discussed that ought to be, opening a new issue is a good way to ask a question and get people to respond. Issues are a great way to organize conversation around a particular topic, figure something out, or work through confusion. Pull requests are active updates to the repository. When folks submit pull requests, they're usually open for commentary, so people can read and review them and point out additional things that could be changed, typos, or other errors before the pull request is merged. Being aware of the pull requests in a community helps you understand the movement that's going on and the work being done; I'll talk in a minute about how to submit your own changes or suggested changes. The commit history inside GitHub is also incredibly useful: you can see what has changed recently, who submitted each pull request, when it happened, and what the changes were. Becoming familiar with the recent history helps you understand where the community is going, who's contributing, and the types of changes being contributed. That's helpful for understanding not only the state of the community but also where help might be needed and ways you might be able to contribute. If you want to contribute code, there's a pattern here that will help you.
The first thing I need to mention is to mind the code license. It's often Apache 2.0 or similar, and you can usually find it in the README or in a LICENSE file in the repository itself. You want to make sure you understand the license, and particularly that the organization you might be working for is aware of the license on the repository you're contributing to. That's not typically a problem, but you want it to be clear before you commit resources your company has paid for to the open source project you're looking at. When you do want to make a contribution (and this applies both to code and to repositories of Markdown files or documentation), you want to work on a repository fork. Git has the ability to create forks for temporary manipulation and comparison. So fork the repository into your own GitHub account, and then create a branch on your fork for your contribution. Be aware of the Developer Certificate of Origin requirements on commits; that's usually a setting you can change in the software you use, or something you can add manually, and if you search for "Developer Certificate of Origin" you'll find more information. Once you've made the change, push your branch from your local system to your GitHub fork, which makes it easy to create a PR to the main branch of the original repo from the branch on your fork. This also makes it easy to update your pull request: if there are suggested changes, you can just change and commit on your branch, push it to your fork, and the pull request updates for comparison and acceptance.
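The branch-and-sign-off part of that flow can be sketched locally. This is a minimal sketch, not the exact workflow of any particular repository: the repo name, branch name, file, and identity below are all made up, and the fork itself happens in the GitHub UI before you clone.

```shell
# A made-up local repo standing in for your clone of a fork.
git init demo-fork && cd demo-fork
git config user.email "you@example.com"
git config user.name "Your Name"

# Create a branch for the contribution.
git checkout -b fix-typo-in-readme

# Make a small change and commit with -s, which appends the
# Developer Certificate of Origin "Signed-off-by" trailer.
echo "Hyperledger Aries docs" > README.md
git add README.md
git commit -s -m "Fix typo in README"

# The commit message now carries the DCO sign-off trailer.
git log -1 --format=%B
```

After pushing the branch to your fork (for example, `git push origin fix-typo-in-readme`), GitHub will offer to open a pull request against the upstream repository's main branch.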
Pushing back to the open source code base is the foundational way to support the community. I want to highlight that your changes don't always need to be substantial to be useful. Some of the most useful pull requests I have reviewed and received were fixes for a confusing phrase, a typo, or a grammatical error that might trip someone up, particularly if they're not a native English speaker and the text happens to be in English. Fixing those things can really help people understand what the code or the documentation is saying in a clear way. So don't feel like a contribution has to be substantial; in fact, for your first one, it's useful to find something small that needs fixing and go through this process so you're familiar with how it works. Communities love pull requests; they're the lifeblood of how things improve. I want to finish up with guidelines for engaging in the community, and when I say the community, I mean any of these working groups in any of these organizations, and any of the forms we've talked about, whether it's commenting on an issue or participating in a meeting. Those are all types of engagement within the community. Please speak up. You are welcome, and you're needed in the work we're doing: for us to build this software effectively, we have to understand the types of problems it needs to solve. Please speak up particularly if you don't feel like you fit in; the diversity of our contributors contributes to the success of the project and to solving problems in a universal way that fits all of our needs. Please be persistent with your contributions. Most open source maintainers have jobs; sometimes they do this on the weekend, sometimes as part of their job, but everyone is busy.
If you submit something like an issue or a pull request and it's not being acknowledged, please make a comment or raise it again, and be patient with the busyness of others. We all have stresses in life; sometimes we're busy, sometimes we've had a bad day, and we may not all respond well, so please be patient with each other and assume that people mean the best. Having said that, do not put up with bad behavior. If someone is acting poorly or engaging in behavior that is simply not acceptable, then engage the code of conduct and seek help from leaders and other members of the community to push back against that behavior and not allow it within our community. It's rare, but it's something we must not allow to persist. So, hopefully this helps you gain some confidence and a little understanding of how to engage in the community and why that matters. We're grateful that you're here at this training and that this community is one you have interest in, and we hope you feel welcome here. Again, I'm Sam; please reach out if you have any questions or anything we can help you with. We would love to help you get engaged. For the rest of the training I will be monitoring the Aries channel on the Hyperledger Discord, so please reach out there if you have a question. Questions asked there are great because they outlast this training, and you can pull the channel back up later and see what the answers were. Of course, you're welcome to ask questions here as well, particularly anything about what's immediately on the slides or specific to the presentation.
Any general questions about Aries or any of the technologies we're talking about might be better in the Discord chat channels. And we're running just a little bit ahead of time. Lynn, are you ready to pick up with the next section?

Yep, I'm ready. All right, we'll pass it over to Lynn Bendixsen.

Well, thanks Sam. Sam was right, we are running a little bit ahead. Just a quick reminder: we have plenty of time for questions here, we're expecting this to be interactive, so post your questions in the chat and we can answer them as we go along. If someone sees a question in the chat that I can answer during my part, feel free to speak up. I'm going to be getting a little more technical now. We've been doing some high-level overview material, with Scott talking about identity and Sam talking about the community; I'm going to talk a little bit about the tools. It's still baseline material, not really in depth yet; after I talk there will be a 15-minute break, and then we'll get more in depth. There's an opportunity to do some hands-on work during my part, though I won't be walking through it specifically: if you want to bring up the indy-cli, you'll be able to type in some of the commands as I show and talk about them, but we won't take a lot of time out for that. If you want more hands-on indy-cli work, since we're ahead, we might have time at the end of the formal presentation to go over some of that for those with more in-depth questions. I am Lynn Bendixsen, the director of network operations at Indicio. I've been working with Hyperledger Indy networks for about four and a half years now, so I'm one of the experts in the community on that.
You can see my email address here; feel free to send me an email with any questions, and on Discord, GitHub, and a few other community-type places you can reach me at the handle shown. Sam mentioned Trust over IP as one of the groups you can get involved with. I was asked to be a co-chair of the Utility Foundry Working Group; that working group tracks public utilities, meaning Hyperledger Indy networks, and we write papers to help people build them, so that's where I contribute. We're going to start the tools part today with the IndyScan tool. Again, we'll go over this quickly; if you have more in-depth questions about it, feel free to ask. This is just to let you know there's a tool out there available for you to use. The tools I'll be sharing today are the Indicio-specific implementations; other networks have also implemented the same tools, and you can refer to the versions for the network you're most interested in. In IndyScan you can select your network across the top, whichever Indicio network you want to look at, and below that it shows the main sub-ledgers, as I like to call them. The domain transactions are the NYMs, the schemas, and the credential definitions; domain transactions add and update those on the ledger. The config transactions are the auth rules.
Most of what you'll be looking at as a developer (and to that extent this is a developer tool) is the domain transactions, trying to see what it looked like when you wrote your transaction to the ledger. To get more detail, select Domain at the top, and then you can search for the kind of transaction you just wrote. I just clicked on Domain up here to highlight it, so now it shows all domain transactions. You could put your DID into the search box and search for just your DID and whatever NYMs your DID has written to the ledger, and so on. So that's pretty much the IndyScan tool, and a little introduction to what you can do as a developer to see how things are going with the code you're writing against a ledger. I forgot to mention that IndyScan was written by Patrik Stas, who donated it to the community several years ago; I took his code and made a few config changes to run it for our networks. Before we get into the next tool, let me give you a little background on the roles that exist on an Indy-type network. There used to be a hierarchy of roles where the trustee was at the top and could do everything, and it went down from there, but that's not the case anymore. Each role has its own separate things that it does, and there's no longer one big role that can do everything. We'll just talk about some of the roles here; the ones we care most about for this training are the trustee and the endorser. The trustee is the one that writes admin transactions to the ledger, and it also has the privilege to write endorser DIDs to the ledger, so when an endorser joins a ledger, it has to have the approval of a trustee. The endorser, in turn, can write credential transactions to the ledger.
They write schemas and credential definitions. They can also write author NYMs to the ledger, and sign author transactions, so that authors can send their transactions to the ledger. The authors create transactions, as it says here; they issue credentials based on the credential definitions they created, but they have to have an endorser sign their transactions before they can be sent to the ledger. And we mention here too that there's a Transaction Author Agreement that must be agreed to before any of these roles write to a ledger.

We had a good question in chat that I just wanted to raise: what is an example of an administrative transaction that a trustee might engage in?

Good question. One example is adding a node to a ledger: ledgers start off with a certain number of nodes, and as node operators join the group, nodes need to be added and removed, and that's one thing a trustee can do. Trustees can also add other trustees to the ledger. That's what we mean by an administrative transaction: anything to do with the running of the network, so to speak. They can also add auth rules, and so on. Does that answer the question?

I think that sounds great, thanks Lynn.

Awesome. Okay, let me move this chat somewhere where I can keep seeing it. There we go. The next tool I want to introduce is one that I think Daniel and Char will talk more about later, in the actual place where you would use it during the hands-on portion of our Aries material today. The self-serve tool is for demonstration or test networks, to help add an endorser (it says a privileged DID here, but essentially an endorser) to the test network. I took a tool someone else had written and made a few changes to it, and it's kind of a community tool now where others have made some great changes as well.
This helps you test things out before you get to production. The production networks will generally require fees and other steps to get an endorser DID on them, but for the test networks it's all free, at least for a certain period of time. This is the tool: essentially all you have to do is select the network and put in your DID and verkey. You can create that DID and verkey in a number of different ways, and we're not going to go into those details here; there's a link to some instructions at the bottom to help you figure that out. Just put your DID and verkey in here and click submit. There's a wallet on the back end that has the permissions to write your DID to the network as an endorser NYM. So that's what the self-serve tool is: it gets you set up so you can do things on the network. Another tool is the indy-cli. This is a command-line tool for managing local wallets, DIDs, and other things. There are other tools that do this; this is a baseline tool that can be used for some of those tasks and for managing networks. It's kind of an administrative tool. I use it most when I'm watching my networks or creating new DIDs, though creating a new DID can also happen in the Aries Toolbox, so as a network admin I think about it a little differently. Here's a little intro to the indy-cli; during the rest of the presentation you probably won't be using it (we'll be using different tools), so I'm just going to show you this really quickly. This is a part where you can try these commands if you want to.
If you've gone through the workshop prerequisites that had you install these things, the indy-cli was installed, so you can try it out; we'll leave that to you to do during the break or at another time. Essentially, when you get started, it goes in an order. You need to create a pool, and essentially you're not really creating a pool (the pool represents the network); you're creating a pool handle, just for clarity. So you create a pool handle based on a genesis file, as we're showing here, and then you connect to the pool you just created. Then you create a wallet. The wallet holds all of your keys, your DIDs, and other things, and you can create multiple wallets. I do that: since I manage multiple networks, I create multiple wallets to manage the DIDs for each network, to keep them organized and separate. So you create a wallet and add a key (the key is just like a password, to keep your wallet secure), and then you open your wallet. Then you can add a DID to it. About the metadata part: if you just do a `did new`, it works and creates a new DID in your wallet, but then when you do a `did list` it just shows the DID, and you won't know what it was for. The metadata makes it so that when you do a `did list`, you can see what each of the DIDs in your wallet is for. For me, with 100 or so different DIDs that I use for different purposes, this is required; if you only have one or two, you might be able to remember them yourself, but it's useful to have the little text note that tells you what your DID is for when you create it. Note that `did new` does not put your DID on the ledger; it just adds it to the wallet, and then someone with privileges will need to add that DID to the ledger if you want to do things on the ledger. To do things with that new DID on the ledger, you do a `did use`.
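The steps just described can be sketched as an indy-cli session. This is a hedged sketch rather than something runnable here: it assumes the indy-cli from the workshop prerequisites is installed, and the pool name, wallet name, key, genesis file path, and metadata text are all made-up placeholders.

```
indy> pool create sandbox gen_txn_file=/path/to/pool_transactions_genesis
indy> pool connect sandbox
indy> wallet create demo_wallet key=demo_key
indy> wallet open demo_wallet key=demo_key
indy> did new metadata="test endorser DID for the sandbox network"
indy> did list
indy> did use <the DID printed by did new>
```

The `did use` at the end selects which of your wallet's DIDs subsequent ledger commands act as; it only becomes useful for writes once that DID has been added to the ledger by someone with the necessary privileges.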
And then you'll be able to do things on the ledger as soon as your DID has been added to it and you've done that `did use`. So that's the setup. Then you can run a few commands. I already mentioned listing things: you can do a `pool list`, `wallet list`, or `did list` to list those different things. (Sorry, I touched my mouse and it switched on me there.) Hitting Tab twice shows command hints, and `help` shows more detail: you type `ledger` and then hit Tab Tab, and it shows a bunch of things you can do starting with the ledger command. Typing `help` by itself shows the main commands, which are the ones here (pool, wallet, did, and so on), and you can get more specific help as you go down, as you see here. If you want to check out something on the ledger, you can do a `ledger get-nym` or `ledger get-schema` and those kinds of commands; but in this case, since your NYM wasn't put on the ledger yet, running that command would show that your DID is not yet on the ledger. And that is the end of my part of the presentation. Are there any questions at all about any of the tools? Let me see if there's anything more in the chat. Thanks a ton, everybody.

I came in late, I'm sorry; will all this be shared in an email or somehow?

Yes. Last time we did this, the Hyperledger folks shared everything with all of the participants. These slides will be available, and the recording of this will be available in a couple of places. We will send out an email to all the registrants with follow-up links, and this is also being streamed to YouTube, so it'll be on the Hyperledger YouTube channel. But yes, we will send out a note to everybody with links and follow-up.

Thanks, thanks.
There's also a handout, which I just put in the chat, that has links to all the relevant things for our workshop today as well. Well, thank you everybody so much for joining us today; I think we're ready for a short break. Those of us running this will likely be around during the break if you want to ask questions. Straight up at the top of the hour, I believe, we will return at 15 past the hour; we're going to have a 15-minute break here. During the break, feel free to ask any questions about anything else related to Hyperledger as well; we have people on the call who can help. We should mention what this QR code is for. Daniel stepped out just quickly before he gets started, so I don't think Daniel's going to answer. This is the handout link, isn't it? The handout, that's right, thank you. So this is the handout link that we also pasted into the chat; it's the same thing, and it will help particularly in the next section with Daniel and Char as they walk through Aries and how to make all that magic happen.

Matthias, I see your hand up.

Yeah, thank you so much, and thanks again to all the presenters, I'm learning a lot. One of the conceptual things that struck me, and it makes sense, is that issuers can also be verifiers, to give a holder access. I'm thinking of the earlier example with Gmail and passwords and whatnot. So I'm thinking of issuing and verifying a DID for a holder to give access to a system. I'm wondering whether, in Aries, a DID solution can be utilized to give temporary access, whether that be reading or writing to Fabric, for some more business logic.

There are some community meetups that have discussed the integration between Aries and Fabric.
I think it was just the previous one; I can find a link for that. You can absolutely do that. It's worth noting, just from a terminology perspective, that we don't typically issue DIDs to individuals; they create their own. That's one of the inversions in decentralized identity. But you can still verify via two different routes. One is that you can verify DID ownership, and engaging in a DIDComm protocol gives you that kind of by default, because of the authenticated encryption that's used. The other thing that can be useful is a credential: if you issue a credential of permission, if you will, then you can verify that on the other side independent of which DID the other party has. So there are a variety of approaches you can use depending on the particular case you've got, but that's absolutely a useful thing to do in order to authenticate or authorize folks participating in these various aspects of the systems. You mentioned Fabric, of course, but it could be applied in a variety of other ways as well.

Great, thank you.

Hey, Sam. I have a couple of questions. One question is this: the Aries stack was created out of Indy to make it more independent and ledger-agnostic, right? Do we have any other implementation other than Indy at this point?

Other than Indy of what, now? Yes, well, there's a variety. For example, there are DID methods that lean on Ethereum or on Bitcoin as a verifiable data registry. They don't all have the same features as Indy; Indy is relatively unique in what it provides, because it's purpose-built for identity and doesn't have some of these other features: it doesn't do smart contracts, and
there's no financial transfer aspect to it like Bitcoin has, for example. So it's different from those, but there definitely are other ledgers, other sorts of VDRs, that you can have underneath the stack. Does that answer your question?

Yeah, yeah. I'm still trying to get my feet wet with all these terminologies and concepts, but yeah.

Sure, there's a lot to learn, and we hope that what we have today can help a little bit in the wide world of all of that. We're grateful that you're here.

That's right, thanks. Good.

Hey, I have a really bad throat, sorry if I sound terrible. I'm trying to understand more about the AnonCreds separation, and whether AnonCreds can today be implemented separate from Indy, or if that's the direction we're going. What's the timeline? I'd like to understand more about what's around that separation and what it means to the broader identity space.

Yes. So AnonCreds is a credential format that was initially developed tied specifically to Indy, in the sense that you needed the assets involved to be written to an Indy ledger. The AnonCreds effort that's going on in the project is working to make that optional, in the sense that you don't strictly need Indy; you can use other things underneath it as a VDR. It's worth noting that you'll need to be prepared to provide a VDR that has a sufficient set of features to make it useful. Even though Indy is becoming optional, Indy still turns out to be an excellent choice for anyone implementing AnonCreds, and I don't want to convey the idea that you'll simply be able to walk away, use nothing underneath it, and get the same features you would if you were basing it on top of an Indy ledger.
It's a little hard to predict the timeline, which I believe was the main part of your question. The work has been underway for a little while and has just recently moved under the Hyperledger umbrella as the AnonCreds project, and those meetings are ongoing. It's a little hard to hazard a guess, but I'm assuming that at some point next year that spec will be more fixed. There are folks that have done implementations of AnonCreds that are separate from Indy; the cheqd network is one that does that, but they did so in a pre-standards way. The cheqd folks are great; they're involved in the process, and to my knowledge they anticipate making any corrections necessary to their implementation to make it abide by the future AnonCreds spec, which isn't done yet, of course. So it's not going to be weeks; it'll be more on the timeline of months before that's ready, but that is the direction it's going. I do want to highlight (and there's not enough written about this; we're working on that) that even if you don't have to use Indy, Indy turns out to be a great answer in a very large number of cases. The use of AnonCreds without Indy is generally going to make sense when folks already have a ledger that can provide the same sort of features, like immutability, high availability, and attack resistance.

Yeah. And you said there are some minimum requirements for the verifiable data registry. Would I find that information on the Hyperledger sites?

Not yet; that's one of the pieces that hasn't yet been added to the in-progress spec for AnonCreds.

Okay, but that will be coming. And one question I've always had: is AnonCreds W3C compliant?

So, the W3C verifiable credential data model spec has opinions about how the data ought to be oriented.
Part of the AnonCreds effort is standardizing a representation of AnonCreds in the W3C-compliant format. So AnonCreds isn't W3C compliant yet today, mostly for historical reasons: the current version of AnonCreds has been running since before the W3C data model was created. For the next version of AnonCreds, including that ability is definitely a focus.

Gotcha, thanks a lot.

Yes. Okay, I'm going through the questions here. The Fabric meetup link is in the chat for anybody that wants to see that. Excellent. That was a community meetup that Indicio did, where we had some folks talk about an integration they had done. It may not perfectly answer your question about Fabric and Indy integration, but I thought it was definitely worth mentioning, given the combination of the two technologies. Thanks, Lynn, for grabbing that link.

Daniel (and I apologize if I butcher anyone's name): is there a way to transfer the holder right of a verifiable credential to another holder?

So, there's a couple of things you might mean by that. One is: can I maliciously transfer it to another person? And I can't, without revealing keys that would compromise all of the credentials that I held. So there are protections in place that make it a severe disadvantage to do so. I won't get too deep into this at this point, but for really high-value credentials there are methods with liveness checks and other links to things like that, which are not typically used in the general case but can be useful for high-value cases. If there's a legitimate way, meaning you want to be able to transfer a credential from one person to another, the word typically used there is delegation.
So you would delegate a credential to another holder, and then it's up to the verifier of the credential whether they accept that delegation or not. There are a handful of complicated bits in there; there are some policy ways of doing it and some cryptographic ways of doing it, but in general the verifiable credential community has not really standardized delegation in that way. So hopefully one of those two things was related to the question you were asking.

Okay, we've got one more minute left on break. Get yourself situated and get ready for the Aries part of our presentation. One minute left.

Hello, I don't know if I'm allowed to use the mic, but I have a question, if someone can answer it.

Yeah, go ahead and ask.

Yeah, because I raised my hand and no one answered. I'm new here, and I really don't know about the tools and stuff, but is this close to HTML, which we use in the background of websites? And is there any reference I can start with, something I can study or watch on YouTube, so I can keep up with you about these tools?

There are pieces of the technology that have some similarities to HTML. One of them would be the credential formats themselves; there are some cryptographic things in there as well, but there are data formats that separate that out. There are also communication protocols, and the one we'll be using today is DIDComm, and DIDComm itself has messages that also have some similarity there. Most of these things are based on JSON, the JavaScript Object Notation, not HTML. So this is more of a way to communicate between systems, rather than presenting things directly for user viewing the way HTML typically is.
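To make that concrete, here is a hedged sketch of what a DIDComm message can look like, following the general shape of the Aries basic-message protocol; the `@id`, timestamp, and content values are invented for illustration:

```json
{
  "@type": "https://didcomm.org/basicmessage/1.0/message",
  "@id": "3f1a2b4c-8d9e-4f00-a1b2-c3d4e5f60789",
  "sent_time": "2022-11-15T18:00:00Z",
  "content": "Hello from one agent to another"
}
```

Like HTML, it is structured text, but it is packaged, encrypted, and consumed by agents rather than rendered in a browser for people to read.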
So it's a little different, but in the next section Daniel and Char will be talking about tools that exist within the Aries ecosystem to help you get started with some of these things. So hang on, and I think they'll have what you're looking for in terms of ways to get started and understand what's going on. Thanks for asking the question; sorry we didn't see your hand earlier.

Baskar, your hand is up. I'm having trouble hearing you; there's lots of background noise. I'm sorry, Baskar, I can't really make it out. Baskar, put your question in chat and we can probably answer it there. Excellent. And Daniel, it looks like you're about ready to keep going.

Indeed. Welcome back from intermission; we had a lot of great questions going on there. Please feel free to continue to put your questions in the chat, especially as we continue throughout our presentation today, and Sam will help to moderate those, raise them as needed, and redirect you to other resources if that's helpful. So we're going to come back together again. Another reminder: we have a handout accompanying our presentation. I've been posting the link religiously throughout this presentation, and I'll post it yet again just now. That handout includes useful links, and it includes the commands that I'll be using to start up the agents we'll be looking at here in a moment. It also links to the prerequisites that we sent out as part of the workshop; if you haven't had a chance to go through those prerequisites yet, try to get on top of them before we get into the hands-on portion later on. So let's go ahead and jump into it. We will be talking, of course, about Hyperledger Aries. I am Daniel Bluhm, a software engineer at Indicio.
I'm also a team lead, I guess I should mention. You can reach me at daniel@indicio.tech; my GitHub handle is dbluhm, and wherever I can get that handle I use it on my social media, though it might not be the same everywhere. But yeah, definitely reach out over email if you need to contact me, or GitHub will work as well. I am a co-chair of the DIF Interop Working Group. I've also been a long-time contributor to the Hyperledger Aries project. I've been around since even before the Aries project existed, as have several of the folks on the Indicio team. I started off in the agent working group working on the Python reference agent, which eventually influenced, or at least contributed to, the creation of the Aries Cloud Agent Python codebase that we will be using today, and of which I am an active contributor and maintainer. I really enjoy working with the Government of British Columbia to co-maintain that project with them. Yeah, I'm excited to be with you today and to talk about Hyperledger Aries, and with that I'll hand it off to Char for a brief introduction of herself. Thank you, Daniel. My name is Char Howland, and I'm a software engineer at Indicio; there's my contact information as well. I'm an active contributor in the Aries Cloud Agent Python ecosystem, developing plugins and other purpose-built agents and protocols. I also co-chair the Identity Implementers Working Group and the Indy Contributors Working Group. I'll just take a brief moment to plug those and encourage you to join; they're a great place to start getting involved in the community. We take a high-level view and track progress across the different working groups in the community and the foundations that Sam mentioned earlier, and we invite speakers each call to give a presentation on a relevant topic. And the Indy Contributors call, of course, focuses on the Hyperledger Indy project that Sam mentioned earlier. So please join us for those.
I've got my Hyperledger mug here, so I'm all set, and I'm excited to be here today and talk about Hyperledger Aries. Cool. So, we've got a stacked agenda for today. We'll first start off with a conceptual discussion of what's going on between Aries agents and the network, focusing first on identity agents: what even are they? We'll move on to the agents and the specific tools that we'll be using for the demonstration today, then get into some of the details of the protocols and the messaging that's taking place in the background. And the last portion of our meeting will be hands-on with Aries Cloud Agent Python and the Aries Toolbox, where we'll use those to create connections between agents, issue credentials, and verify those credentials. So, yet again, as another reminder, we have a handout for this; I really, highly recommend taking a look if you haven't yet. There are some useful links, as well as commands that we will be using throughout our presentation. As a reminder on questions: Sam will be helping us to moderate questions during our walkthrough of the slides today, especially as we get into the hands-on portion. So please put your questions in the chat. I'll ask you to refrain from coming off mute while we're talking, just to help me keep my flow going a little bit, but Sam is more than welcome to interrupt us as needed, especially for questions highly pertinent to the presentation. I also encourage you to ask your questions in the Aries channel on Discord; we have Colton Wilkins from the Indicio team and others who are monitoring that channel. So if you have any deeper technical questions that you'd like to ask, or are having technical difficulties, they are there to help out as they can. Last time the BC Gov team was also getting involved in answering the questions, and it was a really good synergy we had going on, so I highly recommend getting plugged in there on the Discord channels.
Apologies, in addition to the handout, I also put in a link for our prerequisites document that tells you how to get set up so that you can do the hands-on part. The Indy CLI setup in there is not required. Thank you for that reminder. Yes, let's see. So, Sam may actually redirect some of your questions to Discord. I bring that up just so you're not offended if a question isn't being answered directly or anything; I think there are just some questions that live better in Discord. First of all, because it's excellent to get input from other members of the community, but it's also a public record of the questions and their answers. If you have a question, it's likely that somebody else has a very similar or the same question, so sharing that question-and-answer process is helpful for the whole community. After all, we are a community, and we'd love to have you join us on Discord. So, with that, jumping in on our first conceptual topic here, which is agents: what even are agents? Agents are the pieces of software that we as humans use to help us interact in a decentralized identity world. I don't know about the rest of you, but while I don't do too terribly with math, the encryption and cryptography required to interact securely over the internet are just beyond my capabilities to do by hand. So we use agents and other software to help us accomplish these goals. When we talk about agents, we generally mean identity agents within the context of Hyperledger Aries; the term agent is quite overloaded in the general tech space, so we are talking about identity agents specifically. These agents are, again, the software that helps us as individuals, as people, to interact in a decentralized identity world, and an agent will generally act on behalf of a single identity owner.
And by identity owner, I mean either us as individuals, or an organization like a college, business, or bank; even though there are multiple individuals with identities behind the organization, as a unified organization they have the ability to issue and receive credentials that identify them. So an organization can be an identity owner. It's also possible for Internet of Things devices to be considered identity owners. Anything that it makes sense to assign an identity and these verifiable attributes to can be represented by an identity agent. There's an excellent question in the chat that I just noticed: is an agent a wallet? That leads nicely into the next point here: agents hold and use cryptographic keys to securely perform their responsibilities. The term wallet, as we tend to use it at least within the Hyperledger Aries space, refers to that component, the secure storage element backing your agent, that helps us keep our keys private and keep our verifiable credentials in a safe location. In that sense, yes, an agent has a wallet, and we will often, or sometimes I guess, refer to agents as wallets, especially in the mobile wallet sense, when we have an application on your mobile phone that helps you achieve all these goals. So we would narrow the scope of the definition of wallet to that secure storage element, and then the agent layers on top of that secure storage all the other interactions that make the DIDComm ecosystem a rich, secure, and mutually authenticated mechanism for communication. The summary that Sumo put in the chat I think is a good one: the agent is active and the wallet is a more passive component. I think that's a fair representation. In fact, I thought it was Sam posting that comment, that's how on point it was for me in my view, so thank you for that input.
Cool. So identity agents also interact with other agents through DIDComm protocols, and we will get more into the details of DIDComm and those protocols as we go through our presentation today. When we talk about identity agents, we tend to have these terms that we use to categorize them. A lot of what we'll use today, for instance, is a cloud agent, since we're using the Aries Cloud Agent Python project. The term cloud agent generally just refers to an agent in the cloud. It does not, strictly speaking, imply trust or the lack thereof, or imply one particular transport mechanism or another, or ownership; it's merely the location of the agent. A cloud agent could be owned by an individual, in which case it would be owned and pretty well trusted by the prototypical Alice. It could also be in the ownership of another party, where we rely on that cloud agent for various services but don't trust it quite as much as we would if we ourselves owned the agent. The term cloud agent itself just denotes that it's something running in the quote-unquote cloud. In contrast to the cloud agent, we have the edge agent, which again is merely a denotation of the location of the agent, in this case at the edge, whatever that might mean to you: it could be a desktop computer, a laptop, a mobile phone, your tablet. These are just devices at the edge. Some other examples of agent categories that you might hear: mobile agent, workstation, server, embedded, browser, blockchain (as might be embodied by a smart contract), IoT, or mesh. And in theory there could also be paper wallets, or excuse me, paper agents, but those would actually need to be accompanied by associated infrastructure to help us actually do communication. So there are a lot of different ways to categorize and group these agents together, and not all of these categories are mutually exclusive either. So keep that in mind as we continue to discuss agents today.
There are a couple of questions in chat you might want to address; I've been hitting them relatively quickly. The one that you might address now, Daniel: is there any definition, specific approach, or good practice on how access to an agent should be performed? That is an excellent question. The most general advice I can give on the spot would be to keep access as close as possible to the individual, the identity owner, directly owning the agent and the hardware it's running on. It's possible to have custodial agents, which are agents hosted on behalf of an identity owner by another party, because there are clever ways to make it so only the user at the edge has access to the data stored by that wallet. But it's a difficult balance to strike, I would say, and I'm not sure that we've seen a good example of that quite yet within the ecosystem. We have, of course, multi-tenancy within the context of ACA-Py, and I can talk a little bit more about that later. So I'll start off with: keep it as close as possible to the specific identity owner. Thanks. Yep. Cool. So in addition to these categories of agents that we've described, we also have varying levels of complexity associated with these agents; not all agents are intended to handle exactly the same functions. On the least complex side of the spectrum we have the static agent. These are agents that have preconfigured, or in other words static, key configurations and endpoint configurations for a single connection. Using that static configuration, they have the ability to send a message to or receive a message from exactly one other agent. They generally don't have the ability to change these configurations without the intervention of a user or administrator wherever the device is running. But these play an important role, especially in extending out to legacy systems.
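As a rough sketch of the static agent idea just described, here is a toy Python version. All the names here are hypothetical, and base64 encoding stands in for real DIDComm encryption; a real static agent would use a DIDComm library to pack messages for its one known peer.

```python
import base64
import json

# Hypothetical static configuration: one fixed key pair and exactly one
# known peer; nothing here can change without operator intervention.
STATIC_CONFIG = {
    "my_key": "did:key:example-static-agent",
    "their_key": "did:key:example-peer",
    "their_endpoint": "https://peer.example.com/didcomm",
}

def pack(message: dict) -> bytes:
    # Stand-in for DIDComm encryption to STATIC_CONFIG["their_key"].
    return base64.b64encode(json.dumps(message).encode())

def send_status_update(stats: dict) -> bytes:
    """What a cron-style static agent might do: wrap some data in a
    DIDComm-shaped message for its single connection and 'send' it."""
    message = {
        "@type": "https://example.com/protocols/status/1.0/update",
        "stats": stats,
    }
    packed = pack(message)
    # A real agent would POST `packed` to STATIC_CONFIG["their_endpoint"];
    # here we just return the packed bytes.
    return packed
```

The point is how little state the agent carries: one configuration, one peer, one job.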
I might have, for instance, a static agent that gets triggered by a webhook, whether that's from GitHub, Shopify, or wherever webhooks are coming into your system. A static agent could be used to take the data received by the webhook, bootstrap it into a DIDComm context, and deliver it to another agent. Another example of a static agent might be a cron job running on a server somewhere. If you just have a regular update on statistics, say the number of verified credentials or something, and you want that emitted to another agent, a static agent could serve that purpose. Layering on complexity from there, we have what you might term a thin agent. This is an agent that looks very similar to a static agent, except that its single connection is updatable: you might be able to rotate keys or update endpoints via a DIDComm protocol that updates those values within the agent. However, similar to the static agents, these are generally again very narrowly focused on a specific use case within the context of a single relationship, a single connection. Let's see if I can come up with a good example of that on the spot. I think these fill a very similar niche to the static agents, just for when you need a little bit of dynamic updating of these values. So where you could use a static agent, you could often be just as comfortable using a thin agent. The next layer up from there would be a thick agent, and that's in a non-ironic sense, I suppose. This is multiple updatable connections held by a single agent; however, the use case for that agent remains narrow. The common example that you might see in the community of a thick agent is a mediator, which is an agent that assists agents at the edge, who have inconsistent connectivity to the internet, in receiving messages in a consistent manner. So mediators have many connections.
Generally speaking, you could have a mediator that is only willing to serve a single connection, but typically they have many connections, and they're willing to mediate messages on behalf of all of those connections. You could conceive that a mediator would want to be able to issue and verify credentials; however, that's not usually something a mediator agent takes on, so it tends to be a little more narrow in that sense. Let's see, layering on from there, we have what you might call a rich agent. And this actually ends up being the best descriptor, excuse me, for most of the agents that we see in the Aries ecosystem: Aries Cloud Agent Python, Aries Framework JavaScript, Aries Framework Go, all of these are intended to be rich agents. They have many connections and they are designed and intended to fulfill many roles on behalf of a single identity owner. In this case, the use cases intended to be met by a rich agent are numerous, and we'll discuss a little more how we use these rich agents as we go on. So, identity agents: while we have these categories and these terms to describe their complexity, there is not really a hard and fast definition of an agent or of a category of agent. These are just words that we use to help us describe the interoperability goals, especially, of these agents. So we can say that if something is characterized as a cloud agent, we don't necessarily expect it to be able to run in a mobile context; or if it's a cloud agent, we generally expect that we'll be able to directly post messages to it without the assistance of a third party to mediate those messages. Keeping that in mind helps us better orient ourselves within the ecosystem of agents that is continuing to grow as the Hyperledger Aries and DIDComm ecosystems mature and develop. All right. That leads us into the specific agents that we will be using today.
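To make the mediator's narrow job concrete, here is a toy store-and-forward sketch. This is illustrative only: real mediation uses the DIDComm coordinate-mediation and pickup protocols over encrypted forward messages, where here we just queue plain dictionaries per recipient.

```python
from collections import defaultdict

class Mediator:
    """Toy store-and-forward mediator: queues messages per recipient key
    and hands them over when the edge agent comes online to pick up."""

    def __init__(self):
        self.inbox = defaultdict(list)

    def forward(self, recipient_key: str, message: dict) -> None:
        # In DIDComm this would arrive wrapped in an encrypted forward
        # message; we keep only the essence: queue it for the recipient.
        self.inbox[recipient_key].append(message)

    def pickup(self, recipient_key: str) -> list:
        # The edge agent polls (or opens a websocket) and drains its queue.
        queued, self.inbox[recipient_key] = self.inbox[recipient_key], []
        return queued
```

Note that the mediator never needs to issue or verify anything; it only routes, which is why it fits the thick-but-narrow category.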
So we will be using the Aries Cloud Agent Python project, which, and I pulled this directly from the GitHub description for the project, is a foundation for building decentralized identity applications and services running in a non-mobile environment. So again, coming back to that cloud agent category. This is the project that I end up contributing to most frequently in terms of standalone agent projects, and it will form the primary agents that we'll be interacting with throughout our hands-on portion. We also have the Aries Toolbox, which is, again directly from its GitHub description, a tool for interacting with Aries agents. These are just open source building blocks for developing your own custom software to issue, hold, and verify credentials. Let's get into a little more detail on each of these projects. Aries Cloud Agent Python, or ACA-Py, is a server agent. It is intended for use on a server, and therefore does not really have all the infrastructure or architecture that fits running on a mobile device or in a browser. You might be able to coerce it into those environments; there are a lot of creative and inventive people out there who I'm sure could make it happen, but it's just generally not intended for that use case, and so while you might find support for some of these more niche environments within the ACA-Py codebase, another project is likely more suitable for your needs in that case. ACA-Py, being a server agent, is designed for horizontal scalability. So being able to spin it up in a Kubernetes cluster and dynamically expand and shrink as demand on the agent grows and shrinks is an explicit goal of ACA-Py. ACA-Py has the ability to act as an issuer, verifier, and holder of both AnonCreds credentials, as we were discussing during the intermission especially, as well as W3C verifiable credentials in JSON-LD format.
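For reference, the W3C verifiable credential format mentioned here is just structured JSON. Below is a minimal sketch of the general shape per the VC Data Model; every value is a placeholder, and the proof block would in practice be produced by a real signature suite such as Ed25519 rather than written by hand.

```python
import json

# Minimal shape of a W3C verifiable credential (VC Data Model style).
# All values are placeholders for illustration only.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2022-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",
        "degree": "Bachelor of Science",
    },
    "proof": {
        # In reality this is generated by the issuer's signature suite.
        "type": "Ed25519Signature2018",
        "verificationMethod": "did:example:university#key-1",
        "jws": "placeholder-signature-value",
    },
}

print(json.dumps(credential, indent=2))
```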
Right now our support for the JSON-LD format is limited to credentials issued or received using Ed25519 and BBS+ signatures, but support and momentum on that side of the ecosystem is continuing to grow. Our presentation today will be focusing on AnonCreds. ACA-Py is also capable of using the revocation that comes bundled with a lot of the AnonCreds functionality. We won't be getting into the specifics of revocation today; that could be its own whole separate workshop, truthfully speaking. But yes, it does have support for that, and the support is quite mature; if you're looking into revocation support for AnonCreds credentials, ACA-Py is an excellent option. ACA-Py also supports multi-tenancy, which, coming back a little to that question raised earlier, is the ability for the same agent to host multiple wallets. This is useful, you know, just for the sake of efficiency: we don't necessarily want to spin up an individual server for every organization that needs to be represented in an identity ecosystem. Using multi-tenancy we can double up resources a little while still operating in a secure and privacy-preserving way. That is a use case that is good in some circumstances and not ideal in others, so again there's a balance to strike there. The Aries Toolbox, on the other hand, is a tool for interacting with Aries agents. Just about any agent that is able to speak DIDComm, you can connect the toolbox to it and start sending messages back and forth. The Aries Toolbox is a development and testing tool; it is not generally intended as an end-user product. We think it's great. We use it quite frequently at Indicio, especially for the purposes of training, debugging, and testing, but it is not geared towards an end user, which means that we have a bit of a guts-exposed mentality in the interface: we want to give developers as much information as they need, as conveniently as possible.
And so a lot of the debug values, or the raw values of credentials for instance, are visible to the user of the toolbox, which again is generally not something that I would encourage at the end-user level. Once again, there are other projects and tools out there that are far more appropriate to that use case, and we'll discuss a couple of those briefly as we go along. The Aries Toolbox is built on Electron with Vue.js and Element UI, so it should be a pretty comfortable environment for you if you are a JavaScript developer. We again try to keep this a relatively hackable project and welcome any contributions that you may have. The toolbox itself is actually an agent, which is something that I hope is not too hard to wrap our heads around; I'll come back to that point and describe it in more detail a few times throughout the presentation to help clarify the distinction. But the toolbox is, again, itself an agent, and it matches the thick category of agent in the sense that it has multiple connections but still a pretty directed use case: it's intended to act as a UI for an agent that's running somewhere else, generally headless. The toolbox has protocol implementations, just like any other agent, and these are encapsulated in their own components. They're generally quite modular, which enables us to dynamically determine which UI elements to show on the screen depending on whether the connected agent supports a given feature. And the protocols supported by the toolbox are split into a couple of categories: there's a set of direct protocols, which are communications occurring directly between the toolbox and the connected agent, and there are indirect protocols, which are in a sense remote controlling, or puppet-stringing, the connected agent to trigger interactions between that agent and another agent. And again, if there's any confusion here, I will hopefully continue to elucidate this as we go along. All right.
So, on the agent side. A quick comment: I'm seeing questions in the chat. If your questions are of a more technical nature, I highly recommend taking those to Discord, where Colton can assist as much as he is able. We will get to a point where we will be doing this directly in the demonstration as well, so if you want to follow along as I pull up agents and such, that could be helpful for your technical questions. That will happen in just a moment. Okay, so I wanted to briefly cover a high-level description of the architecture of an agent. I purposely laid things out here in such a way that it looks familiar to those of you who know web server development, and I'll draw some analogies here. First and foremost, we have the transport of an agent. This is how the agent actually receives and sends messages with other agents. Since that is, by definition, a significant portion of the responsibilities of an agent, which transports you support is an important consideration when you're building out your identity solution. Just like a web server receives requests over HTTP, an agent also receives messages over a transport; it could be HTTP, it could be another. After being received on the transport, we conceptually have this middleware, which reads the bytes off the transport, adds context, and then sends the message off to a handler for processing. In a very similar vein, agents will receive bytes off the transport; those bytes are encrypted, so we first need to pass them into our middleware, which handles decrypting the message. Then the message might be part of an existing exchange of messages, in which case the context of the message needs to be populated with the state of the protocol. And then the message is handed off to a handler in order for the next step in the protocol exchange to be engaged. So we have a flow through our software that looks something like this.
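That transport, middleware, handler flow can be sketched in a few lines of Python. In this sketch, base64 decoding stands in for DIDComm decryption, and dispatch is keyed on the message's `@type` the way a web server dispatches on routes; the trust ping message type shown is a real DIDComm protocol identifier, but the surrounding code is a simplification.

```python
import base64
import json

HANDLERS = {}  # message @type -> handler, like a web server's route table

def handler(msg_type):
    """Register a handler for a message type (analogous to a route)."""
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@handler("https://didcomm.org/trust_ping/1.0/ping")
def handle_ping(msg, context):
    # Protocol step: answer a ping, threading back to the original @id.
    return {
        "@type": "https://didcomm.org/trust_ping/1.0/ping_response",
        "~thread": {"thid": msg["@id"]},
    }

def receive(transport_bytes: bytes, context: dict) -> dict:
    decrypted = base64.b64decode(transport_bytes)  # middleware: "decrypt"
    msg = json.loads(decrypted)                    # parse the message
    return HANDLERS[msg["@type"]](msg, context)    # dispatch by @type
```

A real agent adds key management, protocol state storage, and error handling around exactly this skeleton.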
Again, quite conceptually similar to a web server, if that helps you to visualize what's going on. All right. So ACA-Py, while containing those basic components that we just discussed, also has a number of other components which help us control it, extend its functionality, and interact with the ledger and other agents. We of course have that transport and our handlers for messages. But in addition, ACA-Py exposes an admin API interface, which enables a controller component to exist in the ecosystem, and the controller is responsible for all the application business logic of the agent. The DIDComm protocol exchanges themselves are a very discrete and stateful interaction taking place between agents; we don't want to mix business logic into that process. However, it is of course necessary to have some control of the business logic at some layer of the stack: we need something that says what values go in the credentials that get issued to these other agents, and that's where the controller comes into play. It can also help you control under what conditions credentials are issued, help you navigate governance frameworks, and so on. The admin API provided by Aries Cloud Agent Python is an HTTP, generally JSON-body-based, RESTful interface, so you can write your controller in whatever software stack you choose, whatever fits your needs best. It's also important to acknowledge that this is not a one-directional communication between the controller and the admin API: the agent will also send webhooks back to the controller to inform it of asynchronous events taking place on the agent. This is how we trigger the next step in an interaction or, based on the received inputs, have our business logic engage in the next business flow necessary for a given interaction. I see a question in chat: only with the controller can we name it an agent? That's a fair question.
I would consider... that's a hard one, actually. If you conceptually take the controller as the controller of when events take place and how the identity is represented by the agent, then I guess you could say that without the controller we don't necessarily have a complete agent. However, it is possible to have an agent that just automatically responds in a given way based on static criteria. But then I think in that case there's an argument that the static configuration for that agent is the quote-unquote controller. The controller is a common paradigm that we see throughout the agent ecosystem, but it's not the only way to do it. So I wouldn't get too hung up on whether the inclusion or exclusion of a controller makes an agent, as long as the agent is helping us represent the identity owner and exchange DIDComm messages. As an example of something else that could or could not be considered a controller: a mobile agent has a UI that sits directly on top of it, and that UI is controlled directly by the user. So you could call the UI the controller of that agent, though again, I'm not sure that's, strictly speaking, a term that is particularly common in that sense. In addition to the controller and the admin API, we also have a plugin interface; we'll be taking advantage of that plugin interface today, and I'll get more to that in a moment. Then of course, ACA-Py interacts with agents over the internet, so we again use the transport to send and receive messages from these other agents, and ACA-Py will also interact with distributed ledgers, especially to anchor public DIDs so that it can issue credentials to holders or, I should call out explicitly, verify those credentials.
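Coming back to the controller and webhook pattern described a moment ago, the business logic can be as small as a function that maps an incoming webhook to the next admin API request. The topics and states below mirror ACA-Py's style, but the endpoint path and attribute names are illustrative, not a complete API reference.

```python
# Toy controller decision logic: given an ACA-Py-style webhook (a topic
# plus a JSON payload), decide the next admin API request to make.

def next_admin_call(topic: str, payload: dict):
    """Return (method, path, body) for a follow-up request, or None."""
    if topic == "connections" and payload.get("state") == "active":
        # A connection just completed: our business rule says offer a
        # credential over that connection.
        return (
            "POST",
            "/issue-credential/send-offer",  # illustrative path
            {"connection_id": payload["connection_id"],
             "credential_preview": {"member": "yes"}},
        )
    if topic == "issue_credential" and payload.get("state") == "credential_acked":
        return None  # exchange finished; nothing more to do
    return None
```

An HTTP server receiving the webhooks and a client posting the resulting requests wrap around this core, but the decision function is where the governance and issuance rules actually live.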
So this is a bit of a reiteration of the conceptual discussion that we had a moment ago about the responsibilities of an agent, but ACA-Py, again, is responsible for securely storing the secrets in the agent, such as the keys associated with our credential definitions and our verifiable credentials themselves. It also holds those other cryptographic values crucial to helping us, in an AnonCreds ecosystem, prove non-revocation, prove that disparate credentials were received by the same individual, and so on. These are all responsibilities of the agent and all functionality that the ACA-Py agent has. Then it connects with other agents and interacts with distributed ledgers, which I already said. So, ACA-Py has an increasingly mature plugin system. This is something that I've been really involved in from the start, so I'm quite proud of this fact. The plugin system enables us to do things like plug in a protocol implementation. We have a generally standardized set of protocols that ACA-Py supports, and it's got a very batteries-included mindset, so it's possible that you won't need any plugged-in protocol implementations, very possible in fact. But having the ability to bootstrap your own custom protocol for a use case, or model out a protocol before pursuing standardization of it if you so choose, that's something we can accomplish with plugins. We've had opportunity to do that at Indicio quite regularly to help us meet new and interesting use cases for our clients. In addition to protocol implementations, we can also tack on additional admin API endpoints. Often these might be associated with those additional protocol implementations, in order to trigger the messages being sent or whatever actions are necessary to proceed with the protocol. But it could also just be adding functionality to ACA-Py, period.
For instance, ACA-Py might not, right now, support directly signing an arbitrary message string with a key associated with a DID. If you wanted to expose that as an admin API endpoint available to the controller, you could write a plugin that took care of that for you. You can also add custom event handlers within a plugin, again closely related to the protocol implementations and the admin API endpoints. ACA-Py has an event bus which fires key events as triggered by protocols and other exchanges taking place, and if we wanted to tack on some additional functionality, for instance a monitoring plugin gathering some basic statistics, like how many credentials have been issued or what have you, that could be a plugin that's just listening to events and pushing to your logging system. Additionally, we have an ecosystem of DID resolvers. These are components that help us resolve metadata about DIDs: the DID documents, which contain keys, service endpoints, things that are critical for us to be able to communicate over DIDComm, but also critical for being able to verify credentials, of the W3C JSON-LD variety especially. Yeah, and it's also possible to plug in separate message queuing systems. ACA-Py by default has HTTP transports; you can also use a Redis or Kafka queuing system, and there are plugins available for either of those, which can help you tailor your horizontally scaled ACA-Py cluster to your needs, or integrate it better with your already existing infrastructure. I highly recommend the plugin functionality; again, I'm really proud of this system within ACA-Py. We will be using this plugin system today for the indirect protocol implementation that we mentioned as part of the toolbox, and this indirect protocol implementation allows us to remotely control Aries Cloud Agent Python through the toolbox UI.
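The event bus idea behind that monitoring example can be sketched simply: subscribers register a pattern over topic strings, and the bus fires every matching handler. This is a toy in the spirit of ACA-Py's event bus, not its actual API, and the topic strings below imitate its style but are illustrative.

```python
import re

class EventBus:
    """Minimal publish/subscribe bus: handlers subscribe by regex over
    topic strings and are fired for every matching notification."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, pattern: str, fn):
        self._subscribers.append((re.compile(pattern), fn))

    def notify(self, topic: str, payload: dict):
        for compiled, fn in self._subscribers:
            if compiled.search(topic):
                fn(topic, payload)

# A "monitoring plugin" that just counts credential issuance events.
issued = []
bus = EventBus()
bus.subscribe(r"issue_credential", lambda topic, payload: issued.append(payload))

# Only the first topic matches the subscription.
bus.notify("acapy::issue_credential::credential_acked", {"cred_ex_id": "cred-1"})
bus.notify("acapy::connections::active", {"connection_id": "conn-1"})
```

A logging or metrics plugin in the real system is exactly this shape: passive, event-driven, and entirely decoupled from the protocol handlers that emit the events.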
So for our setup today, this is generally what it will look like, and I'll continue to point out what is what as we go through our hands-on demonstration. We have the Aries Toolbox, which is the UI that we'll be looking at, and then we will have an ACA-Py agent with the toolbox plugin installed, and this is what will be remote controlled by the Aries Toolbox. We'll have two instances of this full setup, one for Alice and one for Bob, and two different Aries Toolbox windows which represent our view into Alice and our view into Bob. So these are just a couple of the projects within the Hyperledger Aries project. There are many other agents and frameworks and other tools available underneath the Hyperledger Aries umbrella, as well as, of course, within the other Hyperledger projects. But I wanted to call out a few of them here just so you're aware of the other big players. I say that casually, I guess; whatever meets your needs is the big player for you, of course. First of all, the Aries Framework JavaScript project. This is a very similar project to ACA-Py; there's a lot of shared thought space in that sense, it's merely an implementation in JavaScript. They have slightly different approaches, and it's a TypeScript project, so if strong typing is your thing, Aries Framework JavaScript is definitely for you. It also serves as the base for another project within the ecosystem, which is Aries Bifold. This is right now kind of the open source mobile agent implementation. It's been around for a while now and it's continuing to gain momentum. We use it quite heavily at Indicio. I think there's a team at the Government of British Columbia that's also really involved in the development of Bifold, and there are many other players actively working on Bifold as a mobile agent implementation. There is also Aries Framework Go, which is a Go-native Aries framework.
So, Aries Framework Go, just to briefly call out: if you go and look at ACA-Py, and then you go and look at AFJ, and then you go and look at AFGo, you'll notice that there's a bit of a different approach taken within Aries Framework Go. There's not a better approach one way or the other. They've just chosen to go with a more Go-native approach, so all their components up and down the stack are written directly in Go, which could be a huge benefit for you if you're not interested in using some of the shared libraries, such as Aries Askar, which I'll talk about in a moment. But it is a very mature and very well supported Go alternative for DIDComm using an Aries framework. Also notable is Aries Framework .NET. So if you're in the .NET ecosystem, there's code out there for you as well. I'll call out that Aries Framework .NET is much more in a similar vein to Aries Framework JavaScript, in that it takes advantage of some shared libraries as opposed to going hardcore full stack everything-in-.NET or anything like that. Another critical project to make you aware of within the Aries ecosystem is the Aries Agent Test Harness. This is a code base that enables us to validate the compatibility of our protocol implementations across these different frameworks. The Aries Agent Test Harness can spin up ACA-Py and spin up an Aries Framework JavaScript agent, and then, using back channels that have been developed for this purpose, trigger interactions between ACA-Py and AFJ, observe the results, and see whether they met the expected end criteria. It's then able to construct a compatibility matrix between all the different agents and frameworks, as well as determine the compatibility of specific protocols between these different frameworks. It's a really valuable tool for helping us validate that we are all on the same page as a community. Those tests are run regularly, and there's a website where you can view the results of those regular runs.
I don't have that link offhand, unfortunately; I might be able to grab it, or one of the other Indicio staff could, if they happen to know where it is. Let's see, other Aries agents and services. We have Aries Static Agent Python. This is a library that's intended for developing static agents, as I described previously. I have to plug this one because I'm actually the core maintainer on that project. It's a good code base; I highly recommend it, and if a static agent fits your needs, go for it. Then we also have the Aries Mediator Service. This is actually ACA-Py bundled with additional infrastructure and quick-start setup scripts to help you get to a point where you can use ACA-Py as a mediator for use with your mobile agents. This includes a starter controller which you can then customize to your needs for your mediator service. And then we also have Aries Askar, which I'm kind of using to represent a wider category of projects within the Aries ecosystem: shared libraries that we depend on for some critical pieces of the agent stack. Aries Askar, for instance, is a secure storage implementation, which is used by ACA-Py and will be used by AFJ in the near future. So it's generally helpful for having a clean, well vetted interface for pluggable storage. It supports multiple back ends. So if you are running in a mobile context, for instance, you're not going to have a remote Postgres database that you're connecting to, generally speaking; you'd probably prefer a local SQLite database, and Aries Askar makes it possible to use either one. It's possible to support other databases, but I believe the only databases currently directly supported by Askar are SQLite and Postgres. You'll also see the Indy SDK mentioned throughout several of these Aries projects. We are in the process of phasing out usage of the Indy SDK within the Aries stack in favor of Aries Askar and a couple of other shared components.
The Hyperledger AnonCreds project kind of plays into that as well. Okay, then finally, one other code base, one other project I should say, to call out here is the Hyperledger Aries RFCs repository. An RFC, if you're unfamiliar, is a request for comments, and this is actually where a lot of the content for our workshop today was sourced. This is where we as a community gather to answer some of the difficult questions and then document some of those answers in a way that helps us standardize across the ecosystem. This is where the DIDComm v1 specification was incubated, and we'll get into more of those details in a moment. But it also serves as a really good knowledge base for developing your own agent or learning about the ecosystem of agents and DIDComm in general. And with that, that concludes my portion of the conceptual stuff, and I will hand it off to Char to talk about protocols and messaging. With all these windows, it took me a moment to find my mute button, but here we go. Thanks, Daniel. I'm about to talk about protocols and messaging. So we'll start with talking about DIDComm. What is DIDComm? DIDComm is commonly shortened a little bit; the long version is DID Communication, and we will refer to it as DIDComm. It is secure peer-to-peer messaging. Whereas verifiable credentials contain information about the subject, DIDComm is communication with the subject. And so why is this important? DIDComm is the way for entities that use DIDs and verifiable credentials to communicate securely and privately. So DIDComm provides the lower layer that enables the other Aries protocols that we'll talk about in this workshop.
It is the communication protocol used for sending and receiving credentials and presentations, but it can also be used for much more than verifiable credentials, including instant messaging, relationship-based user authentication, which refers to user authentication that uses verifiable credentials to verify the person logging in, as well as buying and selling. And so we will talk about some of the properties of DIDComm. The first: DIDComm is secure. We know that we are actually talking to the person we think we are talking to online. This goes hand in hand with private: we know that we are only talking to that specific person on the end of the line, and nobody is listening in on that interaction. What makes DIDComm unique in these respects is that it achieves this security and privacy without using some sort of third party or centralized service to secure the communication. So it is decentralized; there's no centralized identity provider viewing your information. DIDComm is interoperable, which relates well to the decentralization piece. You are not locked into a specific app or operating system or blockchain in order to use DIDComm, so it operates well between existing systems. DIDComm is transport agnostic. It is not dependent on the transport to make sure that the message is secure and private. You can use HTTP, WebSockets, Bluetooth, email, even carrier pigeon, as the running joke in the community goes. And DIDComm is extensible, so you can add messages to existing protocols, and you can even add additional protocols altogether. There are a few more properties worth mentioning. DIDComm is message oriented, and so it has one sender and one or more receivers, and when receiving a message, the receiver can decide to take or not take action in response. DIDComm is asynchronous: the receiver of the message is not required to respond in real time, immediately.
This is similar to email, where when you receive an email, you're not required by the technology to respond in real time. Thank goodness. Lastly, DIDComm is routable: you can route the message through another party, so sending a message from A to B can be routed through C, or through as many intermediate steps as you want. However, any agents in between the sender and ultimate receiver cannot view the contents of the message; it is end-to-end encrypted. We often see this in the case of a mediator, which Daniel mentioned earlier: an agent that routes messages to another agent and can hold messages while that agent is offline. And so, relating back to the transport agnostic property of DIDComm, the transports can be different during different segments of the route. I'd like to touch briefly on the interactions between these properties, between the transport, security, and interoperability properties. So the security of DIDComm is not dependent on a given transport; that security is taken care of by the DIDComm message itself and does not rely on the transport for security. However, transports and routing can augment security and usability. For example, by using HTTPS, many current agent implementations can benefit from perfect forward secrecy. This is a method that ensures that even in the event that a private key is compromised, an attacker could only decrypt messages from that single session, rather than the entire history. So it also removes reliance on a single private key. And it is a long-term goal to implement perfect forward secrecy within DIDComm itself. It's also important to note that requiring specific transports can damage interoperability.
So even if you have a favorite transport that you like to use, requiring one means that your solutions can't integrate well with the rest of the ecosystem, and people are encouraged to take advantage of transport capabilities through protocols rather than by requiring specific transports, to make these interactions more efficient. So to step back just a bit and look at the big picture, we'll look at the layers of DIDComm. At the lowest layer we have the transport. As we've said, this can be any transport. Moving one layer up, DIDs are very important to securing communications; in particular, peer DIDs are essential for DIDComm. These DIDs are exchanged directly between parties and are not on the ledger, unlike public DIDs, which are published to the ledger. And so participants exchange DIDs and use the signing and encryption keys and service endpoint information resolved from their DID documents. This enables secure, end-to-end communication. And so, moving up to the top, we have DIDComm protocols, which we will talk about now. This is a phrase coined by Daniel Hardman: a DIDComm protocol is a recipe for a stateful interaction. So this means that DIDComm is essentially a state machine that provides definitions of states and the transitions between them, and along with the state machine, the events and messages that cause transitions from one state to another. Many of the layers below DIDComm have their own protocols, including TCP, HTTP, protocols for managing DIDs, et cetera, but DIDComm protocols are different from those lower-layer protocols. DIDComm protocols build on top of those lower layers to define protocols with real-world social value. They allow you to do things like forming connections, requesting and issuing credentials, proving things, buying and selling, et cetera. So DIDComm is sometimes compared to REST, and we'll take a minute to look more deeply at that comparison. REST stands for representational state transfer.
DIDComm is defined by message-based, stateful interactions. REST uses a client-server model where there's a hierarchy between the server and the client: the server is generally considered to be the source of truth, and the client is dependent on the server. Whereas in DIDComm, the participants in the exchange are peers by default, with roles that are defined by a protocol. So in a particular scenario of a protocol, one participant might take on a role that means they're trusted to be the source of truth for that particular aspect, but at a basic level DIDComm doesn't have that same hierarchy. REST has the one-request, one-response system, whereas in DIDComm you can exchange as many messages back and forth as necessary to complete the interaction. REST uses HTTP methods to indicate which action should be taken, whereas DIDComm, as we've said, is transport agnostic, so it can't be dependent on HTTP methods; the action is determined by message type. REST uses paths to indicate action; DIDComm uses message types, which we'll talk about more, as an indication of what action should be taken. And finally, REST uses paths routed to handlers; DIDComm uses message types to route a message to a specific handler. So, now we will get into the ingredients of this recipe for a stateful interaction, into the components of a DIDComm protocol. We have, of course, the name and version; we have a unique URI that identifies the protocol; the message type names, which refer to the messages included in this protocol; the roles that the sender and receiver can take on in the protocol; and state and sequencing rules, which can be thought of as a state machine for how these messages and roles are expected to proceed. We also have the events and constraints that provide trust and incentives. So, for example, maybe a specific protocol requires that parties have exchanged credentials to verify their organization.
This gets more into the business logic side of things than the DIDComm protocols themselves. The protocol that we are going to look at today, to start off, is basic message. This protocol is simpler than other DIDComm protocols, so it's a great one to look at as an introductory example. And so we can look at what the components of this particular protocol are. We have the name, basic message, and the version, 1.0. Versioning roughly follows the semantic versioning specification; here we only include the major and minor version, those first two numbers, since the third number is the patch version, which does not impact the external API, and since DIDComm can be thought of as an external API, we can leave it off. The URI is https://didcomm.org/basicmessage/1.0, putting some of those pieces together. In this case, in terms of message type names, there's only one message, with the name "message." A little bit confusing; we'll see how in other protocols the message type name is not going to be "message" itself, and there can be more than one message type. The roles here are sender and receiver. There are no real state changes going on; the events are simply send or receive, and there are no constraints. So let's take a look at what this message looks like. We will look more closely at this type attribute. Type is a URI that resolves to documentation for this message, and it helps the receiver of the message identify what to do in response. So we'll take this apart and see where the different parts come from. The first part is the document URI. This serves as an identifier for the body that created or standardized the protocol, and here it's an official didcomm.org protocol. The next part of the string is "basicmessage"; this is the protocol name. Then we have the protocol version, and then the message type name. Again, confusingly, "message" itself.
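Putting those pieces together, a complete basic message might look like the following. This is a sketch based on the attributes just described; the `@id` and timestamp are made-up values.

```python
# A basic message agent message (illustrative values)
basic_message = {
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "@id": "123456780-1234-5678-9abc-def012345678",  # made-up UUID
    "~l10n": {"locale": "en"},                       # localization decorator
    "sent_time": "2019-01-15 18:42:01Z",             # made-up sent time
    "content": "Your hovercraft is full of eels.",
}

# The type attribute ends with the protocol name, version, and
# message type name, in that order
assert basic_message["@type"].endswith("/basicmessage/1.0/message")
```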
Everything before the message type name is called the protocol identifier URI, and as a whole this string is referred to as the message type URI. Now that we've unpacked type, we can look at the other attributes in the message. The message type determines what attributes are expected to be present in the message. The ID is a universally unique identifier for the message. In basic message we also expect to see a localization decorator, a sent time attribute, and a content attribute. So now we'll look at a more complex example, from the issue credential v1 protocol; this is the offer credential message in particular. We can recognize pieces of the type attribute: it has the same document URI, didcomm.org; issue credential is the protocol name; it's version 1.0; and then offer credential is the message type name. And then we can see the attributes expected for this message include comment, credential preview, and offers attach. So that's a more complex example. Now, there are two layers of messages that work together to enable interoperable communication through DIDs: agent messages, which is typically what is meant when we say message, and the encryption envelope, which is also called the packed message. So agent messages are sent between identities to accomplish some shared goal. This could be establishing connections between identities or issuing and presenting verifiable credentials. The examples we've looked at so far, the basic message and offer credential messages, are agent messages. And then a packed message is a wrapper around an agent message, enabling messages to be securely sealed while in transport from one agent to another. So you can see how this could easily be referred to as an envelope or encryption envelope. These use authenticated or anonymous encryption. Anonymous encryption would be where the sender is not able to be known by the receiver; that's only used in limited circumstances.
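The decomposition of the message type URI described above can be shown as a small helper. The `parse_message_type` function is a hypothetical illustration, not a library API; it simply assumes the doc-URI / protocol / version / type layout just explained.

```python
def parse_message_type(uri: str) -> dict:
    """Split a message type URI into its parts (sketch, assuming the
    layout: <document URI>/<protocol name>/<version>/<message type name>)."""
    doc_uri, protocol, version, msg_type = uri.rsplit("/", 3)
    return {
        "document_uri": doc_uri,
        "protocol_name": protocol,
        "version": version,
        "message_type_name": msg_type,
        # Everything before the message type name:
        "protocol_identifier_uri": f"{doc_uri}/{protocol}/{version}",
    }

parts = parse_message_type("https://didcomm.org/basicmessage/1.0/message")
# parts["protocol_identifier_uri"] == "https://didcomm.org/basicmessage/1.0"
```

The same helper applied to the offer-credential type URI would yield "issue-credential" as the protocol name and "offer-credential" as the message type name.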
We typically use authenticated encryption. Packed messages are end-to-end encrypted, as we have mentioned: even if a message is routed through one or more other agents on its way to the final destination, whatever agents are in intermediate steps cannot access the contents of that message. All they see is the outside wrapper, the encrypted packed message. And this encryption envelope is a derivative of the JSON Web Encryption specification. There are ongoing efforts to further standardize this encryption envelope to better align with preexisting standards. So, back to our example agent message: we will see what it looks like to transform it into a packed message. We will pass it along into a pack method that takes the plaintext agent message as the message parameter, along with their verification key and my signing key. And so the verification key refers to the public key in an Ed25519 public-private key pair; the signing key is the private key of the pair. And from this operation results a packed message that looks something like this. Some of these values get really long, so we've cut them off for brevity. The ciphertext attribute is the encrypted agent message itself. The IV is an initialization vector that is used in the encryption and decryption of the ciphertext. Protected is a base64-encoded JSON object containing headers for the packed message. And then the tag ensures that the protected headers have not been altered between the sender and the receiver. So everything that we've been talking about regarding DIDComm so far is about version 1. Version 2 was ratified this past summer, and adoption is rapidly advancing. It's important to note that most protocols work in both version 1 and version 2 contexts; a few require modifications, but mostly they are interchangeable. Again, they work fundamentally in the same way.
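The envelope shape described above can be sketched as follows. The field values here are placeholders rather than real ciphertext, and the header values (`Authcrypt`, the cipher name) are my reading of the JWE-derived Aries envelope format; treat the specifics as assumptions.

```python
import base64, json

# Headers describing how the envelope was encrypted (assumed values)
protected_header = {
    "enc": "xchacha20poly1305_ietf",   # symmetric cipher (assumed)
    "typ": "JWM/1.0",
    "alg": "Authcrypt",                # authenticated: sender is known to receiver
    "recipients": ["<per-recipient encrypted keys would go here>"],
}

# The four attributes of a packed message, with placeholder values
packed_message = {
    "protected": base64.urlsafe_b64encode(
        json.dumps(protected_header).encode()).decode(),
    "iv": "<initialization vector>",
    "ciphertext": "<the encrypted agent message>",
    "tag": "<integrity check over the protected headers>",
}

# Any party can decode the protected headers without decrypting the payload
decoded = json.loads(base64.urlsafe_b64decode(packed_message["protected"]))
```

Note that only the headers are readable in transit; the agent message itself lives entirely inside `ciphertext`, which is what gives the end-to-end property.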
That's why it's been relevant to still talk about v1, but v2 does have some significant improvements over v1, in that modifications have been made to allow it to be more widely applicable outside of the original Hyperledger Aries community where DIDComm v1 originated. Just a few distinctions to call out. One is the message structure, split between headers and body. These next two go hand in hand: the sender's DID is included in every message, and using that DID you can resolve the key info. So peer DIDs are now resolvable and contain the information that would have previously been included in a connections invitation, and this means that special handling of peer DIDs during DID exchange is no longer needed. So we can look at a side-by-side of a DIDComm v1 versus v2 message. This is an example of another protocol, the trust ping message; this is a standard way for agents to test the connectivity, responsiveness, and security of a pairwise channel. So there is a from attribute in v2 that contains the sender's DID, and then we can also see that the body is split out. All right. So that was a lot of information on DIDComm. That's the information that we have prepared before we move to a brief intermission; we will next move into the demo portion of the workshop, but first I just wanted to see if there were questions that have come up. I have a question. Is it possible to have agents with these protocols in a mixed environment, so you have DIDComm v1 and v2, with some agents talking v1 and some agents talking v2; does that work in a joint environment? I believe so, since most protocols work in both environments, but Sam, feel free to jump in with more nuance there. That's correct; DIDComm v2 was designed so that you can detect whether an inbound message is version 1 or version 2. So the Aries adoption of DIDComm v2 is underway.
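The side-by-side just described can be sketched like this. These are illustrative trust ping messages; the UUID and DID are made-up values, and the exact type URIs should be checked against the specs.

```python
# DIDComm v1 trust ping: "@"-prefixed headers, attributes at the top level
ping_v1 = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",  # made-up UUID
    "response_requested": True,
}

# DIDComm v2: headers (type, id, from) are split from the body, and the
# sender's DID travels in every message via the "from" header
ping_v2 = {
    "type": "https://didcomm.org/trust-ping/2.0/ping",
    "id": "518be002-de8e-456e-b3d5-8fe472477a86",
    "from": "did:example:alice",                    # made-up sender DID
    "body": {"response_requested": True},
}
```

The structural differences are exactly the two distinctions called out above: the header/body split, and the sender's DID being present on every v2 message.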
And we expect most projects, all projects that I know of that are going on, will have a period where they accept both DIDComm v1 and DIDComm v2; of course, after some period of transition, DIDComm v1 will be deprecated and DIDComm v2 will remain in production. Thanks. So, an interesting question in the chat: what are best practices for storing DIDs as a holder? I think a lot of the same recommendations around storing private keys apply as well to storing DIDs and their private key material. Within the Hyperledger Aries ecosystem, we are fond of using Aries Askar as a secure storage implementation. I should turn on my video, sorry. It takes all the appropriate measures, like zeroing out memory after a private key has been used, and so on. There are, of course, other secure storage solutions out there, and the Indy SDK is still pretty widely used. It even has a slightly stricter key interface, which means that you're not even able to pull the private key material out of your own wallet; it acts as a "software secure enclave," which is the term that's been used for it. We are in the process of phasing out the Indy SDK, though, but it's still definitely a valid mechanism for storing those secure values. I think we also see HSMs as a highly recommendable piece of hardware to be using in the future, especially in the cloud context, where HSMs are already a frequent fixture of security. The primary issue we have in that sense is that a lot of the protocols, sorry, protocols is an overloaded term, a lot of the cryptography that we're currently using has not yet bubbled all the way down to the HSM layer; there are some notable exceptions. And I'm aware of more than one effort within the community to enable HSM support within the context of DIDComm and Aries more generally, and anonymous credentials. But yeah, best practice: store them as securely as possible.
If you do not have experience in that area, I highly recommend relying on a solution like Aries Askar. Another question that came up in the chat as a direct message, but I feel like it's worth sharing since many people could benefit. It's a good question: do we need blockchain to implement this, DIDComm itself? I talked earlier about how we use peer DIDs to communicate, so it's peer-to-peer communication; those DIDs are not going on the ledger. We are not interacting with the ledger when we are using DIDComm. You'll see in the next sections, when we get into the ways that we use verifiable credentials, that is where you do need blockchain; you do need certain information that is public on the ledger. But using peer DIDs through DIDComm, you do not need blockchain to implement that. Cool. I think now is a good point to officially have an intermission. So if you'd like to continue asking questions in chat, please feel free, but let's plan on coming back in about five minutes, so maybe at the 35-minute mark. That sounds great. All right, welcome back, everyone. Once again, I hope you were able to stand up and stretch your legs for a moment as we get back into our hands-on portion. So, we're going to be doing a bit of a tag team between myself and Char. I'll take us through some of the hands-on portions, and you can follow along at home for that, and then I'll pass back to Char to discuss a little bit more of what we just examined, or to discuss the next portion that will be coming up. So I will go ahead and share my screen for that. Real fast, before I get into things, and I apologize for repeatedly bringing up the handout: the handout is here, and it is a good resource as we go along, especially in this hands-on portion, as I'll be running these commands. There are other useful resources linked here as well. Again, I will be doing my best to repeat the instructions as I go along.
So the first time around, don't be too worried about making sure that you're following along at the right speed or anything like that; I'll go back and repeat the instructions. Let's see. Also, the presentation will be shared after the fact, so if you want to go through and check out the slides, we have actually included screenshots of all these steps on the slides. So you don't necessarily have to watch the whole video just to see what step we took on a certain interaction; that will act as a resource for you as well after the fact. So, let's see, I'm going to keep my browser there; just let me get my windows arranged slightly. We are going to start off by getting the Aries toolbox running. So, first we need to open a terminal and navigate to the directory where we have cloned down the aries-toolbox repository. In my case, this is in the aries-toolbox folder; that is likely different for you. And then let's do a few things just to make sure that we've got the most recent code available to us. So we'll start off with a quick git pull. If you're already up to date, excellent; if not, then we should be now. We can then do a quick npm install. I've already done that previously, so I won't run and wait for this command again. But then, after running the npm install, we can do an npm run dev, and that will bring up the Electron window after doing a few intermediate steps. So let's wait for that to come up here in just a second. And here we go. And I've got some old connections to go clean up there. Okay, so after running npm run dev, you should see a window pop up that looks something like this. If you have technical difficulties, I encourage you to reach out on the Discord channel for Aries; we've got people standing by that can help you out as much as I can. No promises that I'd be able to figure everything out, but I should be able to get you through some of the most common issues at least.
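The toolbox setup steps just described boil down to a few commands. Your clone location will differ from the one shown, and the folder name here is an assumption about where you cloned the repository.

```shell
# Assuming you have already cloned the aries-toolbox repository
cd aries-toolbox   # your clone location may differ
git pull           # make sure the most recent code is available
npm install        # install dependencies (only needed the first time)
npm run dev        # builds and opens the Electron toolbox window
```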
And shout out to Colton and Lynn and all those who are helping answer questions in the chat. Cool. So now that we've got the toolbox window open, we are ready to open up another terminal window; I've got a separate one here. This is for the Aries Cloud Agent Python agents that we will be pulling up to accompany our toolbox instance. So first we start off by changing directory into the location where you've cloned the ACA-Py toolbox plugin repo. Again, this is likely to be different from my location, but this is where I've got it. Inside of this folder there's going to be a demo folder; we need to change directory into that. And then from here we are ready for our next command, which is pulling up the Docker Compose setup. In my case, on my machine, I have to use sudo; there's a good chance you don't need to, depending on your environment, but I will be including it here. So we do docker-compose -f, which allows us to specify the file that we'll be running; in this case we're using the docker-compose.alice-bob.yml file, and then we add up --build on the end of that. Again, these commands are inside the handout if you'd rather just copy and paste those values. Right, I'll give that just a second to pull up. If you're running this for the first time, you will have to wait through an image build for the toolbox image; it shouldn't take too long. And then before long you should see several containers started up here, and some really large QR codes. These are convenient if you're connecting to a mobile agent, for instance, through an automatically generated connection invitation. We use this quite regularly when we're testing out Aries Bifold, for example. There are also other ways to get a QR code representation of our invitation URLs. So, excuse me, a quick explainer on the containers that we're looking at here: we see an agent Alice and an agent Bob, and these are the actual ACA-Py containers.
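As above, the agent startup is just a directory change and one Docker Compose command. The repo folder name is an assumption about your local clone, and sudo may or may not be needed depending on how Docker is set up on your machine.

```shell
# From your clone of the ACA-Py toolbox plugin repo (location may differ)
cd acapy-plugin-toolbox/demo

# -f selects the compose file; up --build builds images and starts containers.
# Drop sudo if your user can talk to the Docker daemon directly.
sudo docker-compose -f docker-compose.alice-bob.yml up --build
```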
The ones that are critically performing the functions of an agent. We also have a couple of tunnel agents. These are just a supporting feature for this demo setup: by including the tunnels, we make it really easy to connect from a mobile agent, which isn't going to be on the same machine, and certainly isn't going to be within the same Docker networking namespace. So by including the tunnels, we are able to connect to external devices and can even connect across the internet. So that is something to bear in mind: this is opening a tunnel to a container on your local machine. There's also a local Docker Compose variant that allows you to skip that part if you prefer. Okay. There's also a tails server included in here. That is a piece of infrastructure that you need in order to do AnonCreds revocation. We will not be doing that today, but this config has all the pieces that you need in order to do it. Okay, so now that we've got our agents started up, we are going to connect our agents to the Aries toolbox. So before the giant QR codes, there is a line that should look like this; it says invitation URL, connections protocol. We are going to copy that URL and paste it into our toolbox window where it says new agent connection; paste that in and then connect. We'll see a little bit of output from whichever agent we copy-pasted first, which indicates that the toolbox and the agent are now connected, and then we should see a little card pop up for Alice. We now need to repeat that step with Bob. So I'll copy that and paste it into the new agent connection; it connects, and now we see both the Alice and Bob admin connections available to us within the toolbox. For the rest of our hands-on portion, we'll be using toolbox windows that are opened by clicking the open text underneath each of these agent cards.
And I'll try to arrange my windows in such a way that you can see some of the values at the same time; at least make that a little bigger. So these are our Alice and Bob windows, and as we were discussing earlier, these are more or less a window into the agents that are running within the Docker containers. ACA-Py doesn't generally provide a UI on top of the agent; that is something we are providing in the toolbox in order to help us click around the interface a little more easily and be able to go through these sometimes quite complex interactions with minimal headache. ACA-Py does include the admin API, as we discussed, and if you navigate to the admin port for ACA-Py, you can see a quite convenient Swagger UI, which gives us the raw admin API endpoints. You can complete all the same operations that I'm doing with the toolbox here in that UI, using the REST API of the admin API; it just takes more copying and pasting and know-how in order to do that. So we like to use the toolbox for demonstration purposes, so you don't have to worry about all that. But yeah, there's no reason you couldn't recreate the same UI using the REST interface. We just find it quite useful and convenient to be able to do it through a DIDComm interface provided by the ACA-Py toolbox plugin. So these are the windows that we will be using. Let me take you through a brief walkthrough of the UI just so you can have a little better orientation before we move on to the next steps. So first, the nav menu on the left: this is where we access most of the functionality. So this first section, where it says toolbox to agent, represents the direct protocol implementations that we referenced earlier. These are interactions that take place directly between the toolbox and, in this case, the Alice agent; in the other window, it'll be directly between the toolbox and the Bob agent.
So if we click on discover features, for instance, what's actually taken place is that the toolbox has sent a DIDComm message called the feature discovery query message, which asks the agent to report the protocols that it supports. We see there are a couple of legacy iterations of these protocols, but we also see the didcomm.org versions of those protocol identifiers. By looking at this list, we can see that the connected agent supports coordinate-mediation 1.0. It also supports the introduction service, discover-features 2.0, routing, issue-credential 2.0, and so on. It's a comprehensive list of all the features implemented by ACA-Py, and you will see repeats between the legacy protocol identifier and the current protocol identifier; these represent the same protocol implementation, however. Another key point to call out here is that we also see these hyperledger/aries-toolbox docs protocols, such as the admin basic message protocol. In contrast to the didcomm.org ones, these are not official didcomm.org protocols; they could be, but they're not currently. The document URI points to where you can find docs for these messages within the GitHub repo for the Aries Toolbox. So the Aries Toolbox defines these protocols that allow us to remote-control agents that are connected to it. The admin basic message protocol allows us to control the basic message functionality, admin routing allows us to control the routing, admin transaction author agreement allows us to work with the transaction author agreement, and so on. All right, so that's discover features. We also have the ability to send a basic message directly to the agent from the toolbox, which is really quite helpful. Often when you're creating an agent for the first time, basic message is the first protocol that you might implement, just because it's basic and easy.
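For reference, the feature discovery query described above looks roughly like this as a plaintext agent message. This is a hedged sketch: the type URI follows the Discover Features 1.0 convention, but the exact field set here is illustrative rather than normative.

```python
import json
import uuid

def discover_features_query(match: str = "*") -> dict:
    """Ask the connected agent to report the protocols it supports."""
    return {
        "@type": "https://didcomm.org/discover-features/1.0/query",
        "@id": str(uuid.uuid4()),
        "query": match,  # "*" matches everything; narrower patterns are possible
    }

print(json.dumps(discover_features_query(), indent=2))
```

The agent's reply is a "disclose" message carrying the list of protocol identifiers you see in the toolbox.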
So this is a useful way for us to trigger a message being sent to an agent and see whether it was correctly decrypted on the other side. If I did this here now, it would trigger a somewhat confusing interaction that I'll explain a little more later on, so I won't go into the details of that. So this is us sending a message directly to the agent. We also have a compose window, which allows us to manually construct, or copy-paste in, the plaintext of an agent message to be sent to the agent in the background. This is arguably one of the most useful features of the toolbox, in my opinion. All of these functions are great and help us do things quicker, but this is one that's very unique to the toolbox: we have the ability to directly construct a message and send it, with the proper encryption and all that, to the agent in the background. DIDComm generally operates at the agent message level, but messages must then be transport-encrypted, and it can be difficult to get to that encryption step quickly. So the toolbox is a vehicle for us to worry about just the plaintext of the agent message, without needing to worry about converting it to the correct encrypted envelope to be sent off to the agent. Included in this compose window, we see the five most recent messages that were sent and received by the toolbox; we can see that there was a basic message that was received recently. I'm going to go ahead and trigger this trust ping message, which was automatically populated in the compose window, just as a quick example. And we see that a ping was sent and a ping response was received back from the agent: this one was sent by the toolbox, and this one was received from the agent in the background.
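The trust ping that just went out is about as simple as agent messages get. Here is a hedged sketch of its plaintext; field names follow the trust ping protocol convention, so treat this as illustrative rather than the normative definition.

```python
import uuid

def trust_ping(comment: str = "ping!") -> dict:
    """A minimal trust ping; the agent should answer with a ping_response."""
    return {
        "@type": "https://didcomm.org/trust_ping/1.0/ping",
        "@id": str(uuid.uuid4()),
        "response_requested": True,  # set False for a one-way ping
        "comment": comment,
    }
```

A message like this is exactly what you could paste into the compose window; the toolbox takes care of wrapping it in the encrypted envelope.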
We can also view the full message history here if we choose, and this gives you more of a view of the internals of how the toolbox and the agent are interacting, as opposed to what's going on between Alice and Bob as we go on to connect them and trigger some interactions there. So this is just between the toolbox and the agent. By analyzing these messages we can get a sense for what messages are being exchanged between Alice and Bob, but we do not have a direct log of those here; that's a feature that would be good to add, but we haven't gotten to that point yet with the toolbox. Yeah, so that's a quick walkthrough of the toolbox UI. As a quick review, if you are still on the connection steps: you can copy these URLs, paste them into the new agent connection box, and hit connect, and that should get you these little cards that represent Alice and Bob; the "open" text will open up the windows that I'm using here. And with that, I'll pass it back to Char for our next steps. Great, thank you Daniel. So now that we have started up the toolbox and the agents, we can move on to connecting them. I'll talk about what's going on in the background and then pass it back to Daniel for the live demo. We will start by forming a connection between our agents, Alice and Bob. Alice and Bob both have their edge agents, their mobile devices, and then their mediator agents that are acting on their behalf. We've mentioned mediators a few times before; in this example scenario, where Alice and Bob are using mobile agents, a mediator is actually required for them to receive messages, since the edge agent itself will not have an endpoint. We're also dividing between Alice's domain and Bob's domain. What we mean by "domain" here is the devices owned by and under the control of Alice versus Bob, and for simplicity we can abstract away the multiple agents in each domain and just consider Alice and Bob.
So here Alice is going to create an invitation; as Daniel said, this can be a QR code or an invitation URL. This invitation is out of band, which means that it's not communicated directly through DIDComm, because that connection channel has not yet been established. The invitation that Alice sends to Bob contains an initial connection key, endpoint info, routing keys, and a label. The initial connection key is used exactly once, just for sending the connection request message back after receiving this invitation. The endpoint information and routing keys are sent to make sure that Bob can send a message back to Alice. The label is who Alice says she is; this is not a verifiable attribute, more of a suggestion and a convenient way to identify this invitation. We can't absolutely trust it, and identity verification should happen after the connection is formed. Here's what this invitation looks like as an agent message; we might recognize some of these pieces from earlier. The message type URI shows that this is an invitation message within the connections v1 protocol, and then it has the information I mentioned: the label, keys, and service endpoint. After Bob receives this invitation, if he wants to connect with Alice, he sends a connection request message back to Alice's agent. The language here might seem a little unusual: Alice sent the invitation, and Bob is sending a request to connect in response. It's important to know that they are peers in this interaction and there are no imposed requirements. Alice can send an invitation to connect with Bob and give information on how to reach her, but Bob can choose to send or not send a request to actually connect with Alice; it's a request to elevate the invitation to the level of a connection. Depending on how the agents are configured, these steps can be automatically accepted, so it might feel more like a simple interaction of accepting an invitation and now you're connected; but even if it's automated,
this is what's going on in the background. So this connection request contains a label, which is similarly a suggestion of identity; Bob's DID, which he will use in the future to secure his communications with Alice; and his DID document, or DID Doc for short, which contains relationship keys, routing keys, and service endpoints: all the cryptographic information to secure future communications and allow Bob to receive messages from Alice. Here is what the agent message of the connection request looks like, and here we see that it includes the DID and DID Doc. Now Alice sends that information from her side back in a connection response message. This includes Alice's DID, which Alice will use in identifying her future communications with Bob, and Alice's DID Doc, which will allow them to communicate securely and allow Alice to receive messages from Bob. This is what the connection response agent message looks like. There are additional identifiers to make sure that Bob can correlate the request he sent out with this response, and the connection data that Alice sent to Bob is signed using the key from Alice's initial invitation, so Bob knows that this response came from the original creator of the invitation. So now the DIDs and DID Docs have been exchanged, and the connection is established. A quick note on forming connections using DIDComm: as we talked about before, these are peer-to-peer connections, but they are not necessarily direct; we can use a third party like a mediator. That third party, again, cannot see the contents of the message, because it is end-to-end encrypted between Alice and Bob. The connection is not a transport-layer connection like a socket or an open HTTP request; it is a DIDComm connection. The agents are connected and the interaction is considered complete when Alice and Bob have exchanged DIDs, keys, and service endpoints, the information held in the DID Doc, for future communication.
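To make the exchange just described concrete, here is a sketch of the invitation, the invitation URL, and the connection request as plaintext messages. The type URIs follow the connections v1 protocol discussed above; the key, endpoint, and DID values are made-up placeholders, and a real DID Doc carries far more than is shown here.

```python
import base64
import json
import uuid

def make_invitation(label, recipient_key, endpoint, routing_keys=()):
    """Alice's out-of-band invitation: a one-time key, how to reach her, a label."""
    return {
        "@type": "https://didcomm.org/connections/1.0/invitation",
        "@id": str(uuid.uuid4()),
        "label": label,                     # unverified; just who Alice *says* she is
        "recipientKeys": [recipient_key],   # initial connection key, used exactly once
        "routingKeys": list(routing_keys),  # mediator keys, if any
        "serviceEndpoint": endpoint,
    }

def invitation_url(base, invitation):
    """Invitations commonly travel as a base64url-encoded c_i query parameter."""
    encoded = base64.urlsafe_b64encode(json.dumps(invitation).encode()).decode()
    return f"{base}?c_i={encoded}"

def connection_request(label, did, did_doc):
    """Bob's reply: his label plus the DID and (abbreviated) DID Doc he will use."""
    return {
        "@type": "https://didcomm.org/connections/1.0/request",
        "@id": str(uuid.uuid4()),
        "label": label,
        "connection": {"DID": did, "DIDDoc": did_doc},
    }

# Placeholder key and endpoint, purely for illustration:
inv = make_invitation("Alice", "8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K",
                      "http://alice.example.com:3000")
print(invitation_url("http://alice.example.com:3000", inv))
```

The connection response mirrors the request with Alice's DID and DID Doc, signed with the invitation key as described above.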
And so, now that we've looked at what's happening in the background in forming a connection, I will pass it over to Daniel to show what that actually looks like using the toolbox. All right, so up to this point we've talked about the direct protocol implementations, the discover features and these over here, and we have not yet touched the agent-to-agent interactions. Once again, this is the toolbox telling Alice, the remote agent, to do something with another agent potentially, or to trigger some other action directly on the agent. We will start off by taking a look at the invitations tab here on the left. The invitations tab allows us to create a new invitation, which should hopefully be pretty self-explanatory. There is a perhaps rather complicated-looking form here. A lot of these values are of interest primarily to developers and those who are working on agents; for our purposes today, we do not actually need to modify any of these values, so the default values of the form are perfectly acceptable. But we do have the ability to instruct the agent to assign an alias to the connection, in terms of our internal representation of it, so we can easily identify it later. We can specify what the label should be in the invitation, assign some other metadata to it, and optionally select whether the connection should be formed through a mediator. Again, we won't be modifying any of these values. We do want to make sure that auto accept is flagged as true, so it'll be the blue background on that one. And then we are ready to create a new invitation. When you click that button you should see some activity going on in the background as we make requests to the agent and trigger different interactions. After clicking that create invitation button, you should also see a list item show up in the invitations list up above.
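Under the hood, the create-invitation button corresponds roughly to a POST against the agent's admin API. The query parameter names here match typical ACA-Py connections v1 routes, but verify them against your own agent's Swagger UI; this is a sketch, not a normative reference.

```python
from urllib.parse import urlencode

def create_invitation_url(admin_url: str, alias: str,
                          auto_accept: bool = True, multi_use: bool = False) -> str:
    """URL for POST /connections/create-invitation with the options from the form."""
    params = urlencode({
        "alias": alias,                        # our private name for this connection
        "auto_accept": str(auto_accept).lower(),
        "multi_use": str(multi_use).lower(),   # single-use unless told otherwise
    })
    return f"{admin_url}/connections/create-invitation?{params}"

print(create_invitation_url("http://localhost:8031", "bob"))
# http://localhost:8031/connections/create-invitation?alias=bob&auto_accept=true&multi_use=false
```

The response from a running agent would include both the invitation object and the ready-made invitation URL.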
If you do not see the list item, it's possible that you've got a connection timeout or something; you can refresh the list by pressing the refresh button manually. By clicking on the list item, we get a little more detail on what's included in the invitation. We see some of the properties of the invitation that were assigned on creation, and we see when it was created. We also have the ability to create a helpful QR code that allows us to scan this from a mobile agent if we choose. In our case we will not be using the QR code, but rather clicking the copy URL button, which will show us that the invitation has been copied to the clipboard, and then we can sneakernet this value over to Bob for acceptance. You can also expand these curly braces here to see a little more of the raw values that are being passed back and forth between the toolbox and the agent in order to represent this invitation, if that's of interest to you. So, in connections on the Bob agent now, we see a list of active connections; we can do the same on Alice as well. We see that both of them have a connection to the toolbox. That represents our window here actually connecting to the agent, and it's communicating over a DID with the other end of the connection, which is the toolbox. It's a little confusing, since we are both observing from the toolbox; we're observing ourselves in third person, so to speak. We do not need to worry too much about the details of that connection at this point, though, so we'll go ahead and just accept that as it is. And then we will copy the invitation URL into this box here on the connections tab within Bob, and click the add button. That was a rather rapid transition from pending to active state, but there was a period of time where it was in the pending state. It then automatically advanced to an active connection as the two agents exchanged messages asynchronously in the background.
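Pasting the URL and clicking add is, roughly, the receive side of the same admin API: decode the invitation out of the URL and hand it to the agent. The endpoint path is a sketch based on typical ACA-Py connections v1 routes; confirm it against your agent's Swagger UI.

```python
import base64
import json
from urllib.parse import parse_qs, urlparse

def decode_invitation_url(url: str) -> dict:
    """Recover the plaintext invitation from a c_i-style invitation URL."""
    c_i = parse_qs(urlparse(url).query)["c_i"][0]
    return json.loads(base64.urlsafe_b64decode(c_i))

def receive_invitation(admin_url: str, invitation: dict, auto_accept: bool = True):
    """(method, url, body) for POST /connections/receive-invitation."""
    url = (f"{admin_url}/connections/receive-invitation"
           f"?auto_accept={str(auto_accept).lower()}")
    return ("POST", url, invitation)
```

With auto-accept enabled on both sides, the request/response exchange then proceeds without further input, which is why the state flips to active so quickly.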
So it's possible that if you're connecting to a mobile agent, for instance, where it's not scripted to automatically accept the connection after scanning the QR code, it'll wait for user input before it sends the connection request, and at that stage you would see a pending connection for a little bit longer as we waited for the other end to accept the invitation. And then we see our connections on either end. So Alice has a connection for Bob; she can see his DID ("their DID") and her own ("my DID"). We see that the state is active, and we can again inspect the raw details of that connection record if we choose. Likewise on the other end for Bob, we'll see the two roles are inverted to represent the other end of the connection, and again, you can expand and view those connection details. So Alice and Bob are now connected. I will repeat those steps again briefly in case you're following along at home. I'm going to go ahead and delete both of these connections, just so I don't get confused later on. And then we will go to invitations; you can see that the original invitation disappeared. That's because it was consumed along the way: by default these are single-use invitations. You can specify a multi-use invitation if you'd like, and that enables multiple connections to be formed with the same invitation. Multi-use invitations are especially useful if, say, you're a mediator and you'd like to enable multiple clients to connect using the same invitation without having to regenerate one for each client that connects to you. You can make that multi-use invitation, post it on a web page, and then anybody who would like to use your mediator, after going through your paywall or whatever you have in place, can access that invitation and form a connection with your mediator. I'll go ahead and make a single-use invitation now. Same process: we'll expand the list item, copy the URL, and you should see a little notification that tells you it's hit the clipboard.
And then we can paste that value into the invitation URL box on Bob's side in the connections tab and click add. And again, it went from pending to active. If we check out the Alice window, we see Bob has been restored in our connections here. Alice and Bob are connected, and we have the ability to exchange messages. We'll get into more interesting messages as we go along, but let's take a quick look at just a basic message example being sent between Alice and Bob. If you click on the messages tab here in the agent-to-agent section, it pulls up a very rudimentary kind of chat window between Alice and Bob. You have to select which connection you're talking to, so I'll select Bob, and now it represents a chat window to Bob. So I can type out a message for Bob and hit send. We see that the message was sent, we can see the timestamp, and we see a little notification popped up on the other end that indicates a message was received from Alice. If we go to our messages on Bob's end as well and select the Alice conversation, we see the message from Alice show up in the list, and we can respond. And we see in real time these messages showing up back and forth between the two. So these are basic messages being sent back and forth between Alice and Bob. Basic message is itself a protocol, as we looked at previously when Char was going through our DIDComm conceptual section, and it represents a basic instant messaging protocol between Alice and Bob. These are not particularly sophisticated messages; it's basically just the content itself and the sent time. But there are ongoing efforts to make richer messaging systems out there in the world. So that's what we've got here: the messages are not going to be used for anything other than real, human messages sent between Alice and Bob.
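For reference, the basic message itself and the admin call that triggers it can be sketched like this. The plaintext fields follow the basicmessage protocol as described (just content plus a sent time); the admin route matches common ACA-Py releases, and the connection ID would come from the connection record shown earlier.

```python
import uuid
from datetime import datetime, timezone

def basic_message(content: str) -> dict:
    """The plaintext basicmessage as it travels between Alice and Bob."""
    return {
        "@type": "https://didcomm.org/basicmessage/1.0/message",
        "@id": str(uuid.uuid4()),
        "sent_time": datetime.now(timezone.utc).isoformat(),
        "content": content,
    }

def send_message(admin_url: str, conn_id: str, text: str):
    """(method, url, body) for the admin route that sends a basic message."""
    return ("POST",
            f"{admin_url}/connections/{conn_id}/send-message",
            {"content": text})
```

The agent does the rest: it wraps the plaintext in the encrypted envelope for the connection and delivers it to the other end.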
So we've gone through connecting Alice and Bob, and we've gone through a quick example of sending basic messages. These were all taking place using admin protocols in the background, which enable these interactions through the toolbox, and you can look at the message history in more detail if you'd like to see some of the internals of what's going on. So I'll pass it back to Char to talk about our next steps. Thank you Daniel. Now that we have connected the agents, we can practice issuing and verifying credentials between them. As we jump into credentials at this point in the workshop, I want to mention that we are focusing on the AnonCreds credentialing system in this demo, which, as was mentioned earlier, is a Hyperledger project with this nice new logo. AnonCreds enables zero-knowledge proofs, selective disclosure, and revocation. These are important privacy-preserving properties that we will describe and demonstrate in this next section. ACA-Py is also capable of W3C credentials; we are not covering that in this workshop, but Indicio is hosting a meetup later this month, on the 29th, on using ACA-Py to issue and verify W3C JSON-LD credentials, so please sign up for that. The link is in the chat and there's a QR code as well. So, the issuer is the entity that decides which credentials to issue to whom. You might think of how a university issues a diploma to a graduating student, or how a business might issue a credential to an employee to give access to facilities, for example. In order to issue credentials, the issuer interacts with the holder, who is the person or entity receiving the credential: the graduating student or the new employee in our examples.
The issuer writes to the ledger: they put certain public information related to the credential on the ledger so that when the holder wants to prove information from this credential, a verifier can read this information from the ledger and make sure that the credential was actually issued by this issuer. As Scott talked about earlier, but just to really emphasize it: a common misunderstanding is that personally identifiable information is put on the ledger. This is explicitly against the agreement that anybody who writes to the ledger must agree to. It's only the public cryptographic information necessary for verification that goes on the ledger, and we'll go into more depth and show the steps and what's happening there. So we'll talk about the issuer in the context of this diagram; we will build it up as we demonstrate issuing and verifying credentials, and it should look a bit familiar from Scott's section earlier. We are using these three entities; the names are intuitive, but just to be super explicit, since the roles are very important: the issuer issues the credential (that's the DMV, university, employer, etc.), the holder holds the credential (this is the subject associated with the credential), and the verifier verifies the credential when the holder presents it. So we'll look at the first steps that the issuer takes to issue a credential. They need to write a DID, schema, and credential definition to the ledger. Then they sign and issue a verifiable credential, and now the holder holds that credential. These are the same steps broken down, because we're going to look at each of them more closely and demonstrate them. So in order to complete the issuer's first step, we need to anchor a public DID to the ledger, write a new schema to the ledger (or we could retrieve a pre-existing schema from the ledger), and then create a credential definition and write that to the ledger.
And then we are able to actually issue that verifiable credential. So, looking at that first step, a few things to call out before we can anchor a DID. This is calling back to what Lynn said earlier: there are multiple roles when interacting with the ledger. There's an author, who can create transactions and issue credentials, but they need an endorser to endorse their transaction for it to actually be written to the ledger, and they must have their DID on the ledger. The endorser can endorse transactions to put them on the ledger, and their DID must have endorser status on the ledger. In this demo our issuer will have both endorser and author privileges, so we'll create the transaction and also endorse it to have it written to the ledger. And before anchoring a DID, we need to have signed the transaction author agreement. We are agreeing, like I said, to never put private, personally identifying information on the ledger. And with that, I will pass it over to Daniel for a demo of those steps. Cool, thank you Char. There were a couple of questions in the chat that I want to briefly address before we go on to talking about the DIDs, just to raise them up into the call and talk about them a little more. First one here: what is the limit, and the cause of the limit, on the number of messages that can be received per second? That is an excellent question. To give slightly more detail on the messages being exchanged between agents: we have known public asymmetric key values for either end of the connection. We then use those keys to derive a shared secret, and that shared secret is used to symmetrically encrypt the message contents into a ciphertext that is then passed over the wire. So the rate at which you're able to construct and encrypt these messages is dependent on the hardware you're using and what crypto facilities are available to that hardware.
So it's basically more complicated for an IoT device, for instance, to engage in the same crypto that we're using right now in this container. But the encryption, I think, tends to be the upper limit on how many messages we can emit per second. I don't have any hard numbers on that because it is a highly variable thing depending on your configuration and setup. And generally speaking, it tends to be the processing of the message contents, and whatever actions are triggered by the messages, that is actually more significant. For instance, in order to issue a credential, there are several quite CPU-intensive signing operations, key generations, and other operations that take place as a result of a message being received and prior to a new message being sent. So raw messages per second isn't really a common metric that we throw around within the Hyperledger Aries ecosystem, but it is certainly an interesting metric to have. I also kind of addressed this question already, but again, just to raise it for the recorded video and for the benefit of others: how would we go about going beyond one-to-one communication? This is a subject of ongoing conversation within the DIDComm working group about the right way to handle group communication. The specification technically allows us to send messages that are encrypted to multiple parties, but that's, I think, generally suboptimal in the wider context of group communication. We only have mutual authentication from one point to the next, and when we go to group messaging, there are some cryptographic breakdowns in that kind of group exchange. So there are ongoing conversations there.
This is actually quite a recent thing that's been seeing more intense discussion, so if you're interested in being more plugged into that, I highly recommend listening in on the DIDComm working group and user group calls, which occur on Mondays; you can find more details in the DIF calendar. One other thing I will briefly discuss as well: using DIDComm, you have a very multi-purpose, useful, secure, mutually authenticated communication channel, which means that we have pretty strong assurances that the person on the other end of the line is the person we think they are. Contrast this with a traditional client-server model, where we only have strong authentication on the server side and not on the client side, unless you have client certificates, which is a whole other thing, I guess. So, taking advantage of this really valuable cryptographic pairwise connection that we have, we can negotiate communications through other existing communication channels. An analogy that I recently explored on the subject: it's like you have messages being sent by mail going back and forth between Alice and Bob, and say Alice and Bob are sealing their envelopes with their special wax seals or whatever, so Bob knows a letter came from Alice because that seal is as expected. The analogy breaks down if you look at it too closely, but it holds at a high conceptual level. Negotiating a separate communication channel is basically the equivalent of saying in a letter from Alice to Bob, "here, call me at this time at this number, and we'll have a live conversation over the phone."
A similar process can be used within DIDComm to bootstrap communication through another mechanism, and then you can also negotiate, using DIDComm, whatever security or other secrets you need in order to make sure that you're actually talking to the person you think you are once you switch communication channels to that other negotiated system. So yeah, there are definite opportunities for bootstrapping communications using different methods. Cool. Okay, so with those questions addressed, returning to anchoring DIDs and preparing for issuing of credentials. First, we will be using Alice as our issuer for these interactions. As Char talked about, the issuer role is just a role; it's not strictly required that any particular kind of agent fulfills it. It is a little weird that we have somebody we are talking about as an individual being an issuer; that's not typically how it works. Usually it'll be a public organization who has good reason to put public values on the network. But for our purposes we'll use Alice and Bob as issuer and holder, just for simplicity in our demonstration and setup for today. So we will need to create a DID and then submit it to the network that we're using for demonstration today. In order to create a new DID we'll go to the DIDs tab, and before I click on this, I'm going to briefly prime you for what comes next. DIDs require that we sign the transaction author agreement required by the network. Since the DIDs tab has some operations that result in transactions being submitted to the network, before we are able to submit any values to the network we must first accept the transaction author agreement and make sure that a representation of that acceptance is included in all subsequent transactions. ACA-Py handles most of the details of that for you.
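The acceptance flow just described (fetch the agreement, store your acceptance, have it attached to every write) looks roughly like this against the admin API. The endpoint paths and the "on_file" mechanism name are common in ACA-Py and Indy setups, but both vary by network, so treat them as assumptions.

```python
def taa_acceptance(taa_record: dict, mechanism: str = "on_file") -> dict:
    """Body for POST /ledger/taa/accept, echoing back the fetched TAA text/version."""
    return {
        "text": taa_record["text"],
        "version": taa_record["version"],
        "mechanism": mechanism,  # how acceptance was recorded; network-defined
    }

# Typical sequence against a running agent (not executed here):
#   GET  /ledger/taa          -> result["taa_record"] holds "text" and "version"
#   POST /ledger/taa/accept   -> body built by taa_acceptance(...)
# After this, the agent includes proof of acceptance on subsequent ledger writes.
```

This is exactly what the toolbox pop-up is doing for you when you click through and hit submit.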
It's just a matter of, you know, hitting the right endpoints to store our acceptance within the wallet, and then ACA-Py ensures that it is included on all transactions thereafter. So when I click on the DIDs tab, it pulls up a pop-up here that says using this module requires signing the transaction author agreement. If we click continue, we then see the text of the transaction author agreement. I can have fun reading through the agreement there; I shouldn't mock it too much, it is an important agreement. You should read these and understand what you are agreeing to when writing these values to the network. Basically, it boils down to: you will not put anything on the ledger that's inappropriate to put on the ledger, such as personally identifying information. So when we've read through that, we can accept the terms of the transaction author agreement and hit submit. And now we are ready to interact with the DIDs tab in the toolbox. There are three DIDs that already exist within our agent. These DIDs represent pairwise connections: these are the DIDs that were created in order to represent the connection with the toolbox, our first connection with Bob, and then our second connection with Bob. We actually deleted the connection value itself, but the DID is still present and available in case we use it for other purposes. In the future these will be explicitly peer DID values; at the moment they are something resembling peer DIDs. They are values that are not written to a ledger anywhere, and there's a deeper discussion on that that I won't get into at the moment. That's why we have three DIDs available here within my view. We will then create a new DID, and we will assign a label to it just so we can easily identify it as the one that we've written to the ledger later on. So I'll label this one as "public". You can explicitly set a DID value.
This is generally not an encouraged practice; it will actually be explicitly prohibited in future iterations of the DID method specification, especially as we transition to the did:indy method as a community. But it is possible. You can also specify a seed, similar to how Lynn was talking earlier about creating a DID and specifying a seed when using the indy CLI. She might not have talked about that, actually; I might be remembering wrong. But you can specify a seed, and this allows us to recreate the DID in a separate wallet later on if we need to, which is sometimes a useful feature, especially as we bootstrap new issuers on a network. I will leave the seed empty, however; we do not need to deterministically generate this DID using a seed value. So next I will hit create DID, and then our DID pops up in the list. We can expand these list items, and we see that this one has a DID value, metadata with a label of "public" associated with it, and then a verkey value. Okay. So here we've created a DID, but it currently exists only within the wallet; it has not been written to the network yet. In order to get it onto the network, we require the assistance of another entity that already has permissions to write to the network, which in this case will be the Indicio self-serve application. The link to this is in the handout if you're following along with me; I'll also paste it in the chat here. This is where we can take our DID and our verkey and get them submitted to the ledger with a role that allows us to write our subsequent transactions ourselves. So I'll copy the DID value and paste it in here, and I'll copy the verkey value and paste it in there, and then we hit submit. "We're not seeing the interface. I don't think the tool that you're talking about, the self-serve, is showing." Interesting. I wonder why that is; one second and I will rectify that. "Did it pop up? Now we can see it. Now it went away." That's really strange.
Okay, just a moment. I will bring up a separate browser and give that a try. I'll make that a little bit smaller, and I will get that URL placed in here. Thanks for the heads-up on that one. Yeah, so here is the self-serve page. Trying to hit that corner — there we go. I just did this on my other window, which isn't showing up for whatever reason, but I will mime as though I were submitting it anew. Just got to clean up my windows here a little bit. Okay, there we go. So what I did was I took this DID value and pasted it into the DID field, and this verkey value and pasted it in there. And there we go — I hit submit in my other window and it was submitted. I think if I hit it again, it would tell me that the verkey was already found on the network, so it probably wouldn't be too happy with me; I'll refrain from hitting the submit button. But we have now successfully written our DID to the Indicio test network using the self-serve tool. Now that we've written that DID to the network — and since we did it out of band using the self-serve tool, we don't actually have any explicit indication back on the agent that it's ready for us to activate — we need to trigger the agent to activate the DID, so it will use that DID for our future ledger writes. There's a drop-down here that we can use to select the DID to use as the public DID. It would have been helpful if it used the label instead of the actual DID value — I guess that's a feature request for the toolbox — but I know which one it is, so I'll make sure that is selected as our active DID. Right. I think we are ready to move on to our next step, so I'll pass it back to Shar for a little bit more from her. Great, thank you Daniel. So the next step is to write or retrieve a schema. A schema is the data organization structure that defines the attributes in a credential.
In an employee credential, for example, the attributes in the schema might include last name, first name, address, security clearance level, date of hire, things like that. On the right this shows what a schema actually looks like: it has a schema name, version, and ID, and shows the attributes; in this simple example we just have first name, last name, age. The schema ID includes the DID under which it was created, the 2 identifies it as a schema, and then it has the schema name and version included in that. After we've written a schema to the ledger we can see it on the ledger using the IndyScan tool that Lynn talked about earlier. So if you are retrieving a schema, this is where you'd go to retrieve a schema ID in order to use it to issue a credential. From the schema step, the next step is to use that schema to create a credential definition. A credential definition creates the issuer-specific cryptographic keys necessary for issuing a credential. The schema is a collection of attributes in a credential, but it's not associated with any entity. The credential definition is what binds a schema to a specific issuer, so that when the holder is presenting their credential, the verifier can see that this credential was issued by that specific issuer. Creating the credential definition generates cryptographic keys for each attribute, with a public and private portion. The public portion is stored on the ledger with the schema; this is what allows the verifier to verify that the issuer issued this credential. The private keys are maintained by the issuer, and the issuer uses these private keys to sign each attribute. You might wonder why we need a key pair for each attribute, rather than one key pair for the credential as a whole. A little earlier, I mentioned that we are talking about AnonCreds here, and AnonCreds supports selective disclosure, which is when I present only some but not all of the attributes in a credential.
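The schema ID layout just described can be pulled apart mechanically. A small sketch — the example ID below is illustrative, not one read from a live ledger:

```python
def parse_schema_id(schema_id: str) -> dict:
    """Split an Indy schema ID of the form <did>:2:<name>:<version>.
    The literal '2' marks the transaction type as a schema."""
    did, marker, name, version = schema_id.split(":")
    if marker != "2":
        raise ValueError("not a schema ID: " + schema_id)
    return {"did": did, "name": name, "version": version}

# A made-up but correctly shaped schema ID:
parsed = parse_schema_id("WgWxqztrNooG92RXvxSTWv:2:employment:1.0")
print(parsed)
```

The same split works in reverse: joining a DID, the literal "2", a name, and a version with colons produces a valid schema ID shape.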
And so we need to have keys for each attribute because — and we will demonstrate this — I might choose to reveal only a subset instead of all of the attributes in a credential. That's why each attribute needs its own individual key pair. We can see on the left what the credential definition looks like. We see that it has a credential definition ID. It shows the schema ID of the schema that it was created from, and the attributes that we entered when creating the schema. The credential definition ID includes the DID of the issuer; the 3 identifies it as a credential definition; the CL refers to Camenisch and Lysyanskaya, the creators of this digital signature scheme; and then it also includes the schema name and version. Here, the DID within the schema ID and the credential definition ID is the same, because it was the same entity under the same DID that created both the schema and the credential definition. If we retrieved a schema from the ledger created by another issuer, we would see that the credential definition ID begins with our DID — the DID of the agent that retrieved the schema and made it into a credential definition — but the schema ID included in that credential definition will still include the DID of the creator of the schema. And yeah, this goes just to really emphasize what we've said earlier about how no PII goes on the ledger. This is the information that the issuer writes publicly to the ledger: the issuer's DID, schema, and credential definition are essential to issuing a credential. What they provide can be thought of more as a blank template, and that is what is publicly available. The actual values — your personal information that you would fill into that template — are never available on the ledger. So with that background explanation, I will pass it back to Daniel for a demo of creating a schema and creating a credential definition. Cool. Thank you.
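The credential definition ID can be pulled apart the same way as the schema ID. A hedged sketch: the four-part layout and the "3"/"CL" markers follow the standard Indy format, and the fourth field on the ledger is the sequence number of the schema transaction (tooling often embeds the schema name and version in the tag, which is how both end up visible in the ID; the example ID itself is made up):

```python
def parse_cred_def_id(cred_def_id: str) -> dict:
    """Split an Indy credential definition ID of the form
    <issuer did>:3:CL:<schema ref>:<tag>. The '3' marks a credential
    definition; 'CL' names the Camenisch-Lysyanskaya signature type;
    the schema ref is the ledger sequence number of the schema txn."""
    issuer_did, marker, sig_type, schema_ref, tag = cred_def_id.split(":")
    if marker != "3":
        raise ValueError("not a credential definition ID")
    return {"issuer_did": issuer_did, "signature_type": sig_type,
            "schema_ref": schema_ref, "tag": tag}

parsed = parse_cred_def_id("WgWxqztrNooG92RXvxSTWv:3:CL:1234:default")
print(parsed)
```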
And then we will return to our Alice agent window here, and we are going to get acquainted with a new tab on the left menu: credential issuance. This is another one of those windows where, if you hadn't already, this would have triggered a transaction author agreement pop-up to make sure that you've accepted the TAA before proceeding to engage in operations that would result in a transaction being written to the network. We then have the ability from this view to create schemas and credential definitions. So starting off, we can create schemas as just described. We create a schema with any name that we choose; let's go with an employment schema as our example today. Then we can click on the attribute button to add additional attributes to the schema. This is where we define what attributes we expect to find within credentials issued with the schema. So let's say we want to include first name. We want to include last name. Let's go with date of hire. And let's say that you need a clearance level in order to access some building on this organization's campus, so we'll have a clearance level included in this employment credential as well. Cool. So with these values, we'll have four separate attributes. These names are significant in the context of the AnonCreds credentialing ecosystem: these are the values that the verifier will request from a holder in order for them to prove these attributes. So it's important that we do our best to reuse schemas so we don't have, you know, a million different iterations of first name — first_name, firstName, maybe a value called first — that all represent the same thing. Localizing around schemas that fulfill the use case that we need, and then allowing reputation to be developed around these schemas, helps us know what attributes to request as a verifier.
It's also possible, though not presented within this UI, to specify multiple acceptable names for the attributes we're expecting. So if we did know of several credential types that had multiple acceptable attribute names for the same information, essentially we could say that this could be represented as first name without the underscore, first_name, or given name, or whatever it is. And then we can further restrict those values by specifying which credential definition ID we expect these to be from. This is less of a problem in an ecosystem where you are both an issuer and a verifier — you know what attributes you issued, so you know what attributes to ask for when you're verifying. But again, this also gets into the concept of having an established governance framework: you know what attributes are out there for your use case and have made use of existing schemas on the network to help you accomplish your goals. Okay, that's enough rambling about the schema. I will go ahead and hit confirm, which will cause a schema transaction to be created and then submitted to the ledger. And then we see a record of that schema that we've created available to us here. Now that we have a schema, we can create a credential definition. If we click on the create button in the credential definition section here, we can select employment 1.0 — that's the schema that we just created. We have the option to do revocable credentials; I won't click through those this time around. Again, revocation is a whole other discussion that we could have, so I'll reserve that for future opportunities to meet and discuss that functionality of AnonCreds. We'll keep it simple for today. And then from here, all I have to do is hit confirm after specifying the schema. This one takes a significantly longer time to write than the schema did, and there's a couple of reasons for that.
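Under the hood, these button clicks become calls against the agent's admin API. A rough sketch of what the request bodies look like — the field names follow ACA-Py's admin API, but treat the endpoint paths in the comments as assumptions that may vary by version:

```python
import json

def schema_body(name, version, attributes):
    """Body for creating a schema (e.g. POST /schemas in ACA-Py)."""
    return {"schema_name": name,
            "schema_version": version,
            "attributes": list(attributes)}

def cred_def_body(schema_id, tag="default", revocable=False):
    """Body for creating a credential definition
    (e.g. POST /credential-definitions in ACA-Py)."""
    return {"schema_id": schema_id,
            "tag": tag,
            "support_revocation": revocable}

employment = schema_body("employment", "1.0",
                         ["first_name", "last_name",
                          "date_of_hire", "clearance_level"])
print(json.dumps(employment, indent=2))
```

Posting the schema body returns the schema ID, which then feeds into the cred def body — the same two-step order the UI walks through.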
As Shar mentioned, there's an individual key pair for each attribute within the credential definition being created. And in the CL signature scheme, these are RSA-like keys, which — for those who are not cursed with this knowledge — are built from large prime numbers, values that just take more effort to generate than elliptic curve cryptography does. So those are our large random numbers being generated, representing our RSA-like key pairs for each of those attributes within the credential. I should clarify — I've gotten into the weeds, so I apologize — but the signatures used for these credentials when they are actually issued are not RSA signatures. These are the CL signatures that are critical for the ability to do selective disclosure and, you know, private holder bindings with anonymous credentials. So we use RSA-like key values, but with a different signature scheme. Okay, so we've walked through creating our own schema. I will briefly take a look at what it looks like to retrieve a schema from a network as well. We're using the Indicio TestNet, so I'm going to click on that, click on the domain sub-ledger, and then I'm going to filter by schema. And I can see the schema I just created, plus schemas somebody else must have created. I'm going to come in and grab a random schema ID here. So I just selected one — no idea who's written this. But then if I scroll down there's a transaction ID section; this might be somewhere else in the text, I just know that I can find it there — again, it could have been up here somewhere. And it was right there. So we select the transaction ID, and then from the toolbox we can put that into this text box here that is on the same line as the schemas. And then I can hit retrieve. And similar to the ones that we created ourselves, we see the schema up here inside of our wallet.
So these are just local representations that are useful for being able to quickly recall the attributes associated with the schema when we're doing the actual credential issuance — making sure that we populate all those attributes and such. But yeah, this is a local representation of the value that was published to the ledger by some other party. From that schema, after retrieving it, I can similarly create a credential definition, just by selecting it and hitting confirm. Again, it takes a second as it creates those keys. Sometimes, if it doesn't show up even after a sufficiently long time, you can hit that refresh button and most of the time it'll show up in that list. And if not, then there might be some ledger permissioning issues — you might have forgotten to write your DID to the network, for instance. So that's an important step to remember. All right, so we've created a schema and a cred def for both a schema that I've written and a schema that I've retrieved. Let's do one more quick schema just for the sake of demonstration, and for repeating these steps for anybody that's following along at home. Let's do a bit of a joke one this time around: table tennis tournament win, sure, version 1.0. We could include first name and last name, but those we could pull from another credential held by the same holder — and we can get into that in a moment. So I think all I'm going to include on this one is, I don't know, division — division one, whatever that might be. I'll make this a very simple schema with just the one attribute. Then I'll create a cred def from that schema. It should show up here any second. Okay, had to hit refresh that time. That's cool. Okay, and now we have a cred def available for that table tennis tournament win — inadvertently made a tongue twister there. Okay, so we are now ready to move on to our next steps, so I'll pass it back to Shar to walk us through those. Thank you.
If I won a table tennis tournament, I would certainly want a verifiable credential to be able to prove it. Okay, now we are ready to actually issue a credential. To orient ourselves back in the diagram that we're building up: now that we've written a DID, schema, and credential definition to the ledger, we've done everything we need to do with respect to the ledger. Now we can actually issue the verifiable credential to the holder. Just to review what a verifiable credential is: it is a signed statement with attributes about a subject; it contains metadata about the credential, attributes about the subject, and proof — the signature from the issuer. We will now look at the message flow that occurs between the issuer and the holder in order to issue a credential. A quick note that the language here is a bit awkward and not how people would naturally talk about this interaction, but this is the language used in the messages that are sent back and forth, and we wanted to show the developer language around this flow. Like Daniel mentioned, we want to leave the guts exposed a bit to demonstrate what's going on. This interaction can be initiated by either the holder or the issuer. If the holder is initiating, they can propose a credential; the issuer in response would send an offer credential message back. Alternatively, the offer credential message could be the start of the interaction. Either way, the flow is the same regardless of who initiated. And this is actually the offer credential message that we looked at way back when we were first talking about DIDComm. From there, the holder sends a request credential message, then the issuer sends an issue credential message. Once the holder receives that, they send an acknowledgement, and once they send that, their side of the interaction is complete. And once the issuer receives that acknowledgement, their side of the interaction is complete.
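The message flow above can be sketched as a tiny state machine. This is a simplified illustration loosely mirroring the issue-credential v1 message types, not a complete protocol implementation:

```python
# Simplified legal message order in the issue-credential flow; the flow
# can begin with either a proposal (holder) or an offer (issuer).
TRANSITIONS = {
    "start": {"propose-credential", "offer-credential"},
    "propose-credential": {"offer-credential"},
    "offer-credential": {"request-credential"},
    "request-credential": {"issue-credential"},
    "issue-credential": {"ack"},
}

def run_flow(messages):
    """Walk a list of message types, failing on out-of-order messages."""
    state = "start"
    for msg in messages:
        if msg not in TRANSITIONS.get(state, set()):
            raise ValueError(f"{msg!r} is not allowed after {state!r}")
        state = msg
    return state

# issuer-initiated flow, ending with the holder's acknowledgement
final = run_flow(["offer-credential", "request-credential",
                  "issue-credential", "ack"])
```

Note that both starting points converge on the same offer → request → issue → ack tail, which is why the flow is the same regardless of who initiated.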
And so now we'll turn it back to Daniel for a demo of actually issuing the credentials. Alright — a brief question from Matthias (I don't know how to pronounce your name, I apologize). This might have been said in jest, but I think there's a real answer to be had here. The question being: is there a step before issuing the credential that involves the table tennis tournament oracle? Usually when we issue a credential — and especially as we go through our demonstration today — we've omitted a few of the key steps in the issuance of a credential that most issuing bodies would go through in order to actually validate that the information they're putting into that credential is authentic and real. Before you can get a driver's license, you have to first present, like, your birth certificate or something. So there's a vetting process that takes place in order for the actual issuing organization to make sure that the credential they're issuing is valid. And it's up to the issuer to ensure that they've implemented a rigorous enough process to validate that information. However, it is also valid to point out that any issuer has the ability to issue any attribute. So it's important, when you are requesting verification from the provers in your ecosystem, that you make sure you are validating that the credential was issued by an organization that you trust. And we can get into a little bit more detail on that as we start talking about the verification steps. Okay, cool. So we have all the necessary pieces in order to start issuing credentials from Alice to Bob: we're connected, and we've got that schema and cred def. So it's just a matter of pushing a few more buttons here. Before I do that, let's do a quick introduction to the my credentials tab in Bob's window here. This is the receiving end of the credential issuance section, from Bob's perspective.
So Bob, when he receives that credential, should see that it shows up in his list of credentials here. We also have a credential definition retrieval mechanism here; this looks similar to the schema retrieval on the credential issuance side. This enables the holder to send a credential proposal to the issuer. Resolving the credential definition is not strictly a requirement; what it's doing is just getting the values associated with a credential definition into the wallet, into a place where the toolbox UI can say, in a credential proposal: these are the values that I would like in this credential, and I'd like it to be from this credential definition. So that was the beginning of the flow that Shar went through, where the holder starts the interaction. We won't be demoing that today, but it is possible for the holder to start the interaction using that proposal. Okay, so to actually issue this credential we will start from Alice's side. Back on the credential issuance tab, we will click the issue button on the issue credentials list. We will select a connection. We will select the credential definition — I've got three available to me, since I've written three different credential definitions. Let's start off with our employment credential. We'll say the date of hire was today — the date Bob was hired, I guess — and you can say congrats in this optional comment on the issue credential message. I'm representing the date value as a string. This has significance when it comes to the usage of predicate proofs; I'll talk a little bit more about why this format for the string actually isn't ideal. I'm just doing it so I can easily put a value down here. For the clearance level, I will make this a little bit more prepared for doing predicate proofs.
Let's say there are four different levels, and each level is represented with an integer value, starting from zero and going to three. So we'll say that Bob has a clearance level of one, which means he's got basic clearance — one step up from no clearance. So I'll go ahead and issue that credential; I think I'm done. Yeah, hit confirm. We'll see the value pop up here on Alice's side; we'll hit the refresh button on Bob's side; and we see the issued credential on either end of the exchange. If we inspect the credential value, this helps us see a little bit more of what actually took place between Alice and Bob, if you'd like to dig a little deeper into the protocols at play here. For instance, the final state of this record of the issued credential is credential_acked. This is the completing state for this credential, and represents that we received acknowledgement back from Bob at the end of the exchange that he has received that credential. We have a record of the cred def ID and schema ID that were used to issue the credential. We also have the attributes that were issued, and we can dig into the raw developer values here as well. For instance, you can see the raw credential offer message that was sent to Bob from Alice, which includes the credential preview, which is where the attributes are actually located. And then from the recipient end, we see a similar set of information available to us. So again: state credential_acked, the cred def and schema ID, the attributes that were issued. We also see an additional value in here in the record, which is interesting, so I'll bring it up: the actual raw credential value. This is what gets stored into the wallet so that we can later fulfill presentation requests with this credential. This credential storage is separate from the record that represents the exchange between Alice and Bob — just as a side note.
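The credential preview mentioned above is where those attribute name/value pairs travel, and it's also where the string-versus-integer choice is made. A minimal sketch — the message-type string is abbreviated/illustrative, and the attribute values are the made-up demo values:

```python
def credential_preview(attrs):
    """Attribute values in AnonCreds credentials travel as strings.
    An integer you intend to use in predicate proofs later should be
    issued as the plain decimal string of that integer, not as a
    formatted date or other human-readable text."""
    return {
        "@type": "issue-credential/1.0/credential-preview",  # abbreviated
        "attributes": [{"name": k, "value": str(v)} for k, v in attrs.items()],
    }

preview = credential_preview({
    "first_name": "Bob",
    "last_name": "Builder",           # made-up demo value
    "date_of_hire": "2021-10-19",     # fine for equality, bad for predicates
    "clearance_level": 1,             # becomes "1", comparable in a predicate
})
print(preview)
```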
So the exchange record — the values that track what our state is in the protocol — can be thrown out by either side of the exchange. It doesn't matter; it's not necessary that we keep all this information around. What matters is the actual raw credential value here, which includes, you know, values and signatures — big values significant to the CL signature scheme. All right, let's see. I will issue one more credential, just to walk through this process one more time. So, from Alice to Bob: I will again select Bob as the connection; this time let's go with the table tennis tournament win. And we'll say this was in — I don't even know what division represents in this case, so I'll just come up with something. We'll say it was a division within a company; let's go with that. I will say it was on the tech staff that the tournament win was made. We'll omit the comment, and we'll go ahead and issue that credential. And then we review our values here on Bob's end; we see the table tennis tourn— can't say it, I'll skip it. We received the credential. So now, I think we are good to talk about our next steps. As a brief reminder, this information will be in the slides that will be shared out, so if you want to step through again later, there will be screenshots in the slides for any steps you want to repeat. So with that, I'll hand it back to Shar. Great, thank you Daniel. So, now we are moving on to verifying a credential. Whereas the issuer had to decide which credentials to issue to whom, the verifier must decide who and what to trust. To decide this, they interact with the holder, they read from the ledger, and they perform cryptographic computations to verify that the attributes presented were issued by the specific issuer. An example of a verifier might be the airport, or the bank, or a store — anywhere you need to prove information about yourself.
What the verifier does not do is interact with the issuer; the verifier is only able to read the information written on the ledger by the issuer, and it's that indirect interaction that provides the trust. For the sake of simplicity, we're having Alice play the role of both the issuer and the verifier in this demo. Roles are not set permanently between parties — the role is unique to the interaction — so that works just fine. It's also entirely possible in practice that the issuer is the verifier: if an employer issues an employee credential, they would probably want to be the verifier as well in some cases. But even when they are different, the verifier and the issuer do not interact, and they cannot gather or exchange information about the holder without the holder's consent or knowledge. So back to this diagram that we've seen before: we're now moving to the right side, the verification side. The presentation must be created, and then the verifier reads the DID, schema, and credential definition from the ledger and verifies the attestations. And now we have built this diagram to completion. Here again we will run through the flow of messages necessary to verify a credential. And again, either side can initiate this interaction: with a propose presentation message from the holder, or a request presentation message from the verifier. The holder can choose to approve that request and send a presentation in response. Then the verifier will verify that presentation and send an acknowledgement. The holder will receive that, and then both sides of the interaction are complete. Before I turn it back to Daniel for the demo, I will talk about the different types of presentations. The simplest example is a full disclosure presentation, in which all attributes are revealed from one verifiable credential. In the analog world, the equivalent is when you take out your passport at the airport to get through security: you're showing all of the information on that passport.
Another type of presentation is the multiple credentials presentation, where you reveal all the attributes from two or more credentials. Another example in the analog world: when I got a library card after moving, I had to present a photo ID and a piece of mail proving my address, and revealed all of those attributes. Moving along to more complex types of presentations — these show off the important privacy features of AnonCreds. There aren't really great examples of these in the analog world, because they are unique to this digital interaction. Selective disclosure is when only certain required attributes are revealed from one or more credentials. So instead of revealing the entire contents of a credential, you reveal only a subset of the attributes. The predicate proof presentation is when only a true/false value is revealed. A common example is age: the verifier doesn't need to know your date of birth, which is a very important identifying piece of information, if all they need to know is whether you are younger or older than whatever threshold they need to make a decision — whether you can enter the venue, or get a senior discount, or whatever the case may be. So this is a great way to preserve privacy while verifying necessary information. And lastly, revocation is the ability to revoke a credential — for example, if a credential expires or needs to be updated, or the holder is no longer eligible for whatever reason. And so with that, I will pass it back to Daniel for a live demo of the different types of presentations. Before we get into it, I'll acknowledge that we are actually at time, so I apologize — we've run a little bit behind schedule to this point. If you have to drop, I totally understand. We will stick around and continue to finish the rest of the presentation, perhaps a little bit more accelerated, so those who would like to stick around can do so with a little bit less burden.
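The age example above maps directly to a proof-request predicate: instead of asking for the birthdate, the verifier asks the holder to prove the birthdate is on or before a cutoff. A hedged sketch in the AnonCreds proof-request shape — the `birthdate_dateint` attribute name (a YYYYMMDD integer) is a common convention but an assumption here, not something from this workshop's schemas:

```python
from datetime import date

def age_predicate(minimum_age, today):
    """Build a predicate clause asking the holder to prove
    birthdate <= cutoff, i.e. age >= minimum_age, without revealing
    the birthdate itself."""
    # someone born on or before this date is at least minimum_age today
    cutoff = today.replace(year=today.year - minimum_age)
    return {
        "name": "birthdate_dateint",   # assumed YYYYMMDD-integer attribute
        "p_type": "<=",
        "p_value": int(cutoff.strftime("%Y%m%d")),
    }

pred = age_predicate(18, date(2021, 10, 19))
print(pred)
```

The verifier learns only that the inequality held, which is exactly the true/false disclosure described above.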
These steps, again, will be in the slides, so if you've got to drop and you don't want to watch the video later, the slides will have a walkthrough of these last few steps as well. I'll proceed here slightly quicker than I have previously. From the Alice end, we click on the verification tab. And from here we can request a presentation from one of our connections. We will select Bob. (We've got somebody off mute — there we go.) All right, let's see. We can assign a name to the presentation request; let's keep it simple and do a multiple-credential disclosure: a proof of table tennis tournament win. So we will request the attributes that we issued — I think we gave it a division — but we might also want to verify that the person who won the table tennis tournament is actually an employee of the organization, say, so they can get a prize or something from the company. So we can ask for first name and last name, and we can ask for date of hire as well. We can apply restrictions to these values that we are requesting in this presentation request. These enable us to say: I want this attribute from this specific individual — or rather, from this specific issuer — and be able to cryptographically require that as part of the verification process. This is the scenario in which we actually need to determine whom we trust. The ice cream truck that I frequent down the street might have given me a rewards card that has my name on it, but as an organization we're unlikely to trust the verification processes that the ice cream truck went through in order to issue that credential to me. So we want to make sure that we are using trusted issuers that are appropriate for our use case, for our governance framework, and so on. For this one, I'm going to require that the division came from the table tennis tournament credential, and this one from the employment credential — same with this one, same with that one.
And hit confirm. Okay, and we saw that it went from unverified to verified as the exchange took place between Alice and Bob in the background. We can then examine these values in more detail. The revealed attributes are presented to the controller — the toolbox in this case — and it is using these revealed values that we would then take our next step in our business logic, whatever operation is going on. Just because a value is verified does not necessarily mean that we're ready to act on it; we might want to make sure in our business logic that the division was one of a known set, or something like that. So this isn't necessarily the end — there's more to do after that value has been passed up to your controller in order to trigger the next step in some exchange. I will now do a quick predicate proof, just to demonstrate that real fast. Let's say Bob is attempting to access a building on his employer's campus. We will ask for proof of clearance, and we will request a predicate on clearance, and we say that it must be at least the basic clearance level — so we'll put in the value that represents basic clearance, specify the restriction, and hit confirm. And we can see that the presentation has verified successfully. We can view these presentations on Bob's side as well; I won't get into the details on that. This verified: true statement shows that the predicate proof was met, and the representation of that predicate can be seen there. So that brings us to the end of our hands-on demonstration. Thank you, everybody, for your participation. I'll hand it back to Shar for some concluding thoughts. Thank you Daniel. To look briefly back at what we've covered: we've discussed agents, Aries Cloud Agent Python and the Aries Toolbox, and DIDComm messaging; we've demonstrated how to connect agents and issue and verify credentials between them. We also, way back, covered an intro to decentralized identity and the Hyperledger Indy project.
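What was just clicked together in the toolbox roughly corresponds to a proof request like the following sketch — AnonCreds proof-request shape, with a placeholder cred def ID and illustrative referent names:

```python
def clearance_request(cred_def_id, minimum_level):
    """AnonCreds-style proof request: two revealed name attributes plus
    a >= predicate on clearance_level, all restricted to a single
    credential definition so only the trusted issuer's credential
    can satisfy it."""
    restrict = [{"cred_def_id": cred_def_id}]
    return {
        "name": "proof-of-clearance",
        "version": "1.0",
        "requested_attributes": {
            "attr_first_name": {"name": "first_name", "restrictions": restrict},
            "attr_last_name": {"name": "last_name", "restrictions": restrict},
        },
        "requested_predicates": {
            "pred_clearance": {
                "name": "clearance_level",
                "p_type": ">=",
                "p_value": minimum_level,
                "restrictions": restrict,
            },
        },
    }

# placeholder cred def ID; a real one comes from the ledger
req = clearance_request("WgWxqztrNooG92RXvxSTWv:3:CL:1234:default", 1)
```

The holder's response reveals first and last name but only a true/false answer for the clearance predicate — which is exactly what the "verified: true" record on Bob's side reflects.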
This is the end of the Aries section. Thank you all for listening and engaging, for following along and asking questions. I will now pass it over to Sam for a final word on getting involved. I just want to encourage everyone — you're hearing from me again about community involvement, but we'd love you to be involved. If you're interested, come and learn. If you're able to contribute, that's wonderful. Ask questions; reach out to any of us if we can help, and we'd be happy to do so. We've got links in the channel, and the Discord server is a great place to get started, as is joining our meetings. Thank you for coming today. I hope you learned something; I hope this was interesting; and we hope to be able to help you in the future. Thanks so much. Thanks everyone for joining, and thanks to everyone at Indicio for running this — I think this went great. Bye all.