Alright, awesome. Well, we are really excited to be doing this workshop today. Thank you to everyone who was able to join us. This workshop is titled "Build Your Identity Solution Using Hyperledger Aries." We're excited to have a great group of moderators and instructors today, and excited to talk about the Hyperledger projects that have made production-level decentralized identity solutions possible in various use cases across the globe. For introduction, my name is James Schulte, and I do business development here at Indicio. I personally help companies from around the world and from various industries learn about Hyperledger Aries and Indy, and how they can be used to solve complex business and social challenges using decentralized identity, which is oftentimes referred to as self-sovereign identity, or SSI. I'm sure you've heard a mix of those terms; there are a couple of other terms for it as well, and today you might hear a mix too. This technology is an open-source protocol stack that enables these solutions to work, and the work that the open-source community is doing with these projects is playing an important role as more and more enterprises and governments move toward decentralized identity models. Today we're lucky to be joined by some of the world's leading digital identity engineers who specialize in these protocols, and you'll get a firsthand walkthrough of how you can use these tools to build your own decentralized identity solutions. As mentioned a few minutes ago, for technical support and questions throughout the workshop, you can drop your questions in the Aries channel on the Hyperledger chat (I think it was previously called the Hyperledger Rocket.Chat). We have multiple Indicio team members watching the channels there, and we'll get to you as soon as possible.
If for some reason we're not able to answer your questions right away, they will remain there on the channel and we'll come back to them as soon as possible to help you as you work on your own projects. As I said, prior to today's workshop you should have received a set of prerequisites and instructions to get your machine set up to participate in the hands-on portions of the workshop. If you've not done this, don't worry; we're happy to have you here, and feel free to follow along and watch as our instructors do the demonstrations. As mentioned, today's recorded session is going to be available on the Hyperledger Foundation's website, along with the instructions and links to all of today's materials, so feel free to go back and access that at your leisure. Today's session should be a valuable resource to your team as you work on your digital identity solutions. And with that, I will hand off to Scott Harris, who will be leading the first segment of today's workshop. Thank you, James. Great preamble there to set up a really productive day of education and discussion. This portion of the program is called "What is decentralized identity?" It's sort of a "why are we here doing this" in the very big-picture sense. For those of you who are maybe newer to the space, this should provide the philosophical groundwork that says why this technology even exists, why it was created, and what values are going to be derived from it. For those of you who have been in the space, or circling around it a little longer, it may be a bit of a review, but I think it's good to level the playing field before Lynn, Daniel, and Sam get into the technical content, so it will give some context to everything they're talking about today. My name is Scott Harris. I'm the VP of Business Operations at Indicio. I wear a lot of hats, as you can see by my profile picture; I'm probably the kind of guy that likes hats.
One of my roles, in addition to helping Indicio work effectively internally, both between our work groups and with our clients, is to do some problem-solving on the use-case side of things. I work with our clients to help articulate the application of the technology and help them derive business value from it very quickly, and we've been quite happy with how that has turned out. So without any further introduction, here's some background: why are we here, the internet and identity. This is a very old cartoon, I think from The New Yorker a long time ago, and it's still applicable today. The truth is, the internet was created without a way of identifying who's on the other side of that computer or device. In some cases, in many cases, anonymity is desirable; it does help protect our privacy. But I'm sure we're all aware of the ways that anonymity can be quite dangerous, and also quite difficult for businesses, who can't execute some of their functionality if they don't know who's on the other end of the line or who's communicating with them. We'll talk about that on the next slide. The main reason we have identity systems at all, usernames, passwords, things like that, is to establish some sort of trust. The trust we establish, however, only goes as far as the work that's put in at the entry point to that trusted ecosystem. So if you have a LinkedIn account and you signed up with an email, well, anybody can sign up with an email and pretend to be anybody they want. And if you say, "Okay, I'm using my Gmail address to sign up for LinkedIn," the question I would ask is: what did you do to get that Gmail address? The answer, of course, is absolutely nothing. If you're like me, you probably have 10 to 15 email addresses floating around for various purposes. Again, that can be great for preserving some of your privacy and protecting some of your information, but as a way of identifying who you are, or some attribute about yourself?
It's very untrustworthy and not very useful. So we need something better. To talk a bit about our terminology: we've already said the word "identity" probably 20 or 30 times today, so I think it's worth looking at what we mean by it. Identity comes from a lot of different sources. This is sort of how the world looks today: your identity comes from a pie, if you will, where each portion is generated, and often held, by these entities that give you some piece of your identity. What we really mean by the word "identity" goes beyond the common definition, which would be your name, your address, your date of birth, eye color, height, weight, those sorts of things, and broadens out to be defined as a set of data that refers to a particular data subject. That's a little bit of GDPR language, the "data subject." The rest of our group will talk today quite a bit about an identity holder as a data subject and refer to that as a person, but it doesn't have to be a person: when you think about it, devices and IoT things have identities that are issued to them, and enterprises have identities of their own. Everything that has any digital footprint has an identity in that regard, and it doesn't necessarily need to fit the traditional definition of identity. The problem with identity right now is that it's largely centralized, and what we mean by centralized is that data is controlled, shared, and handled primarily by third parties. Digital identity is very difficult to control. Of course, we know that everywhere you go digitally there's data that follows along with you, data you generate by virtue of a relationship, and that data is not within your control. We can't completely solve for that, but we can solve for some of the problems that derive from it, and those problems are consent, compliance, and transparency.
So when we look at the centralized model, it looks something like this: a government might give you a traditional kind of ID, such as a driver's license, and often the digital versions of those sorts of things are shared between entities, outside of your control. Decentralization looks a bit more like this. It's really an oversimplification of what we mean by decentralization, but I think that's a good thing, because a nice simple picture helps us wrap our heads around what we're after. Decentralization breaks apart those relationships but puts the identity holder, in this case a person, at the center of things. There's a bit of contradictory language in thinking of a person at the center of their identity while calling it decentralization; of course we mean decentralization in the technical sense. When we break apart those relationships and keep those entities from sharing data behind our back without our consent, it may come at a price: in some cases we may give up some efficiency in order to maintain privacy, and of course privacy protection and efficiency are two of the things we're after. But this picture here, with the pie pieces broken apart, is really just a digital model of the analog world, where an entity may give you some piece of your identity, whether it's a government or a workplace. Imagine logging into your employer's website, your HR portal, and downloading a tax record or a pay stub. That pay stub constitutes part of your identity that was given to you by your employer. If you go to apply for a mortgage, you would download it and send it to the mortgage company; in the old analog world you would probably print it out and mail it to them, and now we're trying to do it digitally. So our goal here is to create a digital model of the analog world where you retain some control.
Imagine walking in with your driver's license to buy an age-restricted item. You don't hand them your driver's license and let them make a copy of it and say, "Hey, take it, keep it, do whatever you want with it, don't worry about it." But in the digital world, that is effectively what we're doing with our complete footprint as we go about our interactions and our online relationships. So we want to try to replicate, as best we can, the control that you have in the analog world. Now for a very brief history lesson on trust: where we've come from, where we are now, and where we're headed. In the analog world I'm speaking of, and I'll throw some dates out there, from about 3200 BC, when the first writing we know of was recorded, to roughly 1964, we had a system where it was really easy to trust physical documents, because they had some indicator, visual or tactile, of whether a document was authentic. We have notary stamps, raised seals, special paper, envelopes sealed with wax, registered mail. We have a lot of ways, when you receive a physical document, of saying: is this authentic? Do I know where it came from? But of course that's really inefficient, and we don't like inefficiency in our modern digital world. With that inefficiency, we also had a lot of privacy and control, so, not the greatest system overall. In our hybrid world, the one we're living in today, we've sacrificed quite a bit of trust to gain a little bit of efficiency. What we're doing now with those physical documents is snapping a picture of them with our phone and sending a JPEG instead of the real document, or scanning something, faxing something. We're gaining some efficiency, but of course what we're really sending along to that next party is just a digital representation of the document, not the document itself, and it's not trustworthy.
And of course, as I described, once it's handed over, our ability to control it is minimal, so we sacrifice quite a bit of privacy there. I recently went through a mortgage process and had to email a bunch of things to the mortgage lender, and it darn near gave me a heart attack thinking of all the opportunities for loss of privacy. But of course I did it, because there's really no other way that's reasonably efficient. So in the decentralized world, as we build out some of the tools that you're going to see and hear and learn about today, we should be able to gain back almost all of that trust, keep the efficiency, and maintain our privacy. So why does that analog model work well? As I said, we have ways of verifying the attributes that come to you if you're the one receiving some information. It works because the person receiving it has visual and tactile ways of determining whether that driver's license is real: there's a hologram, it's fully laminated, and you haven't cut into it with an X-Acto knife, that sort of thing. Right now, the representations that we send along as pieces of an identity carry risks and costs, and that's the big reason we're here today: the business world has recognized that there's a massive cost to fraud, to losses and hacks and things like that. Imagine you're trying to verify who someone is, and I did it this morning, with a "let me verify your email" prompt. And of course, we all know that no one's ever given over control of their email address to somebody else. So it's really only as good as the actual system that you're using.
So what we're after here is trust in the data, trust that we don't have today, and that trust comes in two forms. It comes in the form of integrity, which is being able to identify whether the data is real, in other words, has it arrived to you as it was created and issued by the issuer, or, another way of saying it, has it been altered or tampered with? Physical documents have some really nice ways of showing whether someone has put white-out over the page to change their birthday; we want to be able to verify data integrity the same way. We also want to verify authenticity, and authenticity means: can I trace it back to its source? Do I know where it came from? If you have those two things, you can trust the data. Now, whether you execute a business decision based on that trust is really up to you as the person receiving it, but at least you can know that it is real and unaltered, and if it comes from a source you trust, you might let the person sharing it with you go further along in your business processes. Behind that trust is something we refer to as governance. I could talk for hours on governance, but we won't do that today, just a couple of quick slides. Governance comes in two layers that help us trust that data. At the bottom, and this is a lot of what you'll hear about today, is the cryptographic trust, which begins with the Hyperledger projects and the DIDComm protocols that help them along. Then there's the human trust: the contractual agreements and the legal framework that say, "We're going to issue a credential, let someone take possession of it, and here's the way it can be presented." In another view, I like to refer to these as cryptographic trust, all the things we're going to talk about today, and philosophical trust, which is really just a business decision.
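To make those two properties concrete, here is a toy sketch in Python. It is only an illustration: real verifiable credentials use public-key digital signatures (for example Ed25519) rather than the shared-secret HMAC used here, and the `ISSUER_KEY`, `issue`, and `verify` names are made up for this example.

```python
import hashlib
import hmac

# Toy illustration only: real credentials use public-key signatures,
# not a shared-secret HMAC. ISSUER_KEY is a hypothetical key.
ISSUER_KEY = b"issuer-secret"

def issue(data: bytes) -> dict:
    """Issuer attaches a tag that binds the data to its source."""
    tag = hmac.new(ISSUER_KEY, data, hashlib.sha256).hexdigest()
    return {"data": data, "tag": tag}

def verify(credential: dict) -> bool:
    """Verifier checks integrity (data unaltered) and authenticity
    (the tag traces back to the issuer's key)."""
    expected = hmac.new(ISSUER_KEY, credential["data"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"])

cred = issue(b"date_of_birth=1990-01-01")
assert verify(cred)                        # untampered: both properties hold
cred["data"] = b"date_of_birth=2010-01-01"
assert not verify(cred)                    # the "white-out" edit breaks integrity
```

In the real protocols the verifier never needs the issuer's secret; it only needs the issuer's public key from the ledger, which is exactly why the ledger matters.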
Oh, you'll have to forgive my word wrap. The participants in this trusted ecosystem are usually placed in this sort of triangular relationship, where we have an issuer of a credential here on the left, who gives some data, data about the data subject, their identity in some form or another, to a credential holder, who then takes control of it just as you would in your analog wallet, and who, when asked by a verifier for some data, shares it with consent and transparency. So we have issuers, holders, and verifiers, and then we have the verifiable data registry, or ledger, and the networks that support that type of data exchange. No more on that from me, because smarter people than I will tell you about that part of it. The thing about this from a privacy point of view, remember we want to maximize trust, privacy, and efficiency, is that the PII-type data, when we're speaking of humans as credential holders, stays with the data's owner or the authorized controller of that data. And if you're not talking about PII about a person per se, if you're a business enterprise with an identity of your own and you're exchanging information this way, it allows you to maintain control of sensitive business information as well. The ledger, which will constitute the bulk of our content today, pardon me, is just the means of verifying those two things I already described: the authenticity and integrity of the data. The data itself stays up here, kind of above the line, going from the issuer to the holder and on to the verifier. The question of whether to trust or not to trust is really not ours to make in the context of what we're talking about today. The verifier who receives that data
can trust that it is authentic and has integrity, but whether or not they trust the issuer is up to them. A very simplistic example: if I were to share, let's say, a driver's license to buy an age-restricted product, the verifier selling that product can verify that, yes, it has integrity, I haven't changed my birthday, and it came from the motor vehicle association. Okay, so now it's got integrity and authenticity, and they can do the head scratch and say, "I trust that motor vehicle association, so I'll let you buy that product." However, if I was issued, held, and presented as proof of age a Dunkin' Donuts or Starbucks driver's license, the verifier can still say, "Yes, you haven't tampered with it, and yes, it's authentic, it did come from Dunkin' Donuts, but no, we're not going to accept that as proof of your age, because we don't trust them as a source of your age." So to trust or not to trust is really a business decision in the end. Now, some quick use-case examples to tie what I've just said to the real world. In the case of issuers, we can think of, let's say, a government agency or entity that wants to issue you a credential that says, "Hey, you live here, within the scope of my domain." The first thing the government might ask you to do is validate your identity, either using a mobile app or even by showing up in person and saying, "Here I am, and here's my driver's license." You can see where they need that analog assurance of who you are. However, once they've validated your identity, they can take some more information from the holder's wallet or application and give you a government credential, in this case we'll call it a government residency credential, that says, "Hey, you do live here, we know who you are, we've got you in our database, which you've already given us permission to have you in, and we're going to give you this residency credential."
So now you might go to a bank's website, scan a QR code, and present your government residency credential as proof of who you are, with those data attributes that refer to you, the data subject, as presented to you by the government as an issuer. The bank can say, "Okay, let's request some of that data." You share it with transparency, and you share it with consent: you have to take some discrete action, often within the context of an application on your mobile device or in a cloud-based interface, where you say, "I consent to giving you this data." The bank says, "Hey, we can verify it, we can check that it has integrity, and we can check the source, and it is authentically from the government, and therefore you can open your account right now, in ten seconds, rather than having to come in and show your ID and show your face and all that sort of thing." Once that trusted data is received by the bank, and you've given them permission to have it, hold it, and use it, they can use it to issue you a derivative credential, perhaps one that says, "Here's your bank account holder's credential, it's in your device, and we're going to let you log in to your account using this, transfer funds using this, or open a second account using this, because we know who you are, and we have been able to delegate that trust from its original source." So, before we get into the bulk of the content, a couple of notes on the roles, the conventions, and some of the language we'll use. The code and architecture that we're going to talk about deeply, of course, center around the Hyperledger projects here: Indy and Aries. So you'll hear those words repeatedly; I'm sure you're familiar with them. We have some basic building blocks that go with the real-world execution of this. We'll have cloud agents, and that's a lot of what Daniel will talk to you about, while Lynn is going to talk to you about the Indy layer of it.
And we have edge agents, mobile devices, laptops, things like that, and mediator agents. If you're not familiar with all of that terminology, my goal here is simply to throw those words out so that you won't be hearing them for the very first time as you go through the more detailed presentations. I've spoken about issuers, holders, and verifiers, and those names will be used repeatedly. An issuer is a creator of a verifiable credential, which is given to a credential holder, the data subject or data owner, and then shared with someone who wants that data, a verifier. So issuers, holders, and verifiers are throughout our day today. And I will fix that real quick for you. We also use some real-world names, Bob and Alice; that's not new to the identity community or unique to it. Bob and Alice are thrown around in a lot of other use-case examples, but you will see Bob and Alice repeatedly here, sometimes as issuers and sometimes as verifiers. Roles often overlap. So an issuer may give a credential to a credential holder, and then Bob may act as a verifier of his own credential. Likewise, and I apologize, I didn't do a good slide review there, Alice may also act as a verifier of her own credential, or take Bob's credential and reissue from it. And a note on value: when we achieve the decentralized model that you saw earlier, trust, efficiency, and privacy, we are able to protect the data of individuals and be very compliant with things like GDPR, HIPAA, and those types of regulations. We use zero-knowledge methods to help businesses ensure compliance, in addition to making things efficient for all of their customers and end users. And so, with that, we've got a very broad basis in why we're here. Up next for you is our presentation on Hyperledger Indy networks by Lynn Bendixsen. I will let Lynn do his own introduction, and if you're ready, I'm going to stop my screen share and turn it over to Lynn. Excellent.
Thanks, Scott. Just a few housekeeping items before I get started. There will be a five-minute break after I'm finished with my 20-minute-or-so portion of our training today. And I'll call your attention to the handout here; I'll put a link in the chat again, because we'll be working from it during my session and copying things from it, or you can go to some of the links I'll be referring to. So if everyone can get their screens ready for the indy-cli hands-on part, which comes later in my presentation, now would be a great time. On to my slides: my slides are going to look a little bit funny for you, because I need to go back and forth and click on links and things for my part of the hands-on material, so it looks a little weird compared to how nice it looked for Scott's part. I'm going to be talking about some of the tools. Scott gave you the background information; I'll be going a little deeper and telling you more about the tools themselves. I'm doing this because we want to share some of the background for the terminology and a few other things before we get into the Aries part, just to give you an idea of how Indy works. A little bit about me before we dig too deep into tools: I'm the Director of Network Operations at Indicio. This is my handle on every channel that I know about, I think, so you can get hold of me there and ask me direct questions.
I should say "pending" here, I think, because it's not quite 100% official, it's almost official: in about two weeks I'll be the co-chair of the ToIP Utility Foundry Working Group, along with two other excellent individuals. "Utility foundry" means public utilities; Trust over IP wants to help people build networks through this working group. I've built a lot of networks, and that's why I'm talking about these tools: I've built networks and built network tools. That's what I've been doing for several years now, first for the Sovrin Foundation, helping to build their networks, and now for Indicio, where I've built all of our networks. So that's my background, and why, you know, I'm the one they have talk about this stuff. So let's start with the developer tools. This one I didn't actually help write; it was written by Patrik Stas, who did a great job, such that I just had to copy the code and change a few things to make it work for our Indicio networks, as you can see here. You can go to the link, like I just clicked on, and browse around in here if you want. I'll talk just a little bit about this and give you an intro to what it all means. Essentially, it displays three of the sub-ledgers that are on the network. The Indicio TestNet, like other Hyperledger Indy networks, has multiple sub-ledgers on it that together constitute what we call the ledger, so there's a little more terminology for you. The domain sub-ledger is the main part, where you add DIDs, and then schemas that help build your credential definitions, which is this one here, the credential definition. People are out here testing on this thing: 46,800 writes to our domain sub-ledger on the TestNet. It's getting a lot of use.
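To keep the sub-ledger layout straight, here is a small summary in code form. The names follow Lynn's description of the browser; this is documentation-as-data for reference, not a real Indy data structure.

```python
# Summary of the Indy sub-ledgers described above (names per the
# description in this session; not a real Indy structure).
SUB_LEDGERS = {
    "domain": "DIDs (nym transactions), schemas, and credential definitions",
    "pool":   "the nodes that make up the network",
    "config": "rules for running the ledger, per the network governance",
}

# An issuer's writes to the domain sub-ledger build on one another:
# a public DID first, then a schema, then a credential definition.
ISSUER_WRITE_ORDER = ["nym (public DID)", "schema", "credential definition"]

assert set(SUB_LEDGERS) == {"domain", "pool", "config"}
```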
So, writing things out here is what an issuer does. We talked about issuers; to be able to issue credentials, you have to have these things on the ledger. Then the pool sub-ledger has the nodes of the ledger in it, and the config sub-ledger has the rules about how to run the ledger, following the network governance. There are other things out here; I'll let you go and click on them and see, it shows more details about all this stuff, but this is just a high-level introduction. Later on, when you add your endorser to the network as part of our hands-on work, you can come look out here and see your endorser right here on the Indicio TestNet. That's part of why I show this one. Before I get too much further: I just mentioned the word "endorser," so I should talk a little more about roles. Scott talked about roles outside of the ledger, issuer, holder, verifier; for the network itself, the names of the roles are the following. The trustee is the one who writes administrative transactions to the ledger, so a lot of the administrative transactions for the pool sub-ledger and the config sub-ledger will go in there. (I just saw a note asking, "Can you please go full screen?" Apologies, I could make this a little bigger to make it easier to see, but we have some hands-on material behind there, so we need to have it look like this. Again, apologies.) So the trustee writes and governs the network through administrative transactions to help make things work. And then the endorser, like I said, that's actually the only role on our ledger that's able to write the credential transactions, the schemas and credential definitions and the other things that are needed to issue credentials; not even a trustee can write those. I'll get into more details about what all that means and why, but this role here is the one that can do that.
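The roles Lynn just described correspond to numeric role codes in Indy's NYM transactions. To the best of my knowledge these codes come from the indy-node transaction reference; the helper function below simply encodes the governance rule described in this session and is not a real Indy API.

```python
# Indy ledger role codes as they appear in NYM transactions (per the
# indy-node transaction reference). A DID with no role is an ordinary
# identity owner -- an "author" in this session's terminology.
ROLE_CODES = {
    "TRUSTEE": "0",            # writes administrative transactions, governs the network
    "STEWARD": "2",            # operates a validator node
    "ENDORSER": "101",         # may write schemas and credential definitions
    "NETWORK_MONITOR": "201",  # may read network health information
}

def can_write_credential_objects(role):
    """Under the governance described here, only an endorser writes
    schemas/cred defs; an author (no ledger role) needs an endorser."""
    return role == "ENDORSER"

assert can_write_credential_objects("ENDORSER")
assert not can_write_credential_objects(None)   # a plain author cannot
```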
An endorser either writes them himself, making him the issuer, or an author can build transactions and then get them endorsed by the endorser. So an author can also be an issuer, or eventually could be one, but they have to have their transactions endorsed by an endorser. The reason for that is that on a main network, most people don't run main networks for free; the way they charge people to use the network is through these endorsed transactions. So a company might be an endorser that has clients who are authors, and it endorses their transactions for them. Hopefully that gets you at least a start; there's a lot more information available online, and you can ask questions in the chat and we'll get you more information if you need it. The other high-level thing we need to talk about is the transaction author agreement. Whether you're an endorser or an author, you have to sign it if you are going to write transactions to the ledger. It means you agree that you're not going to write anything illegal, and you're not going to write any PII or anything else that's against the governance. It's kind of a long thing to read, but if you read through it, you can see that it makes sense. It's not super restrictive, but we don't want anybody's pictures out there, and we don't want any personally identifiable information on the ledger. And that gets us to the next tool I want to talk about, the self-serve tool. We're almost to the indy-cli, so if you haven't got your indy-cli ready to go yet, your screen where you can start typing things in, now's the time to do it, but first we'll tell you a little bit about self-serve. I'll pop that up here. This is the self-serve web page for Indicio; Sovrin has one as well that looks similar. Essentially, you type in the network that you want to use, and today we'll be using the TestNet.
Then you copy in the DID and the verkey that you get through a method we'll show you in a little bit, and click Submit. That's all there is to it. And this "Help me get a DID" section down here can help you get a DID: you can see three operating systems, Linux, Windows, and Mac, and you can follow the instructions to get an indy-cli set up for getting a DID that way. There are multiple ways to get DIDs; we'll show you this way today, and then we'll also show you how to do it through the Aries Toolbox. Okay, are we ready? You guys ready? Here we go: indy-cli. We're going to open up our handout, I think we called it, and our command line here, and we're going to have you follow along as we set up our command line to be able to do things in it. The indy-cli is kind of an administrative tool; it's very manual and hands-on, but it helps to show you the basics of how a network works, and working in it will give you some of the background on the underlying pieces, to help you know what's going on. What else was I going to say about this? Oh yeah, it's a place where you can have your own wallets and start managing your own DIDs and wallets and that kind of thing. So let's jump right in. Here's what I'm going to do: I'm going to copy and paste this command over here. Oh, it looks like I've got that one already set up. You'll notice I have an extra "g" on mine; you shouldn't put the extra "g" on. I'm running this from my own environment, so you can see what it will look like for somebody who's already done this before and has a whole bunch of things already in place. This session uses the transaction author agreement. Again, if you're not able to follow along, or if you have trouble here, you can ask in the chat on Aries.
Sorry, the Hyperledger chat, the Aries channel. You can also just follow along, watch what I'm doing, and try it again later; all this stuff is not restricted to just today's session. I already have a testnet pool, so I'm going to need to create one called testnett, with an extra T on it, but the rest of you shouldn't have any pools on your system. Actually, this command isn't creating a new network pool. "Pool" here kind of means a pool of nodes that make up a network, but what we're creating is a pool handle that points to an existing pool. So we're making a pool handle on our system, and now we have the testnet pool handle there. And we'll just go ahead and connect to the Indicio TestNet. One more quick comment. Oops, I forgot to add the extra T on the end of that one, so let me try that again and get to the right testnet. On the previous command, where it says gen_txn_file=pool_transactions_testnet_genesis: if that file doesn't exist in the directory where you ran indy-cli from, you'll need to put in a full path for it. Just a little hint for you. Okay, so once you say you're going to connect to the Indicio TestNet, which in my case is called testnett (you can call it whatever you want, of course), you have to read the transaction author agreement. We talked a little bit about that agreement. First you have to say you're going to read it, and then, as mentioned earlier, it just talks about what you can and can't write to the network, and we'll accept that agreement: I agree not to write anything illegal or immoral, or any PII, to the network. And now we're connected to a pool, and that pool is linking us to the Indicio TestNet. So we're going to keep going here. The next thing to do is create a wallet, and this wallet will hold our DIDs, or as the network calls them, nyms.
The first thing we'll do is type in this password. It doesn't have to be a really long one; someone has to break into your workstation, or your system, wherever this is, before they can break into your wallet. But it does need a password in case someone does break into your system. The next thing we're going to do is open that wallet, using pretty much the same command and putting in that same password we just used. If you mess up your password or can't get in, feel free to make another wallet with a different name to continue with our example today. A wallet contains DIDs and nyms, but it also contains the private parts of your keys, everything needed to prove you are who you say you are on the network. Only the public parts are stored on the network; all the private things are kept in this wallet. Okay, where did we get to? Then we're going to create a DID. Now, I did a little bit of a disservice here, and I'll explain what I did: I didn't use a seed. If you add the word "seed" to the end of this command, seed= and then a value, you provide a seed for the DID we're creating. What that does is make it so you can recreate this DID again at a later time or in a different wallet. You store that seed in a special place that only you have access to, and then if you lose your wallet, or if you need to put that same DID on a different machine in a different wallet, you can do that and it will have the same keys. There are a lot of good reasons for doing it that way, but for our example today we're just going to make one with a random seed; that's what this command does. When you do it for real, I'd encourage you to learn a little bit more about seeds and creating one.
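Putting the steps so far together, the session looks roughly like this. The pool name, wallet name, and key are just example values, and the genesis file name should match the one you downloaded:

```
indy> pool create testnet gen_txn_file=pool_transactions_testnet_genesis
indy> pool connect testnet        (you will be prompted to read and accept the TAA)
indy> wallet create demo key      (you will be prompted for the wallet key/password)
indy> wallet open demo key
indy> did new                     (add seed=<your seed> to make the DID recoverable)
```

Everything here is still local: the pool handle, wallet, and DID live on your machine until an endorser writes your nym to the ledger.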
And that's important for the real thing: when you really become an issuer, you want to keep track of that seed, and it's a super secret thing that you want to keep in a safe place. Now, to use your new DID, you can type the first letter of it and hit tab. And that's why you came to this training, so you'd know that you don't have to type that thing in by hand or cut and paste it; you can just hit tab-complete and it will work. So now you have an active DID and you have a wallet, and if you look at the command line now, you'll see that your DID is there, your wallet is there, and your pool has been added on as we went along. That sets us up to be ready to do things on the network. So far, everything we've done has been local, though. We set up a pool handle locally and connected to the network, so there's a connection there now, but the wallet and the DID, all that stuff, is just sitting here locally; nothing is on the network yet. So, just to keep you oriented, we're going to type "help" by itself now and show you what that does. The thing I want to point out here is that there are essentially five major command groups. We can ignore the payment-address one; that's a capability of the network that's not been put into place, not something we have set up or are using right now, essentially the ability to make on-ledger payments, so I won't go over that one. We've talked a little bit about wallet, pool, and did already through our examples. We haven't done much with ledger; ledger is the link to the network, and that comes in with this ledger command here. We probably won't go through it all, since we're running out of time for my portion, but I can show you that you get a list of all the pools when you do a "pool list". On yours, you'll just see the one pool that you've created.
So you see all of my wallets there, and I created a new wallet, so there should be only one DID in my wallet, just like there's only one wallet for you. When you're creating a DID and you want to keep track of a whole bunch of them, it's helpful to add metadata when you create it so you can keep track of which one is which; the metadata is just a little hint as to what each DID is for. Anyway, let's jump into the ledger command a little bit. If you type "ledger" and then hit the tab key twice (you don't type the word "tab", you press the key), it does a tab-complete and shows you all the different subcommands that come after ledger. And if you type "ledger help" it shows a whole bunch of stuff, kind of hard to read, maybe too much stuff. So let's just get help for a single command; pick the nym command like in the example there, "ledger nym help", and you can copy that command in now. It shows help for the ledger nym command and all the different things you can do to add a DID to the ledger or manage that DID on the ledger. And once you've added your nym to the ledger, you can do a "ledger get-nym" like the next one here: did= and then your DID. I don't think this one tab-completes, so I made a copy of mine earlier and pasted it in there. So, "ledger get-nym": what do you think this command will do? Show us our nym on the ledger? No, because we haven't added it yet. Since we haven't added our nym to the ledger, or haven't had an endorser add our nym to the ledger for us yet, our nym won't print. So, just a real quick example to show you what the output looks like when you have a DID that's actually on there. What this is is my DID. I'm a trustee on the TestNet, because I help to manage it, and that's my public DID as a trustee on the Indicio TestNet.
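For reference, the inspection commands used in this part of the walkthrough, collected in one place (substitute your own DID in the last line):

```
indy> pool list
indy> wallet list
indy> did list
indy> ledger nym help
indy> ledger get-nym did=<your DID>
```

The get-nym call only returns a result once your nym has actually been written to the ledger, either by you as an endorser or by an endorser acting on your behalf.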
So, I think we're ready for our break. During the break I'm happy to answer some questions; that's a really good question, by the way. We're going to do our five-minute break, and it looks like we can just come back at the top of the hour, which gives you a little extra time. Feel free to keep asking questions; we'll stay online during the break and see if we can help you out. Thanks, everybody. Okay, we're now at the top of the hour. Let's go ahead and kick into the next section here. I'll first start off by thanking everybody that's presented so far. The intro to decentralized identity, as well as the Indy network tools, is all building up to what we'll be doing in this section, which is our introduction to Hyperledger Aries and some of the tools and codebases that we tend to work with. My name is Daniel Bluhm; I guess I should go to my intro slide. I am a software engineer at Indicio. I like to think of myself, at least, as a veteran of the community. I've been here for a while; I started at the Sovrin Foundation and then later joined Indicio. I've been working in the space since before Hyperledger Aries was even around, back in the day when it was the Indy Agent working group underneath the Hyperledger Indy project. Today I am a committer on the Aries Cloud Agent Python codebase, so I get to put in my two cents on pull requests and features as they get added. I'm an active contributor there, as well as within the plugin ecosystem. Joining me today in presenting these slides is Char, so I'll turn it over to her to introduce herself as well. Thank you, Daniel. Hi, my name is Char Howland, and I'm a software engineer at Indicio Tech.
I am an active developer in the Aries Cloud Agent Python ecosystem, working on developing plugins and other purpose-built agents and protocols, and I'm very excited to be here today and talk about Hyperledger Aries. So I will hand it back to you, Daniel. So today, as we go through this introduction to Hyperledger Aries, we will be focusing on these areas. We'll start off with a general "what are agents," then describe in a little more detail some of the tools we'll be using, specifically Aries Cloud Agent Python and the Aries Toolbox, and then give you some conceptual background while discussing protocols and messaging. That will lay the foundation for understanding what's going on in our hands-on portion, during which we'll start up some agents, form connections, issue credentials, and verify those credentials. We have a handout accompanying these slides. This is maybe the fifth time we've brought it up, so hopefully you've got it up by now, but you can scan that QR code or enter that address. I'll also put it into the Zoom chat here, just to be certain everybody gets access to it. On that handout you will find helpful resources down at the bottom, but also the commands that we'll be running once we get to the point of starting up those agents. So first, the conceptual stuff: what is an identity agent? To give my informal definition, an identity agent is the piece of software that we as humans use to interact in the self-sovereign identity, or decentralized identity, world. The native language of decentralized identity is cryptography, more or less, and it's really difficult for us as humans to do all the math involved, so we rely on software, on these agents, to help us do that. To give a more formal definition: identity agents act on behalf of a single identity owner.
As discussed previously, this could be an individual, but it could also be a business, a university, a government, or an Internet of Things device; anything that it makes sense to attribute an identity to can be represented through an identity agent. These agents also hold and use cryptographic keys to securely perform their responsibilities, as we just discussed. And these agents interact with other agents through DIDComm protocols, which we will talk about at length in just a moment. Agents are often split up into a few different categories, or at least these categories are used to help us talk about different types of agents. A common one that you'll hear thrown around is "cloud agent." As opposed to having any strong implications about the trust of that agent, whether it's using a particular transport, or the ownership of that agent, "cloud agent" is really a description of the location of the agent more than anything else: it is an agent in the cloud. On the flip side of that we have edge agents, which are agents located at the edge, with the same stipulation that this doesn't necessarily mean we should or should not trust it, or anything of that nature. There are, of course, other types of agents: mobile, workstation, server, embedded, browser, blockchain; anywhere it makes sense for an agent and an identity to be represented, there's likely a specific enough category that we can use to describe it. In addition to these descriptions of location, we also have varying degrees of complexity within the agent ecosystem. Starting from the most basic, a static agent is an agent that has only a single preconfigured endpoint and set of keys for a single connection, and it does not have the ability to update that connection.
This would be useful, in my opinion at least, in bridging between legacy systems, where REST is the predominant force, and a DIDComm-speaking system, using the static agent as the thing that gets triggered by a webhook, a cron job, or a RESTful API call. Layering complexity from there, a thin agent might be described as an agent that is able to have a single updateable connection. This might be formed on demand in the cloud; you might think of Amazon Lambdas, for instance. However, similar to a static agent, it is typically very purpose-driven, very use-case-driven: the agent retains a narrow focus on its job; it has its one job and it does it well. Going further from there we have thick agents. This is an agent that might have multiple updateable connections while still having a very specific purpose. The one that comes to mind most frequently for me when I talk about thick agents is a mediator agent. For those who are unfamiliar, a mediator agent is an agent designed to hold on to messages for agents that are currently unavailable to receive them in real time; you might think of it almost as an email inbox. A mediator agent can service many agents, but its one purpose is to mediate those messages, as opposed to going on to do further interaction beyond mediation, so it would be described as a thick agent. And then finally, the most complex you might term a rich agent, or a feature-rich agent. These are usually the agents that we as individuals will be using to represent ourselves in our day-to-day interactions, where we're exchanging credentials, forming new connections with people we meet at conferences, whatever use case we have for agents.
Even within the rich agent category, I would say there is still a spectrum of functionality, where some agents might support one credentialing system, or some agents might not support credentialing systems at all and are purpose-built for connections and interaction over messages, as opposed to credentials. Ultimately, these categories and this gradient of complexity that I've just discussed are loosely defined. There's no hard and fast rule for exactly what an agent is, and that's because these categories are just here to help us succinctly describe our interoperability goals. A rich agent might be capable of more than what we typically think of a cloud agent as being, without being disqualified from being a cloud agent; the category just helps us understand generally what it's intended for. Which brings us to the agents that we'll be working with today. We've got the Aries Cloud Agent Python, or ACA-Py for short, as you'll probably hear me say for the rest of this presentation, since "Aries Cloud Agent Python" is a bit of a mouthful. ACA-Py is a foundation for building decentralized identity applications and services running in non-mobile environments. The Aries Toolbox, on the other hand, is a tool for interacting with Aries agents, and we'll explore a little more of its capabilities in just a moment. To summarize, these are both open-source building blocks that you can use for your own custom software solution to issue, hold, and verify credentials, as well as anything else you might dream up to use within a DIDComm space. So ACA-Py is server-oriented. I saw a question in the chat about whether ACA-Py could be used in mobile environments.
You might be able to kludge it and make it work somehow, but it's not really intended for that, so you'll experience some difficulties there, I imagine, especially when it comes to receiving messages: in a mobile environment you need to actively pull messages from outside of the system due to the intermittent connectivity of mobile devices. Same story for browsers: you might be able to push that square peg into the round hole if you push hard enough, but it's just not really designed for that. The benefit, though, is that ACA-Py is well suited to horizontally scalable environments such as Kubernetes or OpenShift. ACA-Py can act as an issuer, verifier, and holder of AnonCreds credentials, which is what we'll be demonstrating today, but it also supports W3C credentials in JSON-LD format using Ed25519 or BBS+ signatures. We will not be exploring that in detail today, but I have linked in the handout more reading if you'd like to go into more depth about where support stands and what is possible with W3C credentials versus AnonCreds credentials. I will call out that support for W3C credentials is recent. I won't say it's still in development, because it is fully functional and available today, but work is ongoing, and as time goes on I think we'll see better integration of those systems. ACA-Py is also capable of revoking AnonCreds credentials. This is a whole other can of worms; it's quite an in-depth topic that you can explore, and I'll have some links in the handout if you'd like to dig in a little more. We will not be covering revocation today. And then finally, I wanted to call out that ACA-Py supports something we've come to describe as multi-tenancy, or the ability for a single software deployment to host multiple agents at once, in a way that's more efficient than spinning up many pods in a Kubernetes cluster or anything like that.
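To give a feel for the W3C credential shape mentioned above, here is a minimal JSON-LD verifiable credential skeleton following the W3C VC Data Model. All identifiers and attribute values are placeholders, and the proof block is elided; a real agent such as ACA-Py fills that in when signing with Ed25519 or BBS+.

```python
# Skeleton of a W3C verifiable credential (placeholder values only).
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",        # placeholder issuer DID
    "issuanceDate": "2021-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",        # placeholder subject DID
        "degree": "Bachelor of Science",
    },
    # "proof": {...}  # the signature suite (e.g. Ed25519 or BBS+) goes here
}

print(json.dumps(credential, indent=2))
```

The AnonCreds format used in today's demo looks quite different on the wire, which is part of why issuer/verifier support for the two systems has evolved separately.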
So ACA-Py is a pretty good candidate for agent-as-a-service type systems, as well as for the horizontal scalability that we previously discussed. Again using the categories described earlier, ACA-Py is a cloud agent and a rich agent: it is feature-rich and capable of many things. The Toolbox, on the other hand, is what we might describe as being a little closer to the other end of the spectrum. It is itself an agent, but it's a little special, and I'll describe how in just a moment. The Toolbox is a development and testing tool first and foremost; it's not really intended as an end-user product. The reason for that is not that we don't intend to put as much polish on the Toolbox as possible, but rather that we intentionally expose some of the aspects of the interactions taking place as a way of exploring those interactions, which makes it a really good tool for things like trainings, debugging, and testing your agents; all-around useful, hence the name Toolbox. It's built as a JavaScript application that runs in an Electron window, with Vue.js and Element UI as the components in use. The Aries Toolbox is, again using that gradient-of-complexity terminology, a thick agent. It has protocol implementations that are encapsulated into their own components, meaning it's pretty easily extensible if you ever wanted to add your own features to the Toolbox. It also makes use of a feature-detection protocol to determine what the connected agent is able to do, and then adjusts the UI to match those detected features. The Toolbox as an agent speaks DIDComm admin protocols: special protocols used for debugging, testing, and puppet-stringing the connected agent. Using the Toolbox today, it will essentially act as a front-end UI for the ACA-Py back end. Next I wanted to briefly describe, at a high level, a way of thinking about agent architecture.
This isn't necessarily the way that ACA-Py or the Toolbox handles things if you were to dig in and look at the actual software components; rather, at a conceptual level, this is a way to break up the different responsibilities involved in processing a given message. All agents will have some form of transport. This is what we receive the raw bytes on. The most popular transports tend to be HTTP, and that's not to the exclusion of HTTPS; in fact, I think usually we're using HTTPS. In the agent architecture, the transport receives the raw bytes and then passes them on to the next components. Here I've used the term "middleware" to describe this section of the stack, though I'm not sure I would really endorse continued usage of that phrase; we use it to draw some analogies between agent architecture and the architecture of an average web server. We receive a request, we get the bytes, we have middleware that adds data to the request, and then it gets passed along to the handler for processing. Agents work in a very similar fashion: we receive a message on the transport, and it goes to the security middleware, where it is decrypted and the security context is annotated onto the message, so that once it arrives at the handler, we know which cryptographic keys originated the message. It's also passed through a state-management middleware, where the current state of the interaction is added to the request context, and then again passed on to the handler, so that the handler can take the next logical step in the interaction according to that state. To answer a question I see in chat here real fast: the question is whether the Toolbox is a controller for ACA-Py. That actually segues pretty well into this portion of my slides. ACA-Py has all the pieces from our high-level agent architecture discussion contained within it, and it also exposes something that is generally referred to as the admin API.
A controller is connected through that admin API to the agent, and it is the controller's responsibility to handle the business logic: deciding what needs to occur next, based on what I as an individual, or as an enterprise, would like to do next. So forming new connections would be an action triggered by the controller; issuing a credential to one of your connections is likewise an action that would be triggered by the controller. The admin API is an HTTP API, so if you absolutely hate Python and want nothing to do with it, that's fine: you can write your controller in whatever language you like, as long as it supports HTTP requests. Data is also sent asynchronously from the agent to the controller via webhooks. So when we asynchronously receive a new credential, or receive a new connection request, that data is bubbled up to the controller, and the controller decides whether to proceed with the interaction or not. To get back to the question that was asked, whether the Toolbox is a controller for ACA-Py: the answer is a nuanced yes. It is fulfilling the role of a controller in the sense that it is instructing the agent to perform certain actions. The nuance is that the Toolbox is actually using a plugin, which I'll discuss in a moment, to do that interaction, as opposed to the built-in HTTP API. So in addition to the controller, which has its own set of responsibilities as discussed, the agent itself is responsible for secret storage and management, making sure our private keys are kept private and that the private portions of our credentials are likewise kept private. It is also responsible for stepping through the protocols defined by DIDComm, such as connecting to other agents and issuing credentials, and it is also tasked with interacting with the distributed ledgers. We'll explore that in a little more detail as we go along. So, plugins. I saw a question come up in the chat about this one, so hopefully this answers it.
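The controller pattern just described can be sketched in a few lines. This is a hedged, minimal example, assuming an agent whose admin API is exposed at http://localhost:8031 and whose webhooks are pointed back at our controller; the endpoint path and webhook topic names below match common ACA-Py conventions, but check your agent's Swagger UI for your version, and treat the port and connection handling as illustrative assumptions.

```python
# Minimal controller sketch for an ACA-Py-style agent (illustrative only).
import json
import urllib.request

ADMIN_URL = "http://localhost:8031"  # assumed admin API location

def admin_post(path, body=None):
    """POST to the agent's admin API and return the decoded JSON reply."""
    req = urllib.request.Request(
        ADMIN_URL + path,
        data=json.dumps(body or {}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def create_invitation():
    # Ask the agent to create a connection invitation we can hand to a peer.
    return admin_post("/connections/create-invitation")

def handle_webhook(topic, payload):
    """Business logic: the agent POSTs events to /topic/{topic} on our server."""
    if topic == "connections" and payload.get("state") == "request":
        # A peer accepted our invitation; the controller decides to proceed.
        return ("accept", payload["connection_id"])
    return ("ignore", None)
```

The point is the division of labor: the agent does the cryptography and protocol state machines, while this thin layer, in any HTTP-capable language, decides what should happen next.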
ACA-Py has an increasingly mature plugin system; it's one of the aspects of my contributions to ACA-Py that I'm most proud of, and I've been really pushing for it. It allows us to do things like add additional protocol implementations, so that in order to add such-and-such protocol we don't necessarily have to add it upstream to the main ACA-Py repo. You can also add new admin API endpoints: if you'd like to automate a workflow that you do frequently, you can use a plugin to add a new endpoint that performs it for you. You can also add custom event handlers, so instead of making it the controller's responsibility to handle these asynchronous events, you can put that into a plugin and have it occur within the agent itself, performing whatever action is desired as a result of the event being fired. There's also the ability to add DID resolvers, where a DID resolver is the piece of software that pulls the DID document from a DID, based on the algorithm defined by a particular DID method. There are a number of resolver plugins in the wild today, for interacting with a Universal Resolver instance, for example. There's one built in for did:indy, in addition to the native support that ACA-Py has for interacting with Indy DIDs. There's also an example GitHub DID resolver out there, and there are links to those in the handout if you'd like to explore them a little more. You can also plug in message-queuing systems, taking advantage of the events within ACA-Py; for example, if you want to notify your agent-as-a-service client about a new connection that was formed on their agent, a plugin can help you do that. The plugin that we're using today, and this is actually the repo that you've cloned in preparation for this workshop, is the Aries Toolbox plugin. This is an admin protocol implementation, a DIDComm admin API for instructing ACA-Py to take certain actions, and this is what we'll be using, ultimately, to drive ACA-Py from the Toolbox.
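To make the protocol-plugin idea concrete, here is a toy sketch of the underlying pattern: handlers get registered against DIDComm message types, and a dispatcher routes each incoming message to the matching handler. This is not ACA-Py's actual plugin API; all names here are invented for illustration.

```python
# Toy model of protocol-handler registration (NOT ACA-Py's real API).
from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[dict], dict]] = {}

def register(msg_type: str):
    """Decorator a 'plugin' uses to claim a DIDComm message type."""
    def wrap(fn):
        HANDLERS[msg_type] = fn
        return fn
    return wrap

@register("https://didcomm.org/basicmessage/1.0/message")
def handle_basicmessage(msg: dict) -> dict:
    # Echo the content back, as a trivial stand-in for real protocol logic.
    return {"received": msg.get("content")}

def dispatch(msg: dict) -> dict:
    """Route a decrypted message to the handler registered for its @type."""
    handler = HANDLERS.get(msg.get("@type"))
    if handler is None:
        raise KeyError(f"no handler for {msg.get('@type')}")
    return handler(msg)
```

A real plugin does more (registering routes, events, and resolvers with the agent context), but the core idea is the same: new message types can be claimed without touching the core codebase.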
So if I bring up a diagram of what's going on here: we have the Toolbox, which is what is actually displayed to us, and then we have ACA-Py running with the Toolbox plugin alongside it. The Toolbox is communicating with ACA-Py using the protocols defined by the Toolbox plugin, and then we're instructing the main agent to interact with another agent, Bob over here, who likewise is communicating back and forth with the Aries Toolbox. Just a brief call-out here: we're pleased with where the Aries Toolbox stands today, but we're already looking forward to a 2.0 version. That will help us better take advantage of new libraries and frameworks that are available today that were not when we originally started the Toolbox. Specifically, we'd like to build on top of an actively maintained agent framework, as opposed to the kind of one-off solution we have now just for the Toolbox. The prime candidate for that is Aries Framework JavaScript. We also intend to use React, which has a somewhat more vibrant community in general, as well as within the Hyperledger Aries ecosystem. Our goal is ultimately to have the Toolbox resemble the other agent frameworks underneath the Hyperledger Aries umbrella a little more closely, and to give us more connectivity in the future and a path forward to DIDComm v2. There are a number of other codebases I wanted to call out here as well. Again, if you absolutely hate Python, you're in luck: there are other language frameworks for Aries available. Aries Framework JavaScript is one that we work with a lot at Indicio in addition to ACA-Py. It's a TypeScript-based framework that is pretty close to feature parity with ACA-Py; there might be some differences in both directions. So it's a really good candidate if you're looking for an alternative to ACA-Py that better matches your development workflow. There's also an Aries Framework Go.
It has some differences in feature parity, but it's quite mature as well. And then there's Aries Framework .NET, which is built in C# with, again, a similar set of functionality and some differences in feature parity. The next codebase we've got here is the Aries Agent Test Harness. This is, what do I call it, the first thing that comes to mind is an integration-test-style test harness for agents, but it operates a little higher than the integration-test level. Using the Aries Agent Test Harness, we can take ACA-Py and another instance of ACA-Py, and the test harness will trigger interactions between the two agents, observe the results, and then produce a compatibility matrix for the pair of agents tested. Testing ACA-Py against ACA-Py isn't very interesting, but when we do combinations of codebases, such as ACA-Py against AFJ, or ACA-Py against Aries Framework Go, and go through the whole cross product of that matrix of possibilities, we get a compatibility matrix for the entire Hyperledger Aries ecosystem. There is a website that produces reports of that compatibility matrix on a regular interval; I'll have to add that to the handout, as I believe it's not currently on there. Among the other Aries agents and services, we have the Static Agent Python library. I am particularly fond of this one; I'm the original author and maintainer of it. It holds a bit of a unique place within the Hyperledger Aries ecosystem in that it is, I think, the only library purpose-built for static agents. It's especially lightweight but quite useful; we use it for testing other agents all the time, but we've also used it for a proxy mediator service and in other contexts as well. There's also the Aries Mediator Service, which is a dedicated repo for deployments of a mediator. This is built using ACA-Py, and it provides the third party that enables mobile agents to receive messages sent while they're otherwise offline.
And then the last one is a bit of an oddball: the Aries Askar codebase. Askar is a secure storage library, heavily inspired by the Indy SDK wallet. It is different from the Indy SDK wallet: it has seen a little more optimization, I believe, in terms of performance, but it's also a little more flexible, and we anticipate that we'll see more and more adoption of this codebase. It is currently pretty well supported within ACA-Py, where you have the choice of using the Indy SDK wallet or Aries Askar, and I think we'll start seeing adoption of it in other frameworks such as AFJ. Finally, I wanted to call your attention to the Aries RFCs repository. The majority of the content discussed today is directly available within the Aries RFCs repo; it might take you a while to read through all of it, but it's all there. The Aries RFCs are a great place to go to participate in the development of these protocols, if you'd like, or just to see the specification if you're implementing one of these protocols within your own agent. There's a good question in the chat that I'll call out here. Matthias is wondering if there is any lightweight DIDComm for IoT environments. IoT is a really interesting use case that we have a lot of interest in. The DIDComm v1 standard, which we'll be using and discussing today, has some crypto requirements that make it difficult to implement on IoT devices as of right now. However, the DIDComm v2 effort, which I think I'll let Sam allude to a little more thoroughly as he gets into his portion of the slides, is seeking to address that. And even before we get to DIDComm v2 support, IoT devices can still interact in a DIDComm space by delegating to a hub that is DIDComm-capable, for instance, if you've got a really lightweight device. There are some IoT devices out there, I'm sure, that are crypto-capable, but that is not universally the case, of course.
Okay, so protocols and messaging. This is a description of what's actually going to be taking place between the agents as we step through our hands-on portion. We'll start off with: what is DIDComm? I've used that term a few times without defining it. A very simple definition would be: where verifiable credentials are about a subject, the owner of a DID, DIDComm is communication with the subject of that DID. DIDComm is ultimately the way for DID- and VC-capable entities to communicate securely and privately, which by extension means that it is the way for credentials and presentations to be sent and received between parties. But it doesn't stop there; DIDComm is also just generally useful as a mutually authenticated, secure communication channel between two peers, or two or more peers, I should say. It can be used for things like instant messaging; for relationship-based user authentication, or in other words user authentication that isn't strictly reliant on verifiable credentials, to make sure the person I'm talking to is the one who should be allowed to access a particular account on a site. It can also be used for buying and selling, negotiating, whatever. Lots of potential use cases there. I should call that out: VC is verifiable credentials. I should probably have called that acronym out before using it. So DIDComm has five stated goals which have a pretty significant impact on the shape and implementation of DIDComm today. For the first two, together we have secure and private. Secure means that we are confident that the person we think we're talking to is actually the person we're talking to, or in other words, we're confident our communications are authentic. When our communications are private, we are confident that the information passed from us to the other party is completely opaque to a man in the middle who might intercept that message.
There are of course a plethora of options today for secure communication. But where DIDComm differs from these other solutions is that those other secure communication platforms rely on key registries, identity providers, certificate authorities, browser or app vendors, or other centralized services to secure the communication. So DIDComm differs first because it's decentralized: it is not reliant on a federated service to bootstrap the trust in an interaction. But it also has the remaining three goals of being interoperable, transport agnostic, and extensible. Interoperability is one of the big pieces that ties into decentralization a lot, in my opinion. It is a first-class goal for DIDComm to be interoperable. That means it doesn't matter whether you're using an app from such-and-such vendor with another app from another vendor. It doesn't matter what operating system you're on, it doesn't matter what centralized root of trust or blockchain you're using, it doesn't matter what programming language the agent was written in. The interoperability of DIDComm means that there is no vendor lock-in, no lock-in based on any of these constraints that we often have to live with today. Related to that, DIDComm is transport agnostic, meaning that it shouldn't matter whether you're using HTTP, WebSockets, Bluetooth, or email (the ongoing joke has always been carrier pigeon) for transmitting your DIDComm messages from one party to the other. DIDComm is not dependent on those transports to make sure the messages are secured. It's also very extensible. This one's a little more loosely defined, in my head at least, but DIDComm is an open standard that you can participate in the evolution of, whether that be adding messages to the standard set of protocols or adding a new protocol altogether, and it is designed to accommodate that goal. Yeah, I'll leave it at that.
As a result of those goals, we also have a few other attributes or properties of DIDComm that are worth calling out. DIDComm is message oriented, with one sender and one or more receivers. The recipient of a message can either take action or not, depending on the state of the relationship with the sender and the message, and can choose to respond with another message back to that sender. DIDComm is also asynchronous, meaning that the participants in an exchange of messages are not required to respond in real time. In fact, they cannot be required to respond in real time; that is a stipulation of the protocol. You might compare this to email, where you might receive a message and there is no protocol-level requirement that the person receiving the message immediately responds to it. There may be temporal implications as a result of the contents of a message, but that is delegated to the user in the case of email. In the case of DIDComm, it is delegated both to DIDComm protocols, to determine whether there should be a temporal aspect associated with the messages being exchanged, and to the user, to decide when they want to respond. DIDComm is also routable, meaning that in order to get from A to B, the message can be routed through C. This means that DIDComm natively supports mix-network-like behavior. If you're familiar with the onion routing network, or Tor, it bounces messages around between a bunch of different participants in the network to frustrate traceability between the original sender and the final recipient of data sent over the network. DIDComm supports similar behavior by default with this routing mechanism. It also means that DIDComm can transfer between different transport types at different hops in the route. So if I have an agent running in my closet, for instance, and it receives a message over HTTP.
And it is instructed to relay that message on to my mobile phone, it can do that over Bluetooth, even though the message was originally received over HTTP, without any breakdown in the security or authentication associated with that message. So there are some interactions between these different goals and properties that are worth calling out. First, the security of DIDComm is not dependent on a given transport. I think that's a logical follow-on to what I just discussed, where we can transfer between transport protocols; the security is actually wholly encapsulated within a DIDComm message. That being said, we can use transports and routing to augment the security and usability of our agent systems, for example by using HTTPS as a transport. Many of our current agent implementations benefit from perfect forward secrecy, which is a desirable cryptographic property. It's something I'm not really well versed in the details of, but you can look up perfect forward secrecy for more information. So they benefit from perfect forward secrecy today, even though that is still a long-term goal to implement natively within DIDComm. On the other end of that spectrum, though, requiring specific transports can actually damage interoperability. If you're such-and-such vendor and you have this really cool transport mechanism that you just devised, and you require only that transport, you're out of luck when interacting with the rest of the ecosystem, which might be more reliant on, or more frequently using, other transports such as HTTP. So we encourage people to take advantage of these technologies, and their special ability to make certain interactions more efficient, through the use of protocols or internally within your agent infrastructure, as opposed to using them as the main point of entry into your agent, so that interoperability is not impacted. We've discussed transports at length here.
We've alluded a little to DIDs and their role in this interaction. I won't go into excruciating detail here. However, I will say that DIDs are instrumental in securing our communications, especially peer DIDs when it comes to DIDComm, and these DIDs have associated with them the keys that we use to encrypt the messages. So beyond DIDs, we have DIDComm protocols, which again I've alluded to a number of times. Moving on to a more formal exploration of these protocols: to use a phrase coined by Daniel Hardman, a DIDComm protocol is a recipe for a stateful interaction. There are layers underpinning that communication that have their own protocols associated with them, such as TCP and HTTP. Managing DIDs involves its own protocols too: a user of the Indy network uses one set of protocols for the communication channel and a different set for the actual update process, like the algorithm used. And cryptography also has its own protocols, in the form of Diffie-Hellman key exchanges and others. DIDComm protocols should not be confused with any of these lower-layer protocols; instead, DIDComm protocols build on top of these lower layers to define protocols with real-world social value, such as forming a connection, requesting and issuing credentials, proving things with presentations, all that fun stuff. Now, DIDComm shares some thought space with the world of REST today, which is our primary interface for interacting on the internet, so it warrants a slightly deeper comparison. REST stands for representational state transfer; DIDComm is message-based, stateful interaction. REST depends on a client-server model of thinking, where the server is generally considered to be the source of truth and the client is dependent on the server. In DIDComm, the participants in an exchange are by default peers, with roles that are defined by the protocol on a protocol-by-protocol basis.
The role of one particular participant might mean that they are to be trusted as the source of truth for a particular aspect, but at the most basic level DIDComm does not inherently have that client-server model built into it. REST is a one-request, one-response style system, whereas DIDComm can exchange many messages back and forth between the parties in order to complete an interaction, not just one there and one back. REST uses HTTP methods to indicate which action should be taken; DIDComm is of course not dependent on HTTP, as it is transport agnostic, and the action is determined by the message type included in the DIDComm message. REST uses paths to, again, indicate action, whereas DIDComm uses these message types to indicate what action should be taken. And REST uses paths routed to handlers; in a similar fashion, DIDComm uses message types to route to a specific handler for a message. So, to have a well-defined stateful interaction, a protocol needs the following ingredients: the name and version of the protocol; a URI that identifies it; its messages; the roles that the sender and receiver can take in the protocol; and then essentially a state machine of how those messages and roles are expected to proceed. Finally, a protocol can also define a set of constraints that it imposes on the interaction, such as requiring that messages received in this protocol must be encrypted and come from an already-known sender over an already-established connection. So let's take a look at an example message from a specific protocol. This is the basic message protocol, and its basic message message, and this is the general shape of it. Analyzing this message type, we can pick out the different pieces of the protocol described: basic message is the name of the protocol, the version is 1.0, and the unique URI is https://didcomm.org/basicmessage/1.0/message.
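The shape being described can be sketched as a small Python dict. Only the @type URI and attribute names come from Aries RFC 0095; the id, timestamp, and content below are invented placeholders:

```python
import json

# A basic message per Aries RFC 0095. The @id, sent_time, and content
# values are invented placeholders; the @type URI and attribute names
# are the standardized parts.
basic_message = {
    "@id": "123456789-abcdef",  # hypothetical message id
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "~l10n": {"locale": "en"},  # localization decorator
    "sent_time": "2021-12-09T17:00:00Z",
    "content": "Your hovercraft is full of eels.",
}
print(json.dumps(basic_message, indent=2))
```

This is the JSON an agent would wrap in an encryption envelope before sending, as discussed later in this section.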
There is one message in this protocol, which is just the message message, which is perhaps a little confusing. There is a sender role and a receiver role, there are no real state changes, and the events are just send and receive, with both of those resulting in a start-to-finish, very simple interaction. So this basic message protocol is a very simple, or basic if you will, instant messaging protocol for agents. It's more of a toy protocol, as opposed to really intending to be the ultimate in instant messaging between DIDComm-connected peers, but it is a basic protocol that we can look at, explore, and implement early in our agent's development life cycle to ensure communication is flowing. In this message that we just looked at, we have at the top a type attribute. The type of this DIDComm message is a URI that resolves to documentation for the message. But beyond that, it also helps the receiver of the message identify what to do with it: that routing of the message type to a handler that we discussed previously. In this message type, we've got the https://didcomm.org part; this portion of the type is referred to as the document URI, and it also serves as an identifier of the body that created or standardized the protocol. In this case it is an official didcomm.org protocol. The first portion of the path following the document URI is the protocol name, which in this case is basicmessage. The next portion is the protocol version. This version loosely follows the semantic versioning specification, if you're familiar with that. I say loosely because we can see here that it omits the patch number, which is generally required in semantic versioning, though a patch number would be accepted as valid. The patch part is omitted in this case; patch refers to a fix to a major and minor version that does not impact the external API.
Since DIDComm itself is more or less an external API, the patch number can be omitted without loss of meaning. Finally, the last part of the path is the message type or name. In this case, again perhaps slightly confusingly, the name is message; it will differ for other protocols and other messages. Everything that precedes the message name is referred to as the protocol identifier URI, and taken as a whole this string is called the message type URI. The message type dictates what attributes are expected to be present in the message. As dictated by the basic message protocol, we expect to see a ~l10n localization decorator, a sent_time attribute, and a content attribute in the message message. I won't go through all the attributes here, but just as a simple example of another type of message that is sent, this is the offer credential message from the issue credential protocol, version 1. This of course includes our type string that helps us identify what it is and what attributes to expect, and then we expect to see a comment, a credential preview, and an offers~attach object, as defined by the issue credential protocol. Excuse me. So there are two layers that ultimately combine to form interoperable DIDComm messaging: the agent message, which is what we've been discussing so far and what is typically meant when we say message, and the encryption envelope, or packed message. An agent message is a message sent between identities to accomplish some shared goal: those basic messages, those offer credential messages, all those things. An encryption envelope is a wrapper around an agent message that enables the message to be securely delivered, sealed while in transport from one agent to another. These encryption envelopes have a couple of different flavors to them: anonymous or authenticated encryption.
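Before moving into envelopes, the message type URI decomposition just described can be sketched in a few lines of Python. The split logic here assumes a didcomm.org-style message type URI of the form document URI, protocol name, version, and message name:

```python
def parse_mturi(mturi: str) -> dict:
    """Split a message type URI into its named parts.

    For "https://didcomm.org/basicmessage/1.0/message" this yields the
    document URI, protocol name, version, and message name, plus the
    protocol identifier URI (everything before the message name).
    """
    # Split at the last three slashes, leaving the document URI intact.
    doc_uri, protocol, version, name = mturi.rsplit("/", 3)
    return {
        "doc_uri": doc_uri,
        "protocol": protocol,
        "version": version,
        "name": name,
        "piuri": f"{doc_uri}/{protocol}/{version}",
    }

parts = parse_mturi("https://didcomm.org/basicmessage/1.0/message")
print(parts["protocol"], parts["version"], parts["name"])
# basicmessage 1.0 message
```

A real agent does essentially this to route an incoming message to the handler registered for its type.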
We typically only really use the authenticated encryption, so I won't spend a ton of time talking about the anonymous flavor. These envelopes are end-to-end encrypted. So even if a message is sent from Alice to Bob through Carol, Carol is unable to access the contents of the message; she can only see the external wrapper. For those who are aware of the JSON Web Encryption (JWE) specification, this is a derivative of that specification, and there are ongoing efforts to further standardize this encryption envelope to better align with already-existing standards. If we take that example message again and transform it into a packed message, the pseudocode operation might look something like this, where we take the message as our message parameter and pass it to this pack function along with the recipient's verification key and my signature key. Verification key in this context refers to the public key in an Ed25519 public/private key pair, and the signing key is the private portion. There's a bit more going on in the background than simply this, of course; this doesn't get into the details of how the key is transformed into an encryption key from what is typically just a signing key. The details of that can be explored in a bit more depth in the Aries RFCs repository. The result of this pack operation looks somewhat like this; I've truncated a few values for brevity and clarity. Each of these values is worth at least knowing generally what it is, I think. The protected attribute here is a base64-encoded JSON object containing headers for this packed message. Inside the headers you'll see things like the algorithm; that's the portion where you'll see a lot of the standard JSON Web Encryption pieces. It also includes the recipients and the encrypted content encryption key.
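As a rough sketch, the top-level shape of a packed message is a JWE-like JSON object along these lines. The header values are modeled on the Aries RFC 0019 envelope; every other value is an invented, truncated placeholder rather than real ciphertext:

```python
import base64
import json

# Skeleton of a DIDComm v1 encryption envelope. All values here are
# placeholders; in a real envelope they are produced by the pack
# operation described above.
protected_headers = {
    "enc": "xchacha20poly1305_ietf",
    "typ": "JWM/1.0",
    "alg": "Authcrypt",  # authenticated encryption flavor
    "recipients": [
        {"encrypted_key": "<truncated>", "header": {"kid": "<recipient verkey>"}}
    ],
}
envelope = {
    # base64url-encoded JSON headers, including the recipients and the
    # encrypted content-encryption key
    "protected": base64.urlsafe_b64encode(
        json.dumps(protected_headers).encode()
    ).decode(),
    "iv": "<initialization vector>",
    "ciphertext": "<encrypted agent message>",
    "tag": "<integrity check over the protected headers>",
}
# A recipient first decodes the protected headers to find its entry:
decoded = json.loads(base64.urlsafe_b64decode(envelope["protected"]))
print(decoded["alg"])
# Authcrypt
```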
The iv is an initialization vector that is used in the encryption and decryption of the ciphertext, and the ciphertext is the encrypted basic message that we just used as our example. And then the tag is an HMAC over those protected headers to ensure that the headers have not been tampered with in transport. Okay, so with that we have reached our second intermission. When we return we'll begin our hands-on portion with Char leading, and we'll step through forming connections and issuing credentials and all that interesting stuff. We are again pretty cleanly set for starting at the top of the hour, so I think that makes sense. I'll hang around and observe questions in chat, so if you have any other questions that you'd like to follow up on, feel free to hit me up. Otherwise I'm going to go on mute; enjoy your brief break. All right, welcome back everyone. I think we're ready to kick it off again and start moving into the hands-on portion of this workshop. Now that we've gotten the background information on the agents, ACA-Py (Aries Cloud Agent Python), the Aries Toolbox, and DIDComm, we're now ready to start up agents, create a connection, and issue and verify a credential. In each of these next sections we're going to walk through a demo with screenshots that indicate each step, where to click, which buttons, and what information to enter, and then we'll transition to a live demo from Daniel walking through the same steps. My demo with the screenshots is more of a quick primer for the live demo; you can follow along with Daniel to explore more deeply and demo additional interactions. The first thing we'll need to do is start up the toolbox and the agents. You may have tried this out in office hours prior to this workshop, but we'll walk through it briefly here. And again, the prerequisites document outlines the steps for cloning both the aries-toolbox and the aries-acapy-plugin-toolbox, so if you haven't done that yet.
You can follow those steps in the prerequisites document. We are going to run through this series of commands to start up the Aries Toolbox, and you can find these on the handout that has been shared so that you only have to copy and paste. In one terminal, navigate to the aries-toolbox directory within your training directory; run a git pull to make sure that you've gotten all recent changes into your local repository; then run an npm install to install all necessary dependencies, and then an npm run dev, which should start up the toolbox so that it shows up in its own window, shown down here on the right. Now that we've started the toolbox, in a second terminal we'll need to start up the agents. So we'll navigate to the demo folder within the aries-acapy-plugin-toolbox directory, then run a git pull to again make sure that you've gotten any recent changes, a git checkout to make sure that you're on the correct HL workshop branch, and then this docker-compose command to build the agents. If you get a permissions error, it'd be a good idea to run as administrator with sudo to solve that. This should take 10 to 20 seconds, and at the bottom it will print a large QR code and an invitation URL that should look something like this window on the left. Now we'll walk through some quick screenshots to show what this should look like. On your toolbox plugin terminal, you'll want to locate the invitation URL for Alice, as highlighted here. You can copy that and paste it into the new agent connection bar on the toolbox, and after clicking connect, you will see Alice show up as one of the toolbox connections. Now we'll scroll to find Bob's invitation URL. We'll again copy that and paste it into the new agent connection bar, and after clicking connect, Bob is now also connected to the toolbox.
And again, just a reminder of what Scott mentioned earlier about the agents: Alice and Bob are commonly used names in cryptography discussions, and they're placeholders for different entities. It could be a university and a student, a business and a customer, or two people named Alice and Bob who've met at a conference and would like to connect. And so with that, I will hand it off to Daniel for a live demo of these steps. Okay, thank you, Char. So we will first start off with, and hopefully this will help everybody step along and have a little clearer picture of what's going on; I know there's a little bit of an issue with slightly blurred images, but we'll go through all those exact steps here. Yeah. So I am currently within, I'll start off with the toolbox actually, just to follow the same order. I'm in the folder where I've cloned the aries-toolbox; this might look a little different for you. We'll start off by proceeding into that directory. Then we'll do a quick git pull to make sure we've got the most recent changes, and then an npm install to install dependencies. I've already installed dependencies, but I'll go ahead and run it to see if this is realistic timing for any of the rest of you. While it's running, I'm going to pull up chat and make sure I'm watching it. I'll try to be attentive to chat as we go through these steps; if there's anything that really just isn't making sense, feel free to send it there, and anybody who's watching chat and would like to holler about anything unclear, let me know. And now that the install is done in the aries-toolbox repo, we'll do an npm run dev, and we should see a window pop up here in just a moment. For more technical questions, if you run into difficulties, I highly recommend directing those to the Aries channel on the Hyperledger Rocket.Chat.
One of our moderators there can help you out in a bit more detail than I can here. So we've got the Aries Toolbox running, ready to connect to our agents, which we will now start up. I am in the aries-acapy-plugin-toolbox, and I will now cd into the demo directory. The command that we want to run here is docker-compose -f, specifying the docker-compose configuration that runs both the Alice and Bob agents, and we'll do an up --build for good measure. And to answer the question that just came up: the toolbox that I have open here is indeed a tool that we can use for testing and development of agents, and you'll see why it's useful in just a moment. I've got to type in my password here. Okay, so this should bring up a number of services. There are two services for agents and two services for tunnels that enable external connectivity, so if you want to send an invitation to somebody over Rocket.Chat, for instance, and have them connect to your agent, that is possible. So we have our two invitations and QR codes spat out on the terminal here. Rather than the QR code, we're going to copy the text just above the QR code, for Bob first, since he happened to be the last one printed. I'll paste that into the new agent connection box here and hit connect. We see some stuff happening in the background; this is actually the connection protocol taking place, which Char will be stepping through in just a moment. Now I need to scroll back up and grab the Alice agent invitation as well, copy that, paste it in here, and connect. Once we're at this point, we can open up each of these agent windows. I'm going to say goodbye to my agent list for now. We have an Alice and a Bob window here; I'm going to increase the sizing a little and put these side by side. So this is the Aries Toolbox; this is the main view that you'll be interacting with when you're connected to a particular agent.
There are a number of different features involved here. I'll start off by clicking into the discover features tab or section; I never know what to call these, so I'll go with section. This enumerates all the protocols that are supported by the connected agent. You'll notice in here a number of protocols that are prefixed with admin in the protocol name: admin-issuer, let's see, where's another one, admin-schemas, admin-dids; there are more in here somewhere. These are the protocols that the toolbox is detecting, and based on that support it populates the agent portion of the interface. So regardless of whether your agent supports these admin protocols or not, these few basic toolbox-to-agent tools should be accessible and useful. We can send a basic message directly to Alice. This won't really result in much other than a notification from the Alice agent that a new message was received; I can observe that in a different category, and I'll explore why that is in a moment. We have a compose tab where you can manually construct an agent message. Say you're developing a new protocol, or you'd just like to trigger one very specific message in an interaction and send it to your agent; you can do that here. It also shows the five most recent sent and received messages on the toolbox side, so what's going on between the toolbox and the connected agent. And then we have a full message history, a full accounting of what has occurred between the toolbox and the agent. To briefly explore what's going on in here: we see the discover features exchange; this is what is triggered by the discover features section. We sent a query and we received a disclose, which listed off all those protocols for us. What's another good one? We had an admin connections list. That's the received side; I need to find the send. Here we go: the admin connections get-list queries for the list of active connections on the agent, and then the response.
It's a little interlaced here, just based on the timing of the messages being received, but we see the list of connections, which is interesting to explore if nothing else. So if your connected agent does support these different admin protocols, you can step into the DIDs section, which allows you to manage your agent's DIDs to some extent. A question just came up: are these messages fake? They are not fake; these are real messages being exchanged between the toolbox and the connected agent. There's also an invitations section that allows us to create and manage invitations. We have active connections; we can see Alice's view of her connections, which includes the toolbox, which is what we're actually using to access this view of the connections, so we don't want to get too recursive there, I guess. But we can see that the active connection to the toolbox is present here. There's also a form here to create a static connection with a static agent; we won't be using that today. We can use this to send a basic message; I can send one back directly to the toolbox. So what I'm doing now is, from the toolbox, telling the Alice agent to send me, the toolbox, a message, which is fun. I think, yep, there it is; it shows up in a slightly different location. This makes a little more sense when we're remote controlling and communicating with other connections. Then we have credential issuance, which we'll be getting into a little more; I'll talk about this transaction author agreement a little more then. Mediation and routing will go unused for today. My credentials lists off the credentials received by this agent, along with a set of presentations that have been made. Verification, on the other hand, is about requesting credentials from a connected agent; so my credentials holds my attempts to prove something, and presentations are the list of presentation requests I've received. There's another question in the chat here: what is the difference between messages and DIDs? Good question.
A DID is just an identifier, a decentralized identifier, which when resolved contains the set of keys necessary to securely send a message between two different parties. A message, a DIDComm message sent between two agents, is dependent on there being knowledge of a DID in the wallet, with keys and endpoints associated with it, so that we can send that message. So with that, we'll now step into the next portion of the walkthrough, which will be a description of connecting the two agents, and we can take it from there. Thanks, Daniel. Are you able to see my screen? Yep. So the next thing we're going to do is create a connection between Alice and Bob. They each have their edge agents, like on their mobile devices, and cloud agents that act on their behalf. In this situation we're going to have Alice initiate the interaction with an invitation. This invitation is out of band, which means it's not communicated through DIDComm protocols, because the communication channel has not yet been established; with mobile devices this is often a QR code, but in this demo we'll be copying and pasting the invitation URL between agents. When Bob receives this invitation from Alice, he's receiving an initial connection key, endpoint info, routing keys, and a label. The initial connection key is used exactly once, just for sending the connection request message that is triggered by this invitation. The endpoint info and routing keys are sent to make sure that a message Bob sends back to Alice can reach her. And the label is who Alice says she is; this is not a verifiable attribute but more a suggestion, and a convenient way to identify the invitation. It cannot be absolutely trusted, and identity verification should happen after the connection is formed. So here's what that invitation message looks like.
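An invitation along the lines being described, per Aries RFC 0160, might look roughly like this. The key, endpoint, and id here are invented placeholders, and the final encoding step mirrors how these invitations travel as a c_i query parameter in an invitation URL:

```python
import base64
import json

# A connections/1.0 invitation per Aries RFC 0160. The recipient key,
# service endpoint, and @id below are invented placeholders.
invitation = {
    "@type": "https://didcomm.org/connections/1.0/invitation",
    "@id": "12345678-1234-1234-1234-123456789abc",
    "label": "Alice",
    "recipientKeys": ["8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K"],
    "serviceEndpoint": "https://alice.example.com/endpoint",
    "routingKeys": [],
}

# Invitations are typically shared as a URL (or QR code of that URL)
# with the JSON base64url-encoded into a c_i query parameter.
encoded = base64.urlsafe_b64encode(json.dumps(invitation).encode()).decode()
invitation_url = f"https://alice.example.com/invite?c_i={encoded}"
print(invitation_url[:60], "...")
```

This is the same kind of URL we copy and paste into the toolbox during the demo.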
Here we see that the message type is a connections invitation, and then it includes the label, recipient keys, service endpoint, and routing keys that I mentioned. After Bob receives the invitation, if he does want to connect with Alice, he will send a connection request back to Alice's agent. This will contain a label; similarly, it's a suggestion of his identity. Here's Bob's DID, which he will use in identifying his communications with Alice in the future. It also includes Bob's DID Doc, which is the document containing relationship keys, routing keys, and service endpoints: all the cryptographic information required to secure future communications and allow Bob to receive messages from Alice. Again, here's what that message looks like. Now it's a connections request, and it includes the label, connection DID, and DID Doc, which is omitted here because it's quite long. A quick note on the language here: Alice initiated the connection, but Bob is sending a connection request. Alice can send an invitation to connect with Bob and give information on how to reach her, but Bob need not send a request to actually connect with Alice after receiving the invitation; since they are peers in this interaction, there are no imposed requirements. Alice then sends the same information back from her perspective, her own DID and her own DID Doc, in a connection response message. Again, Alice's DID is what she will use in identifying her communications with Bob in the future, and her DID Doc is what will allow them to communicate securely and allow Alice to receive messages from Bob. So here's again what that message looks like. Now the message type is connection response, and there are also additional identifiers to make sure that Bob can correlate the request he sent with this response; the connection data that Alice sent to Bob is signed using the key that Alice used in the original invitation.
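For reference, here are hedged sketches of the request and response shapes. All DIDs and ids are invented, the DID Doc is elided, and the exact signature decorator fields are written from memory of RFC 0160, so treat them as illustrative rather than normative:

```python
# Sketches of the connections/1.0 request and response messages from
# Aries RFC 0160. All DIDs and ids below are invented placeholders.
connection_request = {
    "@type": "https://didcomm.org/connections/1.0/request",
    "@id": "a1b2c3d4",
    "label": "Bob",
    "connection": {
        "DID": "did:example:bob123",  # placeholder DID
        "DIDDoc": {},  # keys, routing keys, and service endpoints go here
    },
}
connection_response = {
    "@type": "https://didcomm.org/connections/1.0/response",
    "@id": "e5f6a7b8",
    # correlates this response with Bob's request
    "~thread": {"thid": "a1b2c3d4"},
    # Alice's connection data, signed with the key from her invitation
    "connection~sig": {
        "@type": "https://didcomm.org/signature/1.0/ed25519Sha512_single",
        "signature": "<signature over the connection block>",
        "sig_data": "<base64url-encoded timestamp plus connection JSON>",
        "signer": "<Alice's invitation key>",
    },
}
```

The ~thread decorator carries the request's @id back, which is how Bob correlates the response, and the connection~sig wrapper is what lets him verify it came from the invitation's key.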
So Bob knows that this response came from Alice, the same person who originally created the invitation. Now that the DIDs and DID Docs have been exchanged, the connection has been established. A few quick notes on forming connections using DIDComm. These are peer-to-peer connections that we're forming, but again, they are not necessarily direct. They could use a third party — a mediator is probably a good example — and it is actually built into the capabilities of DIDComm to have a message routed through a third party. But whatever third party is arranged to hold these messages cannot see the contents of the messages, since they are end-to-end encrypted between Alice and Bob. As Daniel mentioned earlier, the connection is not a transport-layer connection, like a socket or an open HTTP request; it is a DIDComm connection. The agents are connected, and the interaction is considered complete, when all parties — here just Alice and Bob — have securely exchanged their DIDs, keys, and service endpoints for future communication. So now we will walk through the steps in the toolbox to connect Alice and Bob. Again, apologies for the blurry images — Daniel will walk through the same flow, so hopefully that should help you follow along. Here we have the Alice agent and the Bob agent, and we're going to have Alice initiate this invitation again. We'll navigate to the connections tab on her interface, and this will bring up a form to create a new invitation. We will put in the alias of Bob — that just helps us organize; this is the invitation that we sent to Bob. And then again, the label is the suggestion to Bob of Alice's identity. Then we can scroll down — we'll just want to make sure that auto accept is on — and then we can create a new invite. When we scroll up again, this invitation to Bob should show up here under invitations. Then we can click into it, and there is a copy URL button that will allow us to grab that invitation URL.
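The invitation URL being copied here is, in Aries agents, typically just the invitation JSON base64url-encoded into a `c_i` query parameter. A sketch of the round trip (the base URL is a placeholder):

```python
import base64
import json

def invitation_to_url(invitation: dict, base_url: str) -> str:
    """Encode an invitation into a shareable URL via the `c_i` parameter."""
    raw = json.dumps(invitation).encode()
    return f"{base_url}?c_i={base64.urlsafe_b64encode(raw).decode()}"

def invitation_from_url(url: str) -> dict:
    """Recover the invitation dict from a pasted URL."""
    b64 = url.split("c_i=", 1)[1]
    b64 += "=" * (-len(b64) % 4)  # re-pad; agents often strip '=' padding
    return json.loads(base64.urlsafe_b64decode(b64))
```

So when Bob pastes the URL into his agent, the agent simply decodes `c_i` and processes the invitation message inside.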
And then on Bob's interface we will navigate to connections and paste in this invitation URL. Once we click add, we should see Alice show up under Bob's active connections. And then we can also verify on Alice's side that we see Bob as one of her connections. So now Alice and Bob are connected, and I will hand it over to Daniel for a live demo of this protocol. Cool, thank you. Let me pull my chat window back up to make sure I can address any questions that arise. Okay. As Char noted, we'll first start off on the Alice agent here — and I'm going to return to a couple of questions that were seen in the chat in just a moment. So, on the Alice agent: optionally you can put in an alias if you'd like, to help you identify a particular invitation — the alias is a name you can associate with the invitation itself. The label, on the other hand, is the value that is presented to the recipient of the invitation; if we leave it blank, it'll default to Alice. So I'm going to go ahead and leave it blank for now. We just want to make sure that this auto accept flag is on for now. So we'll create this new invitation. Alice now has it; we can copy the URL. We'll go to Bob's window here, navigate to connections just below invitations, paste into the invitation URL box, and then click the add button. This happens pretty fast — we already see Alice as a connection on Bob's side. If I go back to Alice and look at connections instead of invitations, Bob shows up for Alice. So we now have Alice and Bob connected. Now, there was a little bit of confusion about who's connected to whom, why the toolbox is showing up in this active connections list, and why I'm able to message directly with the toolbox. To hopefully bring a little more clarity there, I'm going to bring back in the diagram that I shared earlier, to describe where our different components are. So, Alice — let me try to get my wording precise here.
This window for Alice is a window into the state of this ACA-Py instance here, under Alice. When we interact with the agent features in the toolbox, the interactions are taking place between the Aries Toolbox and ACA-Py plus the toolbox plugin here. So that's why we can send a message directly to the toolbox — the toolbox itself is actually a connection of Alice's. And so when we look into Alice and see her view of the world, we have a toolbox present there. We just have a limited set of functionality that we support with the toolbox, and so we don't do verifiable credential exchange here — that's why we've got Bob to interact with. So if I now go to messages, and I select Bob, and I send a message — I guess I should do something a little more interesting than just jargon; let's say "Hi, Bob" — and send that. What's taken place is that a command from the toolbox has been sent to the toolbox plugin, which triggered ACA-Py to send a basic message to the connection specified in the command, which in this case was Bob. And so by sending this command along this transport mechanism from the toolbox to ACA-Py, we're telling it to do something between these two parties. If I view messages on Bob, and then select the Alice conversation, I see that the message has arrived. What's occurring on the other side, then, is that on Bob's receipt of a new message, the toolbox plugin captures that event and sends a new message to the toolbox, which then updates our window here. So that's answered the question of where the interactions are taking place. I see another question here: in a production scenario, would the toolbox be replaced with a mobile app interface to trigger commands in ACA-Py? I think this is more reflective of a controller scenario. So think back to the controller, which is interacting with ACA-Py over the admin API and HTTP — typically you'd use that from the enterprise perspective; that is where ACA-Py is well suited, more so than for mobile agents.
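To make the controller idea concrete: the basic message just sent via the toolbox could equally be triggered by a controller calling ACA-Py's admin API (`POST /connections/{conn_id}/send-message` in ACA-Py 0.x). The admin URL and connection id below are placeholders; this sketch only builds the request, and the commented lines show how it would be sent.

```python
ADMIN_URL = "http://localhost:8031"  # placeholder ACA-Py admin API address

def basic_message_request(conn_id: str, content: str) -> dict:
    """Build the admin-API call that asks ACA-Py to send a basic message."""
    return {
        "method": "POST",
        "url": f"{ADMIN_URL}/connections/{conn_id}/send-message",
        "json": {"content": content},
    }

req = basic_message_request("placeholder-connection-id", "Hi, Bob")
# A controller would then execute it, e.g. with the requests library:
#   requests.request(req["method"], req["url"], json=req["json"])
```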
For mobile agents, usually we would like to have all the data as close to the user him- or herself as possible. So the wallet is stored on the actual mobile device in the base case, and that's where all the secrets and private values are stored. That mobile agent is then capable of contacting an Aries Cloud Agent Python directly, and on the other end of that exchange, the enterprise user can interact with Aries Cloud Agent Python via their controller. The toolbox is analogous to a controller, as opposed to a mobile app interface — though a mobile app interface is technically possible as well; it's certainly not out of the question by any means. Okay, so I'll do a little bit more discussion here in the connections tab. If we expand the Bob list item here, we see a little bit of interesting information, which I know has been partially addressed by Sam already. This is the DID that Alice uses with Bob for that relationship, if that makes sense. It is unique from any other DID that Alice has in her wallet; she only uses it with Bob — this is a peer DID. This DID is never written to a ledger, because we like to keep these DIDs private, so the context of that DID remains something that is only present within our wallets. This removes points of correlation that could be used to fingerprint and track people and observe traffic as it flows between parties. So by keeping it pairwise, peer, off-ledger, we have better privacy guarantees there. It's also notable that this does not include the `did:method:` prefix. This is a topic of ongoing work within the Aries community. ACA-Py, as a very early entrant into the identity agent game and one of the first Hyperledger Aries agents, has a little bit of history behind it that I won't get into in too much detail, but this is an Indy DID — an Indy nym, if you will. In the very near future these will be properly prefixed in all views of this DID.
In this particular case, since it is pairwise, I should clarify this would actually be `did:peer:` and then the DID value. Okay, with that I'm going to pass it back to Char for our next steps. Thank you Daniel. Great. So, now that we have connected the agents, we can practice issuing and verifying credentials between them. The issuer is the entity who decides which credentials to issue and to whom. You might think of how a university will issue a diploma to a graduating student, or how a business issues a credential to an employee, perhaps to give access to facilities, for example. In order to issue credentials, the issuer interacts with the holder — the person or entity receiving the credential — and they write to the ledger: they put certain public information related to the credential on the ledger so that when the holder wants to prove to someone (a verifier) that they have this credential, the verifier can read this information from the ledger to make sure that this credential was in fact issued by this specific issuer. I'll briefly call out that we are using Indy and AnonCreds, as Daniel mentioned earlier, for this workshop. So there are schema and credential definition requirements specific to AnonCreds. ACA-Py is also capable of W3C credentials, but that's not something we're covering in this workshop. Yeah, just to mention that the Indy AnonCreds specification enables zero-knowledge proofs and selective disclosure of credential attributes, so it has better privacy guarantees for verifiable credentials, and it also allows for revocation. And so now we will talk about the issuer in context. This diagram should look a little familiar from what Scott talked about earlier — we have the issuer, the holder, and the verifier again. We will focus now on the steps that the issuer takes to issue a credential. The issuer needs to write a DID, schema, and credential definition to the ledger.
After that, they can sign and issue the verifiable credential, and then the holder has the credential. We'll break down each of these steps and work through them in a demo. So, again, this first step. The pre-issuance steps are to anchor the public DID to the ledger, write a new schema to the ledger (or alternatively, retrieve a pre-existing schema from the ledger), and then use that schema to write a credential definition to the ledger. Then, finally, issue the verifiable credential. We will start with this first step of anchoring a DID. This is just briefly reminding us of what Lynn talked about earlier with the endorser and author roles: the endorser has privileges to write to the ledger; the author authors the transactions. In this demo, our issuer will have both the endorser and author roles, so our issuer will create the transaction and then endorse it and write it to the ledger as well. We also need to sign the transaction author agreement, as Lynn mentioned earlier. So now we will navigate back to the toolbox. In this interaction, Alice is acting as our issuer, so we'll navigate to the DIDs page on Alice, and then we will also pull up the Indicio self-serve website that Lynn showed us how to use earlier, where we can get the endorser role. We want to use the TestNet because this is a demo. Then we will copy and paste the DID and the verification key into the self-serve website, like so. After clicking submit, we will just make sure that we've gotten a 200 OK response. Then, back on Alice the issuer, we can activate the DID. We can either use this activate DID button, or we can select the DID that we want from the drop-down menu at the top. Once we do that, this DID will show up — this is our registered DID. And so now I will turn it over to Daniel for a live demonstration of these steps. Okay, thank you. So to start us off here.
I'd like to again call your attention back to the handout slide. This has a couple of links that we're going to be using in this interaction that you'll need to pull up. Again, I'll drop this into the chat to make sure everybody's got a handle on it. So we'll start off on the Alice side, which is going to play the role of issuer in the exchange that we're about to trigger. We'll first go to the DIDs page so we can create a new public DID for Alice. This will trigger the transaction author agreement page coming up. As this says, this module requires signing of the transaction author agreement, as we will be writing transactions to the ledger with this module. This will bring up the text of the transaction author agreement — you can read it if you like really engaging legal reading. I'm going to go ahead and hit accept, having previously reviewed this document, and hit submit. Now we are able to interact with this DIDs section here. So I'm going to create a new DID. I'm going to give it the label "public", just to help me easily identify that this is the DID that I've written to the ledger. To clarify, though, this only creates the DID — in order to actually get it on the ledger, we'll be taking the steps described: going to the self-serve application. So, back to our handout page here: I'm going to click on the Indicio self-serve link in the other resources section, which will bring up the self-serve app. Then I'll jump back to Alice, expand the public DID I just created so I can see the attributes of the DID, copy the DID value out, and paste it into this field here; then copy the verkey, or verification key, value and paste it here. I'll hit submit. This gives a status of 200 OK, and a message saying that we've successfully written our nym, identified by this DID, to the ledger with the role endorser.
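For a controller doing this anchoring without the toolbox, the flow just demonstrated looks roughly like the following. The endpoint paths are from ACA-Py 0.x and may differ in other versions, the admin URL is a placeholder, and the self-serve step remains a manual browser step.

```python
ADMIN_URL = "http://localhost:8031"  # placeholder ACA-Py admin API address

def did_anchoring_steps(did: str) -> list:
    """The three-step anchoring flow sketched as (method, target) pairs:
    create a local DID + verkey, have the nym written via the Indicio
    TestNet self-serve page, then mark the DID as public in the wallet."""
    return [
        ("POST", f"{ADMIN_URL}/wallet/did/create"),
        ("manual", "paste the returned DID and verkey into the self-serve page"),
        ("POST", f"{ADMIN_URL}/wallet/did/public?did={did}"),
    ]
```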
It's also really important that you use the TestNet, as opposed to the DemoNet, because that's what our agents are connected to. So if you've accidentally selected the DemoNet and hit submit already, you can do the same process again with the TestNet and it should be fine. Returning to Alice: we now need to make sure that this DID will be used for future ledger transactions, so we click the activate button. The DID in question then also appears at the top here. This is a quick way to switch between DIDs if you'd like to, but the DID shown corresponds to the one you activated. We are now prepared to write transactions to the ledger as we continue with our credential issuance process. And with that I'll turn it back to Char for our next steps. Great. Thanks Daniel. Okay, so the next issuer step is to write or retrieve a schema. A schema is the data organization structure that defines the attributes in a credential. For example, in an employee credential, the schema might have first name, last name, address, date of hire, security clearance — any of the information that the employer wants to verify about the employee. This is what the JSON of the schema looks like: it has a schema name, version, and ID, and shows the attributes — here that's just first name, last name, and age. After we have written the schema to the ledger, we can also see it on the ledger using the IndyScan tool that Lynn demonstrated earlier. And if you are retrieving a pre-existing schema, this is where you would retrieve that schema ID. So, for a quick demo of how to create a schema in the toolbox, we will navigate back to Alice, the issuer. Now we are going to click on this credential issuance section, and then we will click create under schemas. This will bring up a window to create a new schema.
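Behind that form, the schema write boils down to a small payload. For a controller, ACA-Py 0.x exposes this as `POST /schemas` with a body roughly like the following; the schema name and attributes are examples.

```python
def make_schema_payload(name: str, version: str, attributes: list) -> dict:
    """Body for ACA-Py's `POST /schemas` admin endpoint (field names per 0.x)."""
    return {
        "schema_name": name,
        "schema_version": version,  # e.g. "1.0" or "1.0.0"
        "attributes": list(attributes),
    }

payload = make_schema_payload("employee_credential", "1.0", ["name", "age"])
```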
You can enter in a name, and then a version number — this has to have either one or two decimal points, so it could be 1.0 or 1.0.0. Then you'll want to add in attributes. You can keep it really simple with name and age; these must be all lowercase, and if they include multiple words, they must be connected by an underscore. Once you click create or confirm, you should be able to see the schema listed here under your schemas. And so now that we have created a schema, we can use it to write a new credential definition to the ledger. The credential definition creates the issuer-specific cryptographic keys necessary for issuing a credential. While a schema is the collection of attributes in a credential, it is not associated with any entity. The credential definition is what binds a schema to a specific issuer, so that when the holder is presenting their credential, the verifier can see that it was issued by that specific issuer. Creating the credential definition generates cryptographic keys for each attribute, with both a public and a private portion. The public portion is stored on the ledger with the schema — this is what allows the verifier to verify that the issuer issued the credential. The private portion is maintained by the issuer, and the issuer uses these keys to sign each attribute. There needs to be a key pair for each attribute because, as we'll see later, a holder might choose not to present all of the attributes in one credential — they can choose to reveal only a subset of the attributes in one or more credentials. In the JSON of this credential definition, we see that it has the credential definition ID, and then the schema and attributes it is associated with. So now we will do a quick toolbox walkthrough of creating a credential definition from the same schema that we just created.
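The corresponding credential definition write is even smaller. In ACA-Py 0.x it's `POST /credential-definitions`, with a body along these lines — the schema id is a placeholder, and revocation support is left off as in this demo.

```python
def make_cred_def_payload(schema_id: str, tag: str = "default",
                          support_revocation: bool = False) -> dict:
    """Body for ACA-Py's `POST /credential-definitions` admin endpoint.
    The private keys generated from this never leave the issuer's wallet;
    only the public portions go to the ledger."""
    return {
        "schema_id": schema_id,
        "tag": tag,
        "support_revocation": support_revocation,
    }

payload = make_cred_def_payload("PLACEHOLDER_DID:2:employee_credential:1.0")
```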
So, under credential definitions we will click create. This brings up a window that allows us to choose from a drop-down menu of our available schemas. Here we just have the one, so we will confirm that. This should then show up under our credential definitions — this calculation can take a few moments, so it might not show up immediately. So with that, I will hand it over to Daniel for a live walkthrough of these steps before the final credential issuance step. Okay, thank you, Char. I'll start again with a brief callout to some more resources within the handout. So, we've discussed AnonCreds and revocation — well, we've alluded to revocation, rather, and there have been a few relevant questions in the chat regarding revocation. You can find more details on each of those topics within the additional reading in the handout. While we're here, I'll also really quickly bring up the IndyScan page, where we can actually view the transactions on the ledger. If I go to domain, I can do a really quick scan for the DID that I just anchored to the TestNet over here. Let's see if I can spot it. There — written to the ledger in a transaction six minutes ago. So this is an interesting way to explore the transactions as they arrive on the ledger, as Lynn very aptly pointed out earlier. Now, as we're creating DIDs here with the toolbox and anchoring them with the self-serve app, we should see them showing up there — same thing with the schema that we'll create in just a second. So I'm going to do that now. I'm going to take everything over to Alice, going to credential issuance, which is about in the middle of the list here. So I will create a schema. What am I going to call it? I'll start off with something really simple, just to get something quickly down, and then we can come back for some more realistic credentials in a moment. We'll say name and age, I guess — basic attributes in this credential.
In reality, age would probably be date of birth, as opposed to the number age, because the number would not stay accurate forever. So I'll go ahead and create that schema. And then, just for the fun of it, I'll try filtering by schema on IndyScan — I can hopefully spot my transaction. I've gone too far back. I don't see my transaction. Maybe I need to refresh — no, it doesn't refresh automatically. Well, I'm not sure why I'm not seeing my schema; maybe I need to come back to that. The refresh on IndyScan takes a little bit. Okay, gotcha. It just does take a minute sometimes to do the refresh, and the top of the list is where it should pop up right after you've made it. Cool, good to know. Thank you, Lynn — I'm just being too impatient, I guess. So we'll go on to our next step here, then, which is creating a credential definition from our schema. If you want to inspect the schema, you can see that it's got our name and age attributes, and you can see the schema ID. This is the ID value used to identify the schema, both on the ledger and within ACA-Py. And there's a little more in-depth detail that you can expand in that JSON body as well. Creating a credential definition is very simple: we just select the schema and say confirm. As Char pointed out, this does take just a moment — the reason being that it's generating RSA keys in the background for those attributes, so it can later use them in the credential issuance process. For those who are unfamiliar with RSA keys, they're relatively computationally intensive keys to generate, so it takes just a moment. But now we have our credential definition. Sometimes, if this is being a little slower than usual, you might want to hit that refresh button, in case the page loaded before we actually received the data back. So refresh would help with that. We now see our credential definition.
The cred def ID here looks very similar to the schema ID because we are the — issuer is the wrong word — author of both the schema and the cred def, so it is prefixed with the same value: our DID, I should call that out a little more explicitly. Then there's some version information, an indication that it is using CL signatures, what I believe is the transaction number, and then the name of the schema — a name joined together from the schema name and version. Let's see. So with that, we are now prepared to issue a credential to Bob. But before we get into that process, I'll hand it back to Char for a discussion of the next steps. Thanks, Daniel. Great. So, back to the same diagram, just to make sure we're oriented in the steps: we have written the DID, schema, and credential definition to the ledger, and now we're ready to sign and issue a verifiable credential. This returns to those issuer responsibilities that we talked about a bit earlier, of deciding who is issued which credentials. We'll need to select the connection — the holder who will receive this credential — select the credential definition to use to create the credential, and then enter in the values for the attributes of the credential. The interaction can be initiated either by the issuer or the holder, as we'll show in the message flow that allows the credential to be issued. Like I said, the holder could start this interaction by proposing a credential; they would send that to the issuer, which would trigger an offer-credential message sent to the holder. Or, if the issuer is starting the interaction, they would just offer the credential — and then the flow is the same no matter which party initiates. Once the holder receives that offer, they can request the credential.
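This message flow, with the message-type names from the issue-credential 1.0 protocol (Aries RFC 0036), can be sketched as an ordered list:

```python
# Issuer-initiated issue-credential 1.0 flow (RFC 0036 message types):
ISSUER_INITIATED = [
    ("issuer -> holder", "offer-credential"),
    ("holder -> issuer", "request-credential"),
    ("issuer -> holder", "issue-credential"),
    ("holder -> issuer", "ack"),
]

# A holder-initiated flow simply prepends a proposal; from the offer
# onward the two flows are identical.
HOLDER_INITIATED = [("holder -> issuer", "propose-credential")] + ISSUER_INITIATED
```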
Then, once the issuer receives that request, they will issue the credential to the holder, and once the holder receives the credential, they will send an acknowledgement back to the issuer. Their side of the interaction is then complete, and when the issuer receives that acknowledgement, their side of the interaction is complete too. A quick note that this language is a bit awkward and not how people would naturally talk about this interaction, but it is the language used in the messages that are sent back and forth to enable credential issuance, so we're using it to show the developer language around this message flow. And so back in the toolbox, we'll quickly look at how to actually issue a credential. We are still on this credential issuance tab on Alice, the issuer. Now we can click create under issued credentials. This brings up a window that allows us to issue a credential. We want to choose the connection — we want to choose Bob — and the credential definition on which to base this credential, and then we can optionally add a comment. After selecting the credential definition, this brings up the attributes associated with that credential. So maybe Bob's full name is Robert and that's what should be on the credential; we can just fill in those values and confirm. Now this issued credential should show up here for Alice. On Bob, the holder, we will look for this credential under my credentials, where the holder holds their credentials, and we see that this credential has appeared. So, to reiterate: a verifiable credential is a signed statement with attributes about a subject, and it contains metadata about the credential, attributes about the subject, and the proof — the signature from the issuer. And so with that, I will hand it over to Daniel for a live credential issuance. Thank you. Okay, let's see. On Alice, our issuer again, we will remain on the credential issuance tab.
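As an aside for controller builders: the same issuance can be triggered with a single admin-API call, `POST /issue-credential/send` in ACA-Py 0.x (v1 protocol). A sketch of the body — the connection id, cred def id, and attribute values are placeholders:

```python
def make_issue_credential_payload(conn_id: str, cred_def_id: str,
                                  values: dict, comment: str = "") -> dict:
    """Body for ACA-Py 0.x `POST /issue-credential/send` (v1 protocol)."""
    return {
        "connection_id": conn_id,
        "cred_def_id": cred_def_id,
        "comment": comment,
        "credential_proposal": {
            "@type": "https://didcomm.org/issue-credential/1.0/credential-preview",
            "attributes": [{"name": k, "value": v} for k, v in values.items()],
        },
    }

payload = make_issue_credential_payload(
    "placeholder-connection-id",
    "PLACEHOLDER_DID:3:CL:12345:default",
    {"name": "Robert", "age": "54"},
)
```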
We will click the issue button. This prompts us to select a connection — we've only got two right now, the toolbox or Bob. We want to send it to Bob; the toolbox won't know what to do with these messages. Then we'll select a credential definition — and this is our credential definition for the test credential that we just used... I said that a little backwards: that is the credential definition we just created a moment ago. We can optionally add a comment. And then we're going to add some values for the attributes that we're issuing in this credential. In the real world, this would obviously look a little different: if we actually wanted to issue credentials that mean something, we'd go through some process to actually verify the identity of the person on the other end of the connection — whether we use analog credentials for bootstrapping into that system, or some other mechanism. Usually we won't just say, "yeah, here are some attributes that I came up with" in this process. We hit confirm, and this triggers the credential to be sent from Alice to Bob. We now see that it has shown up on the issued credentials side on Alice, and if we go to my credentials on Bob, which is just a little bit up from the bottom here, we see the credential issued here as well. It's in credential acked state, meaning that the credential exchange has completed. Oh, my screen just said participants can now see my screen, so hopefully you've been seeing all of what I just did — shout out in the chat if you have not, and I can repeat a few steps. So, now that the credential is issued, what actually took place here? First — I should bring up my diagram again for just a moment — the toolbox sent the command to issue the credential, which was received by the Alice ACA-Py agent, who then issued the credential to Bob, whose agent then notified the toolbox that a new credential offer had been received.
The ACA-Py instances we're using have been configured to automatically accept those credential offers; in normal circumstances, that would be something we'd allow the user to decide — whether they're going to accept the credential or not. Then a credential exchange took place between the two parties, and this was the exchange that Char described: the credential offer was sent, a credential request was sent back from Bob, then an issue-credential message was sent from Alice to Bob, and then a credential acknowledgement was sent back to Alice. The issue-credential protocol is pretty well defined — you can find it in the Aries RFCs repository. But I think it's worth briefly calling out that in the Indy AnonCreds workflows, we start from either propose-credential or offer-credential, and that's because there needs to be some exchange of blinded secrets between Alice and Bob in that process. Other credentialing systems might have fewer requirements on the number of messages exchanged back and forth — JSON-LD based credentials, for example. However, that does come with a downside: with JSON-LD credentials you lack private holder binding, which is the ability to say that this credential and that credential — all distinct objects — can be strongly correlated back to a single holder, for both credentials, if they were issued to the same individual. Matthias in the chat brings up a good point that I'll address as a brief aside: all this communication taking place between Alice, Bob, and the toolbox is taking place over an HTTP connection. Between the toolbox and Alice it's actually a WebSocket connection, but it is taking place over HTTPS, or WebSockets with SSL, and the same between Alice and Bob. So while DIDComm provides the security necessary to ensure that Alice and Bob are the only ones able to read those messages, we still strongly recommend using secure transports — especially when we're talking about the internet, where anything could happen.
And I will not even pretend to say that DIDComm security has been as thoroughly vetted as HTTPS. So again, taking advantage of the transport to augment the security of our system is wise. Okay. Thank you, Matthias — I might be pronouncing your name incorrectly, I apologize, but that was a good point to call out there. So, we have now issued a credential from Alice to Bob. Another thing that bears mentioning is that in Indy AnonCreds systems, there is no indication on any ledger whatsoever that this credential was issued from Alice to Bob. The only records on the ledger are those of the schema and the credential definition. Everything that takes place between Alice and Bob when issuing a credential is a private interaction, which is not publicly auditable in any way — at least via the ledger. You are strongly discouraged from ever putting personally identifying information on a ledger, as Lynn described earlier when he was talking about the transaction author agreement; it is in fact against the transaction author agreement to include that sort of information on the ledger. Another question from the chat: the schema and cred def are synonyms, aren't they? Char did a really good job of explaining this, so I'll try to reuse the same words she used. The schema is a collection of attributes that we anticipate being contained within a credential; it is, however, not bound to any one particular issuer. The credential definition is a separate ledger object which does exactly that — it performs the binding from the attributes to be issued to a specific entity on the ledger. So the two are not technically synonyms: the schema is just a listing of attributes, and the credential definition contains the DID of the issuer who will be using that credential definition, and the public values of the keys used to ultimately issue the credential.
So it's through the cred def, as opposed to the schema, that we're actually able to perform verification later on, using the ledger as the source of truth for those keys. Another question that has come up: how does the requester view the credential issuance if it's not viewable in the ledger? I'm not exactly sure which party you're referring to when you say requester, but I will say: DIDComm, as a peer-to-peer interaction, already establishes that there is a clean and secure communication channel between two parties, so all information is transmitted over that channel. It is then the responsibility of the recipient of the credential to make sure that they correctly store that credential in their wallet for future use. And if the issuer would like to keep track of some data about the credentials it has issued, just for its own purposes, it is permitted to do so in that issuance process. But there's no system or database external to these two parties that has any record of it, since it takes place between the two. Okay, with that, I think we are ready to proceed on to the next step, which is verifying presentations. So I'll turn it back to Char. Great, thanks Daniel. So now that we have issued a credential, we can practice verifying it. Whereas the issuer had to decide which credentials to issue to whom, the verifier must decide who and what to trust. To do that, they will interact with the holder, read from the ledger, and perform cryptographic computations to verify that the attributes presented were issued by that specific issuer. One thing the verifier does not do is interact with the issuer. The verifier is only able to read the information written to the ledger by the issuer, when the holder approves the verifier to do so.
And so, for the sake of simplicity and to reduce the number of windows to navigate on screen, we're having Alice play the role of both the issuer and the verifier in this demo. Roles are not set permanently between parties; the role is unique to the interaction. So this works just fine, and it is also entirely possible that the issuer is the verifier. For example, if an employer issues an employee credential, they might want to verify that credential to identify the employee. I also just want to emphasize that the issuer and the verifier can be different entities who do not interact and cannot gather or exchange information about the holder without the holder's knowledge or consent. But they are able to trust each other, as Scott mentioned at the beginning of this workshop, because they write to and read from the ledger. And so we will return again to this diagram, where we saw the steps for issuing a credential, but now we'll move to the verifier side. First, a presentation must be created. The verifier can request a presentation or the holder can offer one; we're going to focus today on the verifier requesting a presentation. Then, upon receipt of the presentation, the verifier reads the DID, schema, and credential definition from the ledger and verifies the presentation. Again, we will show the flow of messages necessary to verify a credential, and again this is more developer-friendly language than the way people would normally talk about this interaction; we want to show what the message flow looks like. The holder can propose a presentation, which would trigger a presentation request message from the verifier, or if the verifier is initiating, they would start the interaction with the presentation request, and then the message flow is the same regardless of who initiated.
And so if the holder wants to approve this request, they can do so and send a presentation back to the verifier. The verifier must then verify the presentation and send an acknowledgement back to the holder. Once they've sent their acknowledgement, the verifier's side of the interaction is complete; the holder receives the acknowledgement and the presentation interaction is complete. So to verify a credential, we will have the verifier request a presentation from the holder and indicate which attributes from which credentials should be presented. Back in the toolbox, where Alice is now acting as the verifier, we will navigate to the verification tab at the bottom left of Alice's interface, and we will choose to request a presentation. This brings up a window where we can choose the connection we will request a presentation from: the holder, Bob. We can add a name to the presentation and optionally add a comment. We can add in the attributes that we want to verify, and then we can select the credential definition ID under restrictions if we want to say we don't want to verify the value of this attribute from just any credential, but only from the specific credential associated with this credential definition ID. If I go to the bank and need to verify my identity to access an account, the bank would likely accept it if I presented the name on my passport, but not the name on the Dunkin' Donuts rewards card that Scott mentioned earlier. So for this simple example, we will just verify both of the attributes, name and age, from the specific credential definition that we created earlier. We can create that verification request, and now we see that this presentation was requested from Bob.
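Under the hood, a request like the one just created might look roughly like this in the Indy/AnonCreds proof-request shape. The field names follow my recollection of that format, and the cred def ID below is a made-up placeholder.

```python
# Sketch: an Indy/AnonCreds-style proof request asking for "name" and
# "age", with both attributes restricted to the cred def written earlier
# so the verifier won't accept the values from just any credential.

CRED_DEF_ID = "WgWxqztrNooG92RXvxSTWv:3:CL:20:default"  # placeholder value

proof_request = {
    "name": "basic-identity-check",
    "version": "1.0",
    "requested_attributes": {
        "attr1_referent": {
            "name": "name",
            "restrictions": [{"cred_def_id": CRED_DEF_ID}],
        },
        "attr2_referent": {
            "name": "age",
            "restrictions": [{"cred_def_id": CRED_DEF_ID}],
        },
    },
    # No predicates in this simple example; both values are revealed in full.
    "requested_predicates": {},
}
```

Dropping the `restrictions` list would let the holder satisfy the request from any credential that happens to carry those attribute names.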
So now, in Bob's window, we can navigate to presentations and we see that this presentation shows up under his list of presentations. In the same way that the issuer issues credentials from the credential issuance tab, and the holder accesses and holds them in the my credentials tab, on the verification side the verifier uses the verification tab to request verification, and those requests show up for the holder under the presentations tab. Before a live demo of verifying a credential, we'll just talk about a few different types of presentations. We have just shown the simplest example of a presentation, which is called a full disclosure presentation, in which all the attributes are revealed from one verifiable credential. Another type is the multiple credentials presentation, or composite presentation, in which all the attributes are revealed from two or more credentials because no single credential contains all of the required attributes. This might occur when multiple specific identification documents are required, like a passport and a piece of mail addressed to you. Then, selective disclosure is when only certain required attributes are revealed from one or more verifiable credentials. So instead of revealing the entire contents of a credential, you have the option to reveal only a subset of those attributes. A predicate proof presentation is when we reveal only a true or false value. A common example is age: the verifier does not need to know if you are 22 or 60 if it only matters that you are over 21 years of age, so this is a great way of preserving privacy while verifying necessary information. And then lastly, revocation, which has been mentioned a few times, is the ability to revoke a credential, for example if a credential expires or the holder is for whatever reason no longer eligible.
We will not be covering revocation today, but I'll just briefly mention that it is supported by the Indy AnonCreds standard, and significant work has been put into supporting it within ACA-Py as well. So with that, I will pass it back to Daniel for a live demo of verifying a credential. Cool, thank you. Okay, so starting off again with our Alice and Bob agents. As Char just described a moment ago, Alice was our issuer in our original exchange, but we're now going to switch roles and Alice will become a verifier. I've already said all of this, but the role of a particular participant within an Indy AnonCreds system is dynamic: you can be an issuer, a holder, and a verifier all at the same time. It is important to note that the fact that Alice issued the credential we're about to verify is not significant in this exchange. Alice is still using the values written to the ledger to verify the presentation that Bob sent. However, it's also worth calling out that Alice being both issuer and verifier is a perfectly valid use case. For example, if I am an employer and I issue a proof-of-employment credential to my employees, then later, when they come to the employee cafeteria and I want to offer them a discount based on proof of employment, they can present their credential proving such, and I can give them a discount on their sandwich or whatever. The fact that I as an employer was both the issuer and the verifier was not instrumental in that exchange. It actually helps with decoupling systems within an organization: I don't have to know what software issued that credential, I just need to know how to verify it later, using the same decentralized root of trust in the form of the Hyperledger Indy distributed ledger.
So with that discussion out of the way, and reiterating some of what Char said already, we'll now go to the verification section on Alice and request a presentation from one of our connections. Let's go with Bob. We'll put some name on this, I'll just say test for now since I've got only a very simple dummy credential to work with at the moment, and you can optionally add a comment. Now we'll ask Bob to reveal some attributes about himself: name and age. I'm cherry-picking these, of course, because I know Bob has those attributes in his wallet, issued as a verifiable credential. So now I'll go ahead and hit confirm. And in just a moment, we see the presentation in its final completed state, verified, with the verification coming back as true. We can also see the attributes that were revealed as part of that verification process, which in this case were the name of Bob and the age of 42. I think this is pretty obvious, but just in case it isn't: the verified value of true is a cryptographic verification. It does not necessarily mean that these values meet some other business-logic-determined threshold or requirement about the verified attributes. In a layer above ACA-Py, we would take this verified status of true, combine it with the revealed attributes, and determine whether we want to proceed in whatever other business process we were in the middle of engaging in with this connection. You can also explore in a little bit more detail what's going on under the covers. So we see the presentation request. Let me pull up this diagram for a refresher of the process that took place between the two. Alice was the verifier over here, Bob was the holder. We began with a presentation request, as opposed to beginning with a presentation proposal from the holder.
So the request was sent to Bob, and Bob, whose agent is currently configured to automatically approve any presentation request, approved it automatically and sent the presentation. The verifier then verified it and sent an acknowledgement back to Bob, entering the complete state for both parties. In a typical exchange, the approval would be a decision made based on a business process or a business rules automation system. It might also be defined by a machine-readable governance framework: which verifiers we're willing to present what information to might be something that is well defined within your legal jurisdiction. Or it can just be left up to the user directly: this connection would like to see these attributes about you, are you okay with that? The user can also have the ability to select which credentials to supply the attributes from, if there are multiple credentials in their wallet fulfilling the same attributes. So after deciding which credentials to use in that process, the holder sends the presentation and the verifier checks it. We should be able to see evidence of each of those steps in this object. We see our presentation request. We see our requested attributes here, name and age. We could have supplied some restrictions: you can say that it must be from a certain credential definition ID, or a certain schema, or a certain issuer; restricting by issuer is more or less the same as restricting by credential definition ID, same difference. We can also see that we did not request any predicates. We see some ACA-Py-specific things. We also see the actual message that was sent as the presentation request, so this is the format that it takes. Not quite as it went over the wire, since it would be encrypted, but as delivered from Alice to Bob at least. It is a present-proof 1.0 request-presentation message with an attached request-presentations object.
And this attachment is a generic way of passing credential-system-specific data from verifier to holder in order to communicate the credential-system pieces. In the case of Indy AnonCreds, that is of course the requested attributes and requested predicates; a W3C credential system wouldn't look quite the same there. Then we can also see the presentation that was sent back from Bob. We see the raw value of the revealed attribute as well as the encoded value; the encoded value is what was actually used in the cryptographic verification process. And the age with its encoded value: since it was an integer value already, it remained an integer value. There are also opportunities for doing self-attested attributes, and ACA-Py supports predicates as well. We also have some unique identifiers here. This is not the complete presentation: there are incredibly large arrays of numbers that are actually omitted from this data structure, so it is not perfectly representative of the data sent between the two, for the sake of readability at least. The cryptographic proof values are what's missing from this data structure, which again are pretty large sets of numbers, with attributes associated with them to determine how they should be used in the cryptographic verification process. To briefly call out a question that was answered in chat already: which cryptographic solution is being used? Indy AnonCreds takes advantage of, let's see if I can get the name right, the Camenisch-Lysyanskaya signature scheme; I forgot the L already, sorry, whoever L was, who came up with that signature scheme. There are more details on the specifics of that CL signature suite in the handout, if digging into that crypto is your cup of tea. There are also more details on the Indy AnonCreds specification at large within the handout.
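The raw-versus-encoded values mentioned above follow a simple convention, as I understand it from the Aries RFCs: values that are already 32-bit integers pass through unchanged, and everything else becomes the SHA-256 hash of its string form, rendered as a decimal integer. Individual implementations may vary, so treat this as a sketch of the idea.

```python
# Sketch of the commonly used AnonCreds attribute-encoding convention:
# the cryptographic math operates on integers, so non-integer raw values
# are mapped to large integers via a hash.
import hashlib

I32_BOUND = 2**31

def encode_attribute(raw) -> str:
    if isinstance(raw, int) and -I32_BOUND <= raw < I32_BOUND:
        return str(raw)  # integers are used directly, as with "age" above
    digest = hashlib.sha256(str(raw).encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))

encoded_age = encode_attribute(42)     # stays "42"
encoded_name = encode_attribute("Bob") # a large decimal hash value
```

This is why the verifier can check the encoded value cryptographically while the raw value is shown alongside it for human consumption.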
There is an ongoing push for the ability to use different signatures within AnonCreds. That's pretty early on; it's just a twinkle in somebody's eye at this point, honestly. But we anticipate that BBS+ signatures should be able to support the same level of functionality that CL signatures are supporting. And thanks to the person who put the full definition of CL in the chat. Okay, I see a question here: how did Bob confirm the presentation request? Yes, I should call that out. I described the process where Bob would approve which attributes were revealed to the requester. For the sake of simplicity within the toolbox and within this demonstration, that part has been automated out, so it just took place in the background without any intervention on our part. But within the admin protocols used by the toolbox, there is actually a mechanism for determining which credentials we would like to use and sending that on to the plugin; the toolbox itself is just assisting in and automating that process. I believe we can even check our message list. I apologize for the background noise; there's yard work going on right outside my window, so hopefully that's not picking up too clearly on my mic. Someone says they can't really hear it, good. I might have clicked away too frequently for it to be visible here, but on receipt of a presentation request, the toolbox plugin will send a message to the toolbox saying a presentation request came in, it's requesting these attributes, these are the credentials that would fulfill those attributes, and it gives the user the opportunity to select which ones. But as I said, the toolbox does not currently have a UI to expose that functionality. Cool. Now I'd like to step through a couple more interesting scenarios when it comes to credentials and presentations. We've issued this really basic credential, if I go to credential issuance and look at it there, with just a name and an age, to Bob, with age 42.
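Before moving on to more credentials, here is a small sketch of the business-logic layer described earlier, the one that sits above the agent and consumes a verification result. The shape of the result dict here is simplified and hypothetical; the point is only that the agent answers "cryptographically verified?" while the application decides what the revealed values mean.

```python
# Sketch: application-level policy layered above the agent's verified flag.

def admit(presentation_result) -> bool:
    if not presentation_result["verified"]:
        return False  # failed crypto check: never trust the revealed values
    revealed = presentation_result["revealed_attrs"]
    # Example policy (our assumption, not the agent's): must have a name
    # and be at least 18.
    return bool(revealed.get("name")) and revealed.get("age", 0) >= 18

result = {"verified": True, "revealed_attrs": {"name": "Bob", "age": 42}}
```

The verified flag gates everything; only after it passes does the policy on the revealed attributes even run.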
So I'm going to add another schema and credential definition to the mix here. Going off of my employee cafeteria example from before, I'm going to go with an employment credential. While I'm typing this out, I'll also call out that schemas are separate objects from credential definitions for more than one reason, but one of those is reusability of schemas. Within a governance framework, it might already be well defined what attributes we expect to be presented in certain contexts, so a machine-readable governance framework might include the schemas from which we are willing to accept credentials and presentations. It's really noisy, I'm sorry. So while I'm coming up with a new schema for employment here, it is certainly possible, and maybe even likely, that within a certain legal jurisdiction there's already a well-defined schema, and instead of creating a new one I should go and reuse one that's already available on the ledger. So I'm going to create one. I won't include name, just for the sake of demonstration, but I'll say date of hire, and add another attribute and say access level, I guess, in case they've got a specific security clearance. What else should I include on here? I'll keep it simple and keep it to those two attributes for now. So go ahead and create that schema, and then create a new credential definition from that schema. Again this takes a hot second as it performs the key generation in the background, and it also writes to the ledger, which is not an instantaneous process. And there's our credential definition, and we see the attributes date of hire and access level. So now I'll go ahead and issue the credential to Bob using that employment cred def ID. This form is automatically populated with the attributes that we need to supply as part of that credential definition. I'll add a comment here of proof of employment for the heck of it, and say date of hire is, I don't know, 2020 January 1. Good question in the chat.
Can you edit schemas, or just create new ones with another version? The answer that came back in chat is correct: you would just create a new schema with a new version. The old schema will remain on the ledger and remain in use, and it becomes a business policy decision, or a governance decision, as to whether you should be using one versus the other. Continuing with my credential here, I'll say access level is basic. And our credential is now issued to Bob, at least from our perspective on Alice's side, and we see the attributes that were issued: date of hire, and access level of basic. If I go to my credentials on Bob, I see the new credential has been received from Alice, with those same attributes. I realize now I didn't explore previously what values are returned in this JSON structure, so that might be of interest: in the raw credential we see the schema ID and cred def ID, we see the values with their encoded representations, and we see the signature that was used to sign those attributes. I honestly couldn't tell you exactly what all of these different fields are; that's above my head, but they are pretty well defined in the AnonCreds specification. And then there's a signature correctness proof as well. Let's see if there's anything else interesting to look at in this data structure. We've got the credential request metadata, which has a master secret associated with it. That's an interesting one. So, I alluded to the concept of private holder binding previously. In the AnonCreds credentialing system, there is a master secret associated with a holder of credentials. That master secret is blinded and presented to the issuer of the credential in order to issue a credential specifically bound to the owner of that secret. And this is what we use in order to say that this credential over here and that credential over there were issued to the same holder. This will actually be crucial in the next steps of our verification process.
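To give a feel for what "blinded" means here, below is a toy Pedersen-style commitment in the spirit of how a link secret can be blinded. This is emphatically not the actual CL-signature math AnonCreds uses, and the parameters are tiny, fixed, and insecure; it only illustrates the idea that the issuer sees a commitment to the secret, never the secret itself.

```python
# Toy illustration only: blinding a secret behind a commitment.
import secrets

P = 2**127 - 1          # a prime modulus (toy-sized parameter)
G, H = 3, 7             # two fixed "generators" (toy values, not secure)

def commit(secret: int, r: int) -> int:
    # The commitment mixes the secret with a random blinding factor r,
    # so the value reveals nothing about the secret on its own.
    return (pow(G, secret, P) * pow(H, r, P)) % P

def blind(master_secret: int) -> tuple[int, int]:
    r = secrets.randbelow(P - 1) + 1   # fresh blinding factor each time
    return commit(master_secret, r), r

c1, _ = blind(123456789)
c2, _ = blind(123456789)
# Same secret, different blinding factors, so even two blindings of the
# same secret look unrelated to anyone who doesn't hold r.
```

In the real system, the issuer signs over such a blinded value, which is what binds the credential to whoever holds the secret without the issuer ever learning it.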
As a result, we now have multiple credentials to present attributes from. So if I go to verification on Alice, I'm going to create a new presentation request for Bob and say this is the proof for the employee discount in the cafeteria. I won't add a comment, I'll just leave it as simple as that, and then I'll say: give me your name, and give me your date of hire, and the date of hire must be from this specific employment credential that I issued earlier. I can add that restriction; otherwise it would be up to the holder of the credentials, if they had another credential with a date of hire in it, to swap it out for that one, so we want to add that restriction here. Then I hit confirm, and we see the proof pop up and we get the check mark, meaning that it verified as true and completed successfully. So we're able to see that Bob has a date of hire of January 1, 2021, or 2020 rather, and his name is Bob. So we're good to go ahead and give him that employee discount for a sandwich. We just used two different credentials and, as I was talking about, we have strong guarantees that those attributes were issued to the same individual. They came from different credentials, but we've got that cryptographic assurance taken care of. And I can view the same presentation from the other side as well. So that's one interesting capability of the AnonCreds system. It has been discussed in the W3C JSON-LD credentials context, but has not yet come to fruition in any implementation that I'm aware of. BBS+ as a signature suite supports this sort of functionality, so in theory, using BBS+ signatures and the W3C credential format, you could accomplish a similar outcome; but Indy AnonCreds has it all built in and is quite mature in its implementation. Let's do one more verification, or at least one more, maybe I'll do another one after this, we'll see. Now we're going to demonstrate predicate proofs.
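In proof-request terms, a predicate request looks roughly like the sketch below: no attribute value is revealed at all, only the truth of a comparison. As before, the field names follow my recollection of the Indy proof-request format and the cred def ID is a placeholder.

```python
# Sketch: an Indy-style proof request for a zero-knowledge "over 21"
# predicate, restricted to a trusted credential definition.

GOV_ID_CRED_DEF_ID = "WgWxqztrNooG92RXvxSTWv:3:CL:20:default"  # placeholder

predicate_request = {
    "name": "age-restricted-purchase",
    "version": "1.0",
    # Nothing revealed directly...
    "requested_attributes": {},
    # ...only proof that age >= 21 holds for the restricted credential.
    "requested_predicates": {
        "over_21": {
            "name": "age",
            "p_type": ">=",
            "p_value": 21,
            "restrictions": [{"cred_def_id": GOV_ID_CRED_DEF_ID}],
        }
    },
}
```

The verifier learns that the predicate held, and from which cred def the underlying value came, but never the age itself.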
So this time, and Sam has clarified in the chat that BBS+ allows for selective attribute disclosure, but does not have built into it the ability to do predicate proofs using zero-knowledge proofs, or blinded holder binding. I'm not sure if that's entirely accurate or not, Sam, but I'll defer to you on that one and say that you're probably right. Okay, so we're going to request a predicate proof from Bob: that his age is greater than or equal to 21, so that Bob can buy some age-restricted product. I like that terminology that Scott was using earlier, so I'm going to steal it. And we're going to require that it be from this specific credential, as a stand-in for it being perhaps a government-issued ID, say a driver's license, or some other credential from an issuer we're willing to trust to have verified that age accurately. We don't want it to be just any credential, as Char was pointing out. We don't really care, honestly, if Dunkin' Donuts thinks you're above the age of 21; we want to know that the DMV or the government has gone through the vetting process of making sure you're above that age. So I'll hit confirm on that request. And we see that it has also verified as true. We see the predicate of age greater than or equal to 21, so we are comfortable going ahead with saying Bob is able to purchase this product or whatever. There were a couple of questions that came up in the chat: blinded holder binding, and what is meant by that? Blinded holder binding is the process I just described in the Indy AnonCreds system, where there is a secret which is used to help correlate that two credentials were issued to the same individual. The blinded aspect of it is that the secret is, of course, never given out in its unblinded form to any other party, because it's a secret. It just enables the ability to determine that two credentials were issued to the same individual.
I've repeated those same words multiple times, so if that didn't come out any more clearly, feel free to ask more clarifying questions in the chat and I'll give somebody else an opportunity to do a better job. If Alice weren't the issuer, could the restrictions still show the credential definitions? That's a good question about the UI. If Alice was not the original issuer of the credential, the UI is currently structured such that if we want to impose one of those restrictions, we need to already know the cred def ID. In the credentials tab, and this is something we haven't used yet here, there is a mechanism for retrieving, that's a schema, I need to go to this other one, under my credentials; apologies for the bait and switch there. We have the ability to retrieve a credential definition here. Probably not ideal placement in terms of UI. But if I go to IndyScan, for example, and I grab the most recent schema, or rather, let's see, yeah, employment, that's the one I just issued, let's go with membership instead. I go in here, take the cred def ID, paste it in there, and hit retrieve. I haven't done this process recently, so I hope it works. It did, okay, good to know. So we see that it has the level and date attributes. And then within the verification tab we can, oops, when requesting an attribute, use that cred def ID. So the process I just went through is more or less a manual discovery process for credential definition IDs that are already out there. I think it's more common that instead we would see those included in a governance framework, as well-known values for a given credential, such as a driver's license credential from the government having jurisdiction in whatever area you're in. So yeah, there are better systems than just using IndyScan, but IndyScan is a good analog for manually retrieving that data and using it in the toolbox.
I believe that about concludes all of the hands-on portion that we had planned for today. Unless there are any more questions that people would like answered, I'll leave just a minute or two to see if anything comes in in the chat. And thank you, Sam, for the further clarification on blinded holder binding. Okay, a couple of questions. One is the unfortunate scenario of Rocket.Chat not working and not being able to establish connections between agents. Hopefully Rocket.Chat works eventually, but you're also welcome to contact us at Indicio if you'd like some assistance working through these steps; we can provide some basic support at least and help you get through the demo on your own. Another question that came in: is the mediation part working in the toolbox? That's an excellent question. Mediation was added to the toolbox at the same time as it was being developed for the Aries Cloud Agent Python (ACA-Py) code base. These two sections allow us to request mediation from one of our connections, from both the toolbox's and the connected agent's perspective. So using this tab, I could request mediation from Alice, and if Alice were a mediator agent, then my toolbox instance would start using Alice as the mediator for its connections. I'll be honest and say that I haven't tested this recently, so there might be some minor issues to resolve there, but last I checked it was working. The routing section here then allows us, from a mediation client's perspective, to view which of our connections have mediation associated with them and which of our keys we have sent off to a mediator for routing purposes. Oh, this is where we would actually request mediation from a connection; I misspoke earlier, apologies for that. Mediation is a pretty detailed and nuanced interaction as well. We at Indicio have done trainings on mediation in the past, and we might have done a couple of meetup events that have recordings out there, I'm not sure.
I'll go back and see if there's anything that we can share there. There's also some documentation on mediation within the Aries Cloud Agent Python repository, where you can read up a little more on which capabilities are present and which are not. Let's see, next one down: does the transaction ID equal the credential definition ID? Let's go back to IndyScan and take a look here. So in this case, I'm not 100% certain on this one. Lynn, do you think you could step in and help clarify? Is the transaction ID the same as the cred def ID, is that universally the case, or is it possible for these two values to differ? That is a good question. I am not 100% sure, but that's what it looks like to me right now; I can look that up and get back to you. So yeah, that's my understanding as well. The transaction and the credential definition itself are closely tied as identifiers for that transaction. So that appears to be the case, but I can't say I've got that knowledge in the forefront of my mind to say for sure. Let's see, other questions: can you give us some info about the mediator? I might come back to that; I did allude to some more resources there, and hopefully that is a good place to get more info. Can you walk through the parameters in IndyScan, please? I'm again not the expert on IndyScan, so I'll let Lynn correct me at any point here, but these filters at the top correspond to the transaction types that are possible on an Indy network. NYM corresponds to the writing of a DID to the Indy network. NYM is kind of an antiquated term; originally it was short for verinym. Hyperledger Indy, as a decentralized identity solution that actually predates the W3C DID specification, uses some different terminology in a few places. So a NYM is a DID write, and ATTRIB transactions are more or less arbitrary metadata that you can associate with a NYM.
That was a precursor to the DID Document, again given the historical background there, as it predated the DID spec. SCHEMA of course corresponds to schema transactions. CLAIM_DEF, again a bit of historical context here: credentials used to be referred to as claims. That term was even persistent in the W3C space for a while before they switched to the term credential. The revocation registry definition is the definition of a revocation registry; this contains the remote URL for the tails server that is responsible for hosting tails files for recipients of revocable credentials. This is another can of worms that we don't really have time to get into today, so I'll leave it there for now. A revocation registry entry, which I believe I referred to as a delta in a chat message previously, is an update to a tails file that is obtainable from a remote tails server. By taking the original tails file and replaying all the revocation registry entries, we get an updated list of which credential identifiers are within the set and whether they've been revoked or not. That gets into the weeds a little bit on revocation; some more details can be found via links in the handout. SET_CONTEXT I'm actually not sure about, so I couldn't tell you. I'm not sure what that one is either, but I will add a little bit here. Over the course of our training today, the updates to IndyScan have been as much as 15 minutes behind, and the reason for that is that IndyScan has some settings that keep it from causing a lot of traffic on the ledger; it just pulls a couple of transactions at a time. So it's been regularly updating throughout the course of people adding a bunch of transactions, but it does get behind. I think it's about four minutes behind right now; it gradually updates itself, just a few transactions at a time.
So apologies for it not being so quick, but that's because we don't want it to have an impact on the response time of the ledger. Cool, thanks Lynn. There's been a little bit of back and forth in the chat; I think Sam's done a good job of addressing that one, and again I think we're at a point where we should just be linking to some more resources for you as we wrap up here. On the topic of mediators, Indicio does run a public mediator that you can use for development purposes, as a reduced barrier to entry for developing a mobile agent. I will obtain a link to that and drop it in the chat while we proceed on to our next section. So I'll turn it back to Char, who's going to wrap us up. And then I think it'll go to Sam, or maybe I'm going directly to Sam, one of the two. I apologize, I jumped the queue. I think you're good to go, go ahead. Okay. For the last section here we want to talk about community engagement: how to get involved in the community, to both receive help and contribute effort to the things that are available. That's what I'm talking about in this final segment, and I realize that we're within 20 minutes of the end of the training, so I will try to make this useful and brief. I'm talking about this because I've got a bunch of experience doing it. Daniel and I were both around before Aries was Aries, and so we've been involved there. I'm still a co-chair of the Hyperledger Aries working group within Hyperledger. I'm also the co-chair of the DIF DIDComm working group, and I'll talk about that in a minute. This right here is the avatar that you'll see in all the places, and I'm TelegramSam on most of them. So if you see me on GitHub, or on Twitter, or somewhere else, please reach out.
I'm an architect for Indicio, so I'm involved in the community, and I'm involved with lots of customers, helping guide them to appropriately apply the solutions that this space offers. The reasons to engage: there are a lot of reasons, and I'm not going to read the slide here, but the point I want to make is that there are various levels of engagement. You can jump in to help, or you can just jump in. There are lots of people who join the community to kind of lurk and listen, and sometimes it's a long time, or never, before they mention something in a meeting or comment on an issue, and that's okay. Whatever level of engagement you're comfortable with is the level that will work best for you, but engaging with the community is a great way to learn, to help, and to support. There are a handful of different organizations that relate to Aries. The most obvious one, of course, is the Hyperledger Foundation; that's where the Aries project lives. We've also talked a lot about Indy, and we've used it in our examples today. Aries works with lots of ledgers and strives to work with even more; Indy is one of them, and it's historically what Aries was born out of. The other project to mention here is Hyperledger Ursa, which also came out of Indy. It's a cryptography library that both Aries and Indy use, but it has been spun out and is available as a separate project. Hyperledger has been a fantastic place for these projects to live and has been amazing in its support of the efforts that have gone into them, so we're very grateful there. Another organization to be aware of is the Decentralized Identity Foundation (DIF). They support a number of efforts; the DIDComm working group is the one I am the most involved with. In the Aries community we developed what came to be known as DIDComm v1, and there was enough interest from outside the Aries project to take it further.
There was a lot of interest from other folks who wanted to be engaged with DIDComm, so we have been working over there for the last couple of years on the next version, DIDComm v2, and we expect this to be a great year for the progress and finalization of that spec. It's very similar, but with some important changes that both improve the protocol generally and make it more usable by a larger community. There are other working groups there: an Identifiers and Discovery working group, and a Claims and Credentials working group; those all sound like very similar things. Sidetree is a technology that was organized there as well. All of this can be found on the Decentralized Identity Foundation's site, where you can find additional information. These working groups all have regular meetings, slides, and all sorts of material like that. The Decentralized Identity Foundation does have a different membership model than Hyperledger, but for small companies or individuals there's a streamlined process, which makes it easy to get involved. Another useful organization in the space is the Trust over IP Foundation; here is their logo and their URL. They've done some really cool things. They focus on governance, and one of the projects they've been involved with is the Good Health Pass initiative, linked here (and we'll have these slides available later). They have a really cool layered model of the trust stack, which showed up in an earlier slide at the very beginning of our training today. That's worth exploring as well; they do a bunch of good work in that area. The W3C Credentials Community Group. Can you see my slides? Sorry. Yep. Okay. So the W3C CCG is what it's commonly called. Oh, we've got some folks who can see them and some folks who can't. I apologize; I don't know what Zoom is doing. I hope we can find a resolution to that. I can see.
Okay, the CCG has been involved in two important things: one is the W3C Verifiable Credentials format, and the other is the DID Core spec that we, of course, rely on. So the CCG is available there as well. These are the communities that you may want to get involved with, or that relate peripherally to the Aries work, and where you can find them. When you engage with a community, much of the work is done in working groups. Working groups typically communicate via some sort of chat, like Slack or Rocket.Chat; there's always an email list; and there are regular working group calls that you can join, or whose recordings you can listen to. This is very true of Hyperledger and also true of most other organizations. There's an antitrust policy and a code of conduct. I don't mean to imply that they only apply to the calls; they apply to all of the interactions of the working group. The groups are typically formed of people representing companies, though sometimes individuals, who are in competition with each other yet find it worthwhile to engage, standardize, and work together on some aspects of their business; we call that coopetition. So here are the basic rules: we don't discuss business deals, we don't discuss any sort of pricing on those calls, and we focus on the technology projects themselves. The code of conduct is also very important. Bill and Ted said it really well: we need to be excellent to each other. The code of conduct also contains policies and procedures to resolve any issues that do come up, and I'll have more to say about that in a minute. Meetings are run in very common ways: they almost always have agendas, and, having been the chair of several of these working groups, I can say that proposed topics are almost always welcome.
Even if they can't be put on the schedule for the next week, they'll almost always be accepted. Meetings have different ways of volunteering to speak: sometimes the Zoom raise-hand feature or just speaking up works. With larger meetings you sometimes need more formal interactions, so they use the chat as a formal queue and you can get in line to speak. The chat is also an easy way to ask a question or make a comment during the live meeting if you aren't sure how to ask permission to speak up. The calendars are usually published by the organizations that host them: Hyperledger has a calendar, for example, and DIF also has one. They're usually found in the repositories associated with the working groups, in associated wikis, or on the agenda, and a little bit of googling turns these up pretty fast. The meetings are often recorded, which is super handy when you can't be there. It's also really handy to watch them in a player; VLC is one that will usually play the format they're recorded in, and you can watch faster than real time, which helps when you need to catch up on meetings. Recordings are usually posted; if you don't see where they are, go ahead and ask, and someone will help you find them. GitHub repositories are very commonly involved, whether they contain code or other sorts of documents. There's almost always a README that contains useful information. It's also worth being familiar with the issues in the repository and the active pull requests, and you can of course browse the commit history, because it is a GitHub repository, to see what kinds of activity are happening and who in the community is involved. You can contribute your code.
There are a couple of quick tips here: you need to mind the code license in the repository, making sure that you understand it and that you are able to commit under it. You typically work on a fork of the repository and then submit a pull request. And the Developer Certificate of Origin (DCO), which you can google for specifics, is commonly required for these repositories; it helps with some legal issues surrounding the code. This is important, and I want to spend a minute on it. We need you in the community. We realize that everyone is busy, but if there's time, or this lines up with the goals of your employer or your personal goals, we would welcome your contribution. I especially invite you if everyone in the meeting you join seems to come from a different background than you do: we need your perspective, we need your experience, and we need you to help us understand things better and produce a better project. Please be persistent with your contributions. Sometimes there are rules that are a little bit frustrating, about different types of reviews that are required or other things, but please be persistent. The general request here is that we're all human. The focus is on software, but we're all humans making it happen; we all have different stresses in life, and not everyone is in a perfect place all the time. If someone is a little bit short with you, or a little bit impatient, then please be a little bit patient with them and the stresses they have. But here's the big "but": don't put up with bad behavior. If there are things that are unacceptable, particularly under the community's code of conduct, that's not the type of thing we ought to put up with. If there are things happening in meetings, or comments being made, or other things that make you feel unwelcome:
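In practice, the DCO requirement mentioned above usually means each commit message must end with a `Signed-off-by` trailer, which `git commit -s` appends automatically; CI bots then check for it on every commit in a pull request. Here is a minimal sketch of such a check; the function name and the exact regular expression are assumptions for illustration, not the logic of any particular DCO bot.

```python
import re

# "git commit -s" appends a trailer of the form:
#   Signed-off-by: Jane Developer <jane@example.com>
# A DCO check simply verifies that each commit message carries one.
SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <[^<>@\s]+@[^<>@\s]+>$", re.MULTILINE)

def has_signoff(commit_message: str) -> bool:
    """Return True if the commit message contains a DCO sign-off trailer."""
    return bool(SIGNOFF_RE.search(commit_message))

msg = (
    "Fix tails file URL parsing\n\n"
    "Signed-off-by: Jane Developer <jane@example.com>\n"
)
# has_signoff(msg) is True; a message without the trailer yields False.
```

If you forget the sign-off, `git commit --amend -s` (or an interactive rebase for older commits) adds it after the fact.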
Please, I beg you, please engage and try to help us solve those issues before you leave; obviously this is fully voluntary. These sorts of issues seem to be relatively rare, and I'm glad for that, but I want to make sure that any time something does come up, we improve our community rather than drive people away. That's what we do. We hope you will find the time, professionally or personally, to engage in the projects that are important to you. The community happens because of volunteers; that's what moves the work forward, and that's how we have great things like Aries as a project, Indy as a project, and all these other efforts happening in various organizations. I hope that came across as a quick guideline and a plea to be involved. We are grateful that you're interested in this training at all, and that we're here. Again, this is me; this is a slightly better, updated slide here. If I can help you with anything as it relates to the community or anything we've talked about today, please do reach out. I have an email address at Indicio, or you can reach out on any of the other platforms I'm engaged in. The same goes for all of the Indicio team: if there's a way we can help point you to resources, help you get past something, or tell you who to ask, please do reach out, and we would love to help you. We've heard from a bunch of really knowledgeable people today: Daniel and Char took a huge chunk of the technical presentation, Lynn did a fantastic job with the networks, and Scott gave the introduction, the terms, the overview, and the orientation to the concepts, and we are all available. James has shared his email in the chat, and you can reach all of us. We're glad you came, and we hope that you learned things today and that this was useful.
We're glad that you are here.