Somebody say something? I was just going to say thanks to everybody who's dialing in and joining. We're going to wait a couple more minutes to start, but if people have questions now, feel free to ask. All right, sounds like it's working. Great. Awesome, thanks for doing that, Sean. People are probably familiar with this, but there are a few different ways to ask questions: you can come off mute, you can use the raise hand feature, and there's chat. We are encouraging you to use the Indy Rocket.Chat channel instead of the Zoom chat, and that's because after this workshop is over, that Indy channel is the place where you can go to stay in touch with the community and ask questions. The Zoom chat will go away when the workshop is over, so we do encourage you to use the Indy channel. All right, I'm thinking we should maybe wait another minute or two as some people come in at the last moment. Yeah, I'll go ahead and hit record, Sean and David, but maybe in a minute or two we can go ahead and kick off. How does that sound? Confirmed, that sounds good. Looks like we have a really good group coming in. This is exciting. All right, I think we're ready to go. First, Sean or David, do you have any words? If not, we're good to go. No, I'm set. Yeah, feel free to kick off. Okay, awesome.

Well, hello everyone, and welcome to today's workshop, a Hyperledger Indy deep dive. My name is James Schulte, and I'm Vice President of Business Development here at Indicio, and we're very excited to host today's session in partnership with the Hyperledger Foundation, so a big thank you to the Hyperledger team. The work that the open source community is doing with Hyperledger Indy is playing an important role in the transition from traditional identity management systems toward decentralized identity models. Hyperledger Indy has become a major force in driving the adoption of decentralized identity, often called self-sovereign identity or SSI. The Hyperledger Indy project and its associated code bases are being used for various production-level decentralized identity solutions around the world, tackling complex business challenges. Hyperledger Indy is a distributed ledger protocol purpose-built for decentralized identity applications. It's comprised of tools, libraries, and reusable components for creating and using independent digital identities rooted on blockchains or other distributed ledgers, so that they're interoperable across domains and applications. And today we're lucky to be joined by some of the world's leading experts, as well as contributors to the Hyperledger Indy project. They'll be giving you an in-depth look into these tools and libraries, and they'll show you how this technology works and what you can do with it. Some of our objectives today include advancing your skills related to coding with Hyperledger Indy Node, familiarizing yourself with the cryptographic libraries contained in Hyperledger Ursa, getting an overview of Plenum code, and exploring the various tools and libraries that support Indy. And above all, we want you to be ready to build your own solutions and, hopefully, contribute to the Hyperledger Indy project going forward. During the workshop today, if you have any questions or need help, we ask that you take those questions to the #indy channel in the Hyperledger Rocket.Chat, and we will be periodically dropping a link to that in the Zoom chat here.
In the prerequisites that you were provided when you registered for this course, you should have gotten instructions on joining the Indy channel. If you haven't done that yet, no worries, go ahead and use that link and you should be able to get on pretty quickly. You'll need a Hyperledger, or sorry, Linux Foundation ID, but it's a fairly quick process. The reason we ask you to direct your questions to the Rocket.Chat channel is so that the questions and answers given today can act as a resource long after this workshop. If for some reason you aren't able to join the Indy channel, feel free to drop your questions in Zoom and we'll do our best to get to them, but we have members of the Indicio team who will be monitoring mostly the Indy channel, so we'll handle that as best we can. Provided on screen here, we have a link to the Rocket.Chat channel as well as a handout that will be used during the hands-on portion of the workshop today. Feel free to use these QR codes to access those, feel free to screenshot, and we will also be dropping a link to the handout in the chat. As mentioned, prior to today's workshop you should have been provided a set of prerequisites. If you haven't done those, no worries, we're still happy to have you; feel free to follow along and observe today. A recording of today's workshop, along with the instructional handout and associated links, will be posted on the Hyperledger Foundation's workshop page to be viewed at your convenience.

And so for today's agenda: we will first hear from Sam Curren, who will talk about community efforts and how to get engaged and contribute. We will then get an overview of the Indy stack from Adam Burdett, after which he will discuss the Ursa cryptographic library. Daniel and Lynn will then walk us through the Indy SDK. After that we'll have a five-minute intermission, so stand up, get a drink, walk around. Then we'll come back and Lynn Bendixsen will provide instruction on network management, and then Adam will give an introduction to Plenum code. We'll have another five-minute break, and then Adam will lead a hands-on portion on Indy Node code. And then we will conclude with Sam and Daniel discussing the future of Hyperledger Indy. So again, we're very excited to have you here today; we hope that this is a very valuable time for you and helps you in all the work that you're doing. And with that, I will go ahead and hand off to Sam Curren to discuss community.

Thanks, James. We're glad that everyone is here. I'm going to take a couple of minutes and talk about engaging with the community. This may be old hat and very familiar to many of you, but if it's helpful to some, then it's worth covering to help you be familiar with how to engage with the community and get involved. I'm Sam Curren. I'm an architect at Indicio. I am TelegramSam on all the things, and I'm a co-chair of two different working groups: within the open source community here at Hyperledger, I'm a co-chair of the Aries working group, and at the Decentralized Identity Foundation I'm a co-chair of the DIDComm working group there. So, a bunch of experience engaging with the community and organizing and all sorts of things, and I hope that what I share here is useful to you. So, why engage with the community? The community is made by the people who join it.
We need you, both for your expertise with the project, but also for your life and circumstances and the insights that you can provide. This is not just for developers. We love developers, of course, but there's often work in user experience and documentation and project management and other types of things that most open source projects can benefit from. So if you're not a developer, you're still very much wanted and needed.

There are three main projects here that we work with at the Hyperledger Foundation. We're here today, of course, to talk about Indy. And out of Indy, two separate projects were born to make things a little more general and approachable. One is, of course, Hyperledger Aries, the agent side of things, as opposed to the ledger side covered by Indy; the other is Hyperledger Ursa, a cryptographic library that contains lots of the cryptographic primitives useful for all sorts of this stuff. So those are the groups that are here. There are other groups besides Hyperledger that are involved, and we wanted to highlight a few of those and what they do. The Decentralized Identity Foundation does work here: the DIDComm working group is there for DIDComm v2, the Interoperability working group often has really engaging discussions about various topics, and there are other groups as well, working on identifiers and discovery, claims, et cetera. So a good group over there. Trust over IP is another such group, and I want to draw some attention to the Utility Foundry working group, of which our own Lynn Bendixsen is a co-chair. They do a lot of good stuff. They also have a really cool layer model that they've developed that helps explain how the different pieces fit in different places, so they do a bunch of good work over there. The W3C Credentials Community Group has worked on the DID core spec and also the W3C verifiable credential format, and there's ongoing work in those efforts. So that's a quick survey of the various types of groups that get involved in the open source community.

I want to talk a little bit about how to get involved, to help you be a little familiar if you want to dive in and join any of these working groups. They're all a little different, but the stuff that I'm covering today should apply generally to most of them. Working groups generally communicate via chat, which is very common today. The mailing lists are still active and carry some really engaging conversations, and then most working groups have a regular call of some kind that you can either join or listen to later. Calls are generally where lots of active discussion happens; sometimes it's faster to hash things out on a call than via an email exchange or chat. Particularly in Hyperledger, but in most groups, there's a handful of policies that apply. Hyperledger has an antitrust policy as well as a code of conduct. These things shouldn't be surprises, but abiding by them helps to keep the work that the group does free of issues, whether that's issues that make someone feel unwelcome or discourage them from participating, or any legal issues that may come up. So please be aware of the policies; lots of groups call them out or provide links to them, so you can be familiar with them.
I'll talk a little bit more about this later, but if there are any issues with these, or something in the policies is not being followed, then that's worth bringing up and trying to find a resolution to. Calls have meeting agendas, and topics are almost always welcome. Sometimes the next few calls are already scheduled in advance or there are invited speakers, but if you want to talk about something or present on something, then that's almost always welcome. Different calls have different forms of interaction. If you want to say something in the call, sometimes you can just unmute and say it, typically in a smaller meeting. Larger meetings often use a more formal mechanism, like the raise-hand feature in Zoom, or they'll have a queue that they manage within the chat. If you speak up and ask, "Hey, how do I get in the queue?", someone will always clue you in to the mode used in a particular meeting. Chat is almost always active, and it's a great way to ask a question without interrupting the regular flow.

Calendars are the place to find the meetings, and they're generally run by the organizations that host the meetings. Hyperledger, for example, has a calendar with meetings on it where you can find all sorts of things to join, and links can often be found in the associated repository. There are often agendas shared in advance, sometimes on a wiki. So the calendar should be easy to find, so that you can add it to your own calendar and make the meetings on a regular basis. Meetings are often recorded, which is super handy for the times you can't be present or you have a conflict. One of the perks of watching is that if you pick a player that has this feature, and VLC is one of those, you can play back the meeting faster than real time, which allows you to get caught up on meetings that you may have missed. Recordings are usually posted on the agenda or in another regular place; DIF, for example, has a channel that all the recordings are posted into. But if you ask, people can direct you to where the meetings are posted.

Most groups have a GitHub repository that they're working in. Given that this is open source, not all the repositories are code; some are used for documentation or other sorts of things, but there's usually one. There's usually a README file in the root with some meaningful info, and sometimes links to rendered copies of the repository contents, which is very helpful. It's worth getting familiar with the various issues and contributing to the resolution of those issues, or the commentary there. You can view pull requests before they're merged, which will help you understand the changes being proposed to the group's work. And through GitHub you can also look at the commit history and see what changes have happened recently, even if you haven't been to a repository in a while. Getting familiar with GitHub will help you understand the repository of both code and documentation that most groups interact with. If you want to contribute your code, which is an obvious goal, and when I say code here this also includes documentation changes and so on, you want to mind the code license and the requirements for working on it, and make sure that those are compatible with the work that you're doing. Most of the work you'll typically do on a repository fork.
So you would fork the repository and then create a branch within that fork for your contribution. The Developer Certificate of Origin is usually required on commits on these projects. If you Google it or ask around, people will help you get that going; it's not super technical to do, and it's part of the legal side of contributing code. Then, after you've got your change made on your fork, you push it to your GitHub fork and open a PR to the main branch on the original repository. That makes it easy to update or adjust your PR, so that if any changes are suggested by the community in discussion, they're easy to make. Pushing code back is how the whole open source codebase works, so that's an easy thing to do. One thing that you might do to help out, if you're trying to get familiar with the process, is look for an easy change that you can make: a spelling error or some formatting that can be fixed. That will help you get familiar with the process as you get more deeply involved in more complicated things.

The last thing that I want to talk about very specifically here is engaging in the community. We need you, and we need you to speak up and share with us what you know, the experience you have, and who you are. We particularly need that if you don't feel like you fit in. Unfortunately, cultural things being what they are, we have far too much representation in some areas of life than others, and I'm not just speaking about the fact that we tend to have a lot more developers than we have UX designers or other roles like that. Please be persistent with your contributions and patient with others. We've all got stresses in life and we don't all handle things perfectly in the moment, and I encourage you to try to stay and work through any issues that you might encounter. That is not an encouragement to put up with bad behavior. If there is bad behavior happening of any kind in a group, please engage the code of conduct and seek help from the leaders in the community and others around you to help resolve those issues, so that the real work can go forward. We need you, and I hope that you will consider jumping in to the community in whatever form you think would be useful or helpful to you, and we hope that that turns out to be a great thing for you and helps build the communities in general. So that wraps up my brief introduction to communities. With that, our next speaker is Adam Burdett, and I will go ahead and hang out on the Rocket.Chat if you have questions about community; I will be happy to answer any questions there or help you get started with stuff. Thanks a bunch.

Thank you, Sam. Just checking, you can see my slide, right? We can. Okay, great. Not that that has ever been wrong for me in the past. I'm Adam Burdett. I'm a software engineer at Indicio. I've been in the Indy and Aries community for three years or so, and some of my original contributions were actually in the node. I helped with the rich schemas and the development around those protocols and code, and I helped bring the transaction author agreement, the TAA, into being, getting it anchored onto the ledgers. My recent contributions have been in Aries Framework JavaScript, where I helped work out mediation in that repo.
More recently I have been working with Aries Cloud Agent Python, ACA-Py; I'm a plugin developer for that. And if you have seen the universal resolver, I was one of the main developers on that project. So, enough about me. To accompany this session, there's a handout, and inside the handout there are instructions that we'll use in the hands-on portion, so it's kind of important for later to look at that, and also to make sure that you've gone through the prerequisite setup. And as you have questions, please go to Rocket.Chat and ask them. My colleagues have been instructed to catch me at my spot in the slides if I miss something or if there's a big question asked.

All right, it's my pleasure to introduce you to Indy. Now, James did a really good job introducing Indy and what it does. I want to make sure that we have this mental model: Indy is an identity management platform that's decentralized and allows credential holders to interact with verifiers without gatekeepers. That's the overarching goal we're striving for, the problem to solve. So in this session, we're going to talk about the main repos and elements of the Indy stack, which are Ursa, libindy, the wrappers around libindy, Plenum, and Node, that is, indy-plenum and indy-node. I often abbreviate Hyperledger Indy Plenum and shorten it to just Plenum, so apologies for that. Ursa, of course, is crypto; we'll be going over that right after our overview. Ursa, providing the crypto, has several different libraries that consume it and provide things like AnonCreds, and there are language wrappers around those: the Indy SDK, VDR, and CLI all fall under the wrappers, and then the Aries agents build upon that. Ursa also has a Python interface that Plenum consumes and uses; BLS signatures are one of the main uses there. And Indy Node extends Plenum's base functionality to provide our decentralized ledger. Now, this is complementary to the other session we've done, where we went into the details of Aries agents, how those interact, and DIDComm. This session is going to go deep on the ledger side of things, with Indy Node, Plenum, consensus, how the data is persisted, and some of the elements of the crypto that build upon that. This is a great time for questions if anyone needs to interject, since I'm not watching the chats. I hope that's a good sign.

We're going to talk about Ursa. I'm not a cryptographer, but we need to talk about this repo, and it's important to know what exists and what it provides and supports. Ursa was created because people in the Hyperledger community realized that it would save time and effort and improve our security if we all collaborated on our cryptographic code. Since cryptographic APIs are relatively straightforward to define, it would be possible for many different projects to utilize the same code without too much difficulty. Ursa features libursa and Zmix, libzmix. Ursa provides signatures, key agreements, and symmetric encryption, and we're going to go into some of the types that are supported. We have ECDSA making use of secp256k1 keys. We also have the Ed25519 elliptic curve; these are the types most commonly used in Indy for the keys associated with DIDs. We have BLS signatures for pairing-friendly curves; this is used for multi-signatures on ledger transactions in Indy. We have the CL signatures, which use RSA and are the basis for AnonCreds. And we also have Shamir secret sharing. Libursa supports key agreement using ECDH with the same secp256k1 curve and the X25519 elliptic curve.
Libursa also supports AES symmetric encryption and ChaCha20, which is the primary encryption algorithm used in DIDComm v1. Libursa exposes a C-callable interface, which allows any language that supports foreign function interfacing to wrap Ursa and use it natively in that language. It also exposes a Rust-native interface, as it is implemented in Rust, so Rust developers can consume the crates and use it natively as well. A big reason for that is the security features and benefits that Rust provides. There are different targets you can build Ursa for, including mobile. One notable one to call out is Wasm, WebAssembly; so if you want to go and play with Ursa in the browser, there's actually the possibility of that. Zmix, libzmix, is a generic zero-knowledge proof library. It provides bulletproofs, range proofs, and set membership. It builds upon Ursa and is some of the next-level iteration inside of Ursa. One of the reasons I call out Zmix, even though I don't know very many libraries or projects using it yet, is that it supports BBS+ signatures, something the Aries community is starting to talk about incorporating into their projects. It also supports PS signatures, Pointcheval-Sanders signatures. I'm absolutely terrible with names, any of my kids will tell you, so that's why I'm not trying to pronounce it. I believe we've covered the main point with Zmix, which is the BBS+ support: when people talk about that and where the support lives, it exists in Ursa inside Zmix. And so, as I mentioned before in the intro, libindy consumes Ursa directly to create AnonCreds, our zero-knowledge proof credential capabilities. And ursa-python is a language wrapper for Ursa that Indy Plenum uses for BLS signatures. So we'll continue to build upon this mental model we're starting with. That actually concludes my intro and Ursa, and I believe Daniel's up next.

Yep. So I will be talking about the Indy SDK. This is, as Adam said, there we go, the next layer up in the stack. To briefly introduce myself, I am Daniel Bluhm. I'm a software engineer at Indicio. You might remember my name if you attended the Aries workshop a couple weeks back; that's definitely more my area of expertise. I spend more time in Aries than I do in Indy, but since within Aries we consume Indy, I've had a lot of experience working with the Indy SDK especially, and I'm more recently getting more involved on the ledger side of things within Indy. Cool. So as I said, we are going to be talking about the next layers up in the stack, which build on top of Ursa. This is broadly termed the Indy SDK, but it contains libindy itself, as well as the language wrappers that utilize libindy and expose its functionality to Aries agents built on top of them. I'll go through a quick overview of the SDK project as a whole. I'm also going to talk a little bit about the architecture of the Indy SDK, so if you want to get involved and start writing code, you have your foot in the door in terms of understanding the project as it stands today. I will also talk about what we see coming down the line for the Indy SDK, what we see the next evolution of the Indy SDK to be. And then Lynn will follow me with a brief discussion of the Indy CLI. So in the Indy SDK, if you were to pull up the project repository right now and take a look at the folders that are in there, you'll see at least these three; there are several more, but these are the three I wanted to really focus on and call out.
So first off, libindy. This is the actual library, the C-callable Rust library, that has most of the interesting stuff in the Indy SDK. The library is then consumed by wrappers, which are language-specific wrappers around that C-callable library. We have wrappers for .NET, Java, iOS, Node.js, Python, and Rust, which is pretty broad coverage, I would say, for most languages people are using. I know there are others outside of the Indy SDK repo, which tend to extend the wrappers within the repo to expose functionality; for example, React Native, I think, has some more wrappers for Indy in various projects out there. To elaborate a little bit more on these wrappers: if you were to go and actually look at one of them, let's say Python, even though it's the main point of interaction that your application will have with the libindy library, you are actually not going to find too much within those wrappers. It's mostly just taking the idiomatic Python call and converting it to what the C-callable interface is expecting, pushing it down through the C-callable interface, waiting for the response, and then, in a similar process, wrapping the response in an idiomatic way for the language itself. So usually, if you're having problems, you're going to have to look deeper than the wrapper to find where the problem is originating from. And then finally, the CLI is a tool for interacting with Indy wallets and ledgers from the command line, and we'll be getting a little bit more into that later, so just a teaser there for now.

It strikes me that I didn't actually give a broad definition of the Indy SDK, so I'll back up a little bit and say that now. The Indy SDK is the main place to go for developing applications on top of Indy, which might be self-explanatory given it's an SDK. But this is how we interact with the Indy ledger, and this is how we also create and interact with AnonCreds credentials using the Indy libraries. To talk a little bit more about what is included inside the Indy SDK: in my mind, at least, I always split things up into three main feature buckets. This is not literal; you won't necessarily go into the Indy SDK repo and see things split up like this. In fact, there will be things called "ledger" that aren't exactly what I'm describing here. So this is mostly conceptual, just rough groupings of functionality. There's ledger, being able to communicate with the ledger and all the various interfaces needed for that. There's wallet, the secure storage aspect of the Indy SDK, so how we hold on to private keys in a way that protects them and helps prevent misuse of those private keys. And then AnonCreds, being able to issue and verify AnonCreds credentials. The wallet especially, in my mind, bears a little bit more discussion here in terms of broadly what it is doing. The Indy SDK wallet has been described as a software secure enclave. It is the point at which keys are created and stored, and it also serves as a hub for crypto operations that need the private keys stored within the wallet. This allows us to do really useful things, like making sure that memory is zeroed out where a private key was previously stored, without having to rely on the layers above the Indy SDK in the stack, where it's often more difficult to zero out memory. So the Indy SDK wallet helps us keep private stuff private.
More broadly, it's an interface for storing and recalling encrypted data to and from a database. It's a layer in between the thing that needs to store or recall stuff and the actual database backend, which means that the backend is interchangeable. Currently it supports SQLite and Postgres. SQLite is used most frequently for local deployments, when you just need to spin up an agent locally real fast, and the Indy CLI will also use the SQLite database. Postgres is most frequently what we see in production deployments of software using the Indy SDK wallet. I'll also call out that, because of the wallet interface, the backend itself does not need to provide any encryption; the wallet is handling all of that for us. What that also means is that if you were to go and inspect these database tables, you'd be able to see values in the table, but they'd be totally opaque. There's not really much you can glean from those stored values, since they're all encrypted. So these are the primary three feature buckets, and each of these, in some way or another, will call down to the crypto layers. Ursa is especially used for that, as Adam just described, and libsodium is also used to some smaller degree for things like authenticated encryption and such. So from a very high level, this is more or less the shape of the Indy SDK.

What I'm going to do now is start drilling down into some of those feature buckets a little bit more and describing the actual components that you'll see within the Indy SDK repository. Starting off with the pool: the pool is more or less our entry point for all other ledger operations. Before we can do something with the pool, we need to first open up a handle to a pool, so we can configure a connection to that pool and then pass it into the next operation that needs to do something with it. I think Lynn will get a little bit more into the terminology of a pool, but it's more or less just a grouping of nodes on an Indy network that are listening for transactions from clients. So the pool API within the Indy SDK helps us to open a pool handle, close pool handles, and store and delete configuration for those pools, I believe, as well. In a very similar fashion, the wallet serves as our entry point for nearly everything else within the Indy SDK. Before we can do crypto operations, we typically need access to private keys, or need to create private keys, and all of that takes place within the wallet; so naturally we first need to open a wallet to perform any of those operations. This wallet API, to clarify, is not where the storage is actually occurring. If you were to go and look at the wallet module within the Indy SDK, this is instead where we go to manage the wallet: creation, open and close, delete, import, export. So the full wallet lifecycle, I suppose, is represented by this API. Then we've got the ledger API, and this is where we actually begin to interact with the ledger, using a pool handle that was previously created with the pool API. The ledger module contains methods for signing and submitting a transaction to a pool, more or less taking an already constructed transaction, preparing it with the proper authorization signatures and all that, and pushing it to the Indy network.
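To make that pool / wallet / ledger split a little more concrete, here is a minimal sketch using the Python wrapper (python3-indy). It assumes the pool config "demo-pool" and the wallet "demo-wallet" were already created (those setup steps are sketched later, alongside the CLI discussion), and that this is a local test network where the standard Trustee1 seed has write rights; all names are illustrative, not part of the workshop materials.

```python
import asyncio
import json

from indy import pool, wallet, did, ledger


async def write_nym_demo():
    # Pool API: current Indy networks speak node protocol version 2.
    await pool.set_protocol_version(2)
    pool_handle = await pool.open_pool_ledger("demo-pool", None)

    # Wallet API: open a previously created wallet; the key is the wallet "password".
    wallet_handle = await wallet.open_wallet(
        json.dumps({"id": "demo-wallet"}), json.dumps({"key": "demo-key"})
    )

    # Standard local test-network trustee seed; on a real network you would already
    # hold a DID with sufficient rights in the wallet.
    trustee_did, _ = await did.create_and_store_my_did(
        wallet_handle, json.dumps({"seed": "000000000000000000000000Trustee1"})
    )

    # Create a brand new DID locally, then anchor it on the ledger with a NYM txn.
    new_did, new_verkey = await did.create_and_store_my_did(wallet_handle, "{}")
    nym_request = await ledger.build_nym_request(
        trustee_did, new_did, new_verkey, None, None  # role=None -> plain "author"
    )
    response = await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, trustee_did, nym_request
    )
    print(json.loads(response)["op"])  # "REPLY" on success

    await wallet.close_wallet(wallet_handle)
    await pool.close_pool_ledger(pool_handle)


asyncio.run(write_nym_demo())
```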
This module also contains builders for the various transactions on an Indy network, such as NYM, pool management, and AnonCreds objects, and we'll get into a little more detail on those different transactions as we talk about Indy Node and Plenum. Within the ledger API, we also have parsers for transaction responses. When we submit a transaction to the ledger, the returned value is not binary or some unreadable format; it's JSON formatted, in fact. What these parsers are actually doing is transforming the ledger response into the format expected by the other SDK APIs. For example, if we pull a credential definition from the Indy network, the transaction response contains things like the state proof and other transaction metadata, and these are pieces that are not necessarily relevant when we're interacting with the credential definition from the AnonCreds API. The parser for the cred def will transform that ledger response into the expected object, which we then pass to AnonCreds operations.

DIDs. This is the API within the Indy SDK that allows us to create and store DIDs, and also to store other people's DIDs. So if we receive a DID in the process of establishing a connection with someone, this API also allows us to store their, quote unquote, "their DID". I'll briefly call out, however, that this tends to be a less frequently used feature, the "their DID" portion of it; a different portion of the wallet, the non-secrets API, which I'll talk about in just a minute, is more frequently used for this sort of metadata storage. Using the DID API, we can also perform key rotation of the keys associated with a DID, which is of course very important when keys for a given DID are compromised. Other APIs within the SDK then allow us to publish that key rotation to the ledger if it's a public DID, or we would engage in a protocol with our connection to inform them of the key rotation otherwise. The DID API also contains some helpers for storing and retrieving metadata about a DID. For instance, if you'd like to keep track of which DIDs have been posted to a ledger, you could decorate them with this metadata and store it in the wallet for later recall. And then there are, of course, a few other miscellaneous helpers available within the DID API for interacting with these DIDs.

Okay, so the crypto API. This is more or less just crypto operations that didn't fit into another Indy SDK function; things that aren't strictly related to DIDs or AnonCreds ended up in this portion of the API. Using this API, we can create a key that is not associated with a DID, when we just need a bare key. It also allows us to sign and verify arbitrary data or messages, and DIDComm message packing and unpacking is also contained within this API. Both of these, the sign and verify and the DIDComm message packing, are of course very frequently used within the agent ecosystem built on top of the Indy ecosystem.

So this is the API I was alluding to a moment ago. I've talked about the wallet as the place for storing private key material so far; however, as it turns out, that same mechanism for storing private stuff, for keys, is also useful in general for storing any information that we would like to keep secure or private. So we can store arbitrary data in the database and have a few mechanisms for easily retrieving that data through the use of tags.
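As a rough illustration of that tagged record storage, here is a small sketch using the non_secrets module of the Python wrapper. The record type, id, contents, and tags are all made up for the example, and the wallet_handle is assumed to come from an already opened wallet as shown earlier.

```python
import json
from indy import non_secrets


async def store_connection_record(wallet_handle):
    # Everything stored here is encrypted by the wallet before it reaches the database.
    await non_secrets.add_wallet_record(
        wallet_handle,
        "connection",                       # record type (illustrative)
        "conn-1234",                        # record id (illustrative)
        json.dumps({"their_did": "VsKV7grR1BUE29mG2Fm2kX", "state": "active"}),
        json.dumps({"state": "active"}),    # tags, usable in wallet searches later
    )

    # Fetch it back; the options control whether value and tags are returned.
    record_json = await non_secrets.get_wallet_record(
        wallet_handle,
        "connection",
        "conn-1234",
        json.dumps({"retrieveValue": True, "retrieveTags": True}),
    )
    return json.loads(record_json)
```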
But this allows us to do things like storing DIDComm connection metadata as arbitrary records within a secure data store. I will call out that the non-secrets API is not necessarily something we should be using for all application data; it should still really be just for information that is sensitive. To take a concrete example and speak a little more precisely: in Aries Cloud Agent Python, as we're engaging in a credential issuance protocol, the credential issuance records that we use to keep track of that protocol will contain, for a time at least, the information that is actually issued in the credential, which is, generally speaking, personally identifying information about the recipient of the credential. That would naturally be something we want to store securely, and certainly never store in plain text anywhere, so the non-secrets API helps us do that. There's also an interesting point here: we can store crypto primitives that are not natively supported by Indy within the non-secrets portion of the API. We won't get the nice zeroing of memory and all the other key hygiene that Indy provides for the native key types, but being able to store them securely is better than the alternative. So it is also a possibility to store these crypto primitives within the SDK's non-secrets part of the wallet.

And then the last API I'll call out here is the AnonCreds API. If you participated in our workshop a couple weeks back, we went pretty in depth on the process of issuing and verifying a credential using Aries, and I will encourage you to review that workshop, or your notes if you were participating, for more details on the nuts and bolts of AnonCreds issuance and verification. But the Indy SDK API is ultimately what exposes the primitives we need to do that within the Aries ecosystem for AnonCreds. So, schema creation: this is the actual set of values that will ultimately be published to the ledger. The AnonCreds API does the local creation, and then we pass it off to the ledger API for the actual process of submitting to the ledger. Credential definitions: this one's a little more interesting than schemas because, in order to create a credential definition, you actually need to create a set of keys for each attribute that will be contained within the credentials issued from that definition. All that private key material, and the public key identifiers for it, are stored within the wallet through the AnonCreds API. Revocation registry operations are also enacted here. And then the process of creating the credential itself, which is ultimately issued to a holder, is also done through this API, as well as presentation and verification of those credentials.

So this is a diagram pulled directly from the Indy SDK documentation, which is found within the repo; if you navigate there, there's a docs section. This is an overview of the architecture itself, so less abstract than how I've been talking about it, and more what you'll actually see in the repo and the rest of the library. We've talked about the wrapper layer: it just calls the C-callable API and processes the responses, turning them into an idiomatic response for that language. The C-callable API is then the first Rust code on the other side of that foreign function interface; it receives the data from the wrapper and begins the process of marshaling that data into the native representation for the libindy library.
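Before continuing down from the C-callable API into the command and service layers, here is a small sketch of the schema and credential definition calls just described, again using the Python wrapper. The issuer DID, attribute names, and tag are illustrative, and in a real flow you would fetch the schema back from the ledger (so it carries its sequence number) before building the credential definition.

```python
import json
from indy import anoncreds, ledger


# Assumes pool_handle, wallet_handle, and an issuer_did with endorser rights.
async def publish_schema_and_cred_def(pool_handle, wallet_handle, issuer_did):
    # Local creation of the schema (AnonCreds API)...
    schema_id, schema_json = await anoncreds.issuer_create_schema(
        issuer_did, "degree", "1.0", json.dumps(["name", "degree", "year"])
    )
    # ...then the ledger API actually publishes it.
    schema_request = await ledger.build_schema_request(issuer_did, schema_json)
    await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, issuer_did, schema_request
    )

    # Credential definition: generates and stores a key pair per attribute in the wallet.
    # (In practice, use the schema as parsed back from the ledger response here.)
    cred_def_id, cred_def_json = await anoncreds.issuer_create_and_store_credential_def(
        wallet_handle, issuer_did, schema_json, "default", "CL",
        json.dumps({"support_revocation": False}),
    )
    cred_def_request = await ledger.build_cred_def_request(issuer_did, cred_def_json)
    await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, issuer_did, cred_def_request
    )
    return schema_id, cred_def_id
```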
The API layer then calls down to a command layer, which aggregates functionality from the different services provided in the service layer, and is ultimately what provides the specific functions performed when we call one of the APIs from a wrapper. This follows the command design pattern, if you're familiar with that; it is quite thoroughly executed according to that pattern, so knowledge of it will certainly help you make contributions to the Indy SDK, and will also help you locate what is happening when you make a call from the wrapper, to be able to follow it down the stack. I acknowledge that this might be a little intimidating in terms of just how much is on this diagram, so I definitely encourage you to go and read a little bit more in the Indy SDK documentation to see where each of these services' responsibilities lie, and to get more involved.

So, the future of the Indy SDK. As I described, we have these main buckets of functionality, ledger, wallet, and AnonCreds, and then crypto provided by Ursa. It has served us well to have all of this within the same repository in the SDK; it's a one-stop shop for everything, which fits great if you are coming into the Indy ecosystem full-on, ready to dive deep. Where it is not quite as helpful is if you only want to communicate with the ledger. You can't really do that without pulling in everything else; the wallet and AnonCreds have to come along with it in order to do that ledger communication bit. Same with the wallet: if you only wanted the secure storage elements, you wouldn't be able to pull out just that piece of functionality and interact with it, and it's a similar story for AnonCreds. As we've worked with the Indy SDK, especially from the context of Hyperledger Aries, this has informed a lot of our opinions about the next steps for the Indy SDK. We've come to realize that having more modular components would be extremely helpful. And I'll call out BC Gov, the government of British Columbia, and their team; they've definitely been pioneering these Indy shared library projects, so a huge shout-out to them and all the work they've been doing. There have been others from the community as well; I don't want to minimize their contributions. So we have these different components, which help us selectively choose which features of Indy we'd like to use, and also minimize the number of layers between us calling an API and the actual code being called.

At the top here I'll call out first Indy VDR. VDR stands for verifiable data registry, and this is more or less the ledger communication feature bucket that I've called out, self-contained within Indy VDR as an individually usable component. We also have the indy-shared-rs project, a collection of smaller libraries that are independently buildable. indy-credx is the main one I'd like to call out here; it implements the AnonCreds feature bucket. We also have the data types library, which has the type definitions for schemas and credential definitions and such, some utilities for working with binary data encodings and with Ursa crypto, and some test utilities as well. So that kind of leaves the last component that we haven't talked about already, which is Aries Askar, the wallet feature bucket.
Askar migrated from being an Indy component to something that's just generally within Aries. It is a secure storage library which is very heavily inspired by the Indy SDK wallet, so it behaves similarly in that it helps us store secrets, and also non-secrets, just arbitrary data we'd like to keep secure. It also presents a slightly more permissive approach to granting access to secrets. Within the Hyperledger Aries community especially, we find that we sometimes need to work ahead of the underlying crypto libraries and the support they provide for different key types and crypto suites, implement them directly within Aries, and ultimately push those stabilized crypto protocols down to the layers below. So having access to secrets when it makes sense to do so, as long as we're keeping private key hygiene a priority in that process, opens a lot of doors for quickly revising the crypto protocols used within Aries. And this last one here is probably one of the biggest reasons to actually migrate to Askar today: it brings with it pretty significant performance improvements. I don't necessarily have numbers right in front of me, but anecdotally it's so fast that people are actually noticing problems in their older systems that had been masked by the Indy SDK not being able to keep up. So with Aries Askar, we're really excited about both the performance improvements and the improved API and access to secrets as we go forward within the Aries ecosystem. These different buckets of functionality, then, to sum up, are more or less roughly equivalent to the components I've called out here: the ledger turns into Indy VDR, the wallet turns into Aries Askar, and AnonCreds turns into indy-credx. To put that another way, Indy VDR plus Aries Askar plus indy-credx is roughly equivalent to the Indy SDK, and this is what we see as the future of the Indy SDK going forward. We're really excited about this next evolution. While the command pattern has served us well within the mono-repo that is the Indy SDK, we also think that having smaller, bite-sized pieces of functionality will make it much easier for new contributors to join in, and will simplify the process of adding new code to each of these pieces in the future. And with that, that covers my overview of the Indy SDK, and I'll hand it off to Lynn for the Indy CLI.

Thanks, Daniel. My name is Lynn Bendixsen. I'm the director of network operations at Indicio, and you can find me as Lynn Bendixsen in almost all the chat places in the community. As Sam mentioned earlier, I recently got named as the co-chair of the Trust over IP Utility Foundry working group. That working group helps create standards for public utilities and provides resources for people who would like to create their own public utilities. We can also help with private utilities and that sort of thing, but that's what the Utility Foundry working group does at ToIP. I've been working in SSI, decentralized identity, for about three and a half years now. I started out helping with Sovrin networks, and now I'm helping Indicio do the same thing: helping build and manage public and private networks is my main role at Indicio. So that's me, essentially one of the experts in the field of Indy networks.
The purpose of what I'm going to do here today is to tell you a little bit more about the Indy CLI first; then we'll take a break, and I'll move into some more in-depth stuff. The reason we're going to go into a little bit about the Indy CLI is to give you some of the background information you need to understand how the whole network works, what part the users play, how to manage Indy nodes, and how to manage an Indy network. Those things are important if you're going to know anything about Indy Node and how it works. It's preliminary background information, essentially, to help you understand where we're coming from, and we're trying to do it in a good order, but feel free to ask questions if there are things that are confusing, and we'll plow forward here as best we can.

Let's see if I can... okay, so the Indy CLI. Daniel mentioned it several times and gave you the preview. This link here just shows that the Indy SDK repo, which we'll zap over to here, has a directory called cli in it, along with some of these other things; I think they talked about libindy a little bit. So the Indy SDK repo includes a command line interface that lets us access the networks, with a bunch of tools for accessing the network and doing the administrative stuff. It's a text-based tool, not a GUI tool; you can make GUI tools and use the SDK to build those, but the tool that comes with it is just text based, so it's not too hard to use and it's pretty easy to install. I'm not going to follow this link here; I'll just refer you once again to the handout, and I believe this link is in the handout. The installation is pretty simple, and if you go to the last presentation that we did, I included a link, as an answer to one of Marius's questions in the chat, with a lot more details on how to install the CLI, so I won't go into detail on that here.

Before we get into explaining very much about what the CLI does: one of the first things you do after you've installed the CLI is download a genesis file, in this case a pool genesis file, and it's the thing that's used to connect you to the network. All the clients need to use that file to connect to the network. So I'm going to tell you a little bit about what a genesis file is. Genesis files are the files that are created before a network is instantiated. The pool genesis file includes the information about the first nodes that are joined to the network. Usually when I create a network I just do four, because that's kind of the minimum, so I do four nodes in the genesis file, which means we don't have a lot of them to deal with. You can add more nodes to a network later, so I create it with a minimum number and add more nodes afterwards, because the genesis nodes are ones that really need to stay there for the life of the network. The files containing this information are the files that jumpstart the beginning of a new network, and they're literally the first several transactions on the domain and pool sub-ledgers.
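Since a pool genesis file is just a list of node transactions, one JSON object per line, a quick way to get a feel for it is to read one with a few lines of Python. This is only a sketch: the file name is whatever you downloaded for your network, and the exact set of fields can vary a little between networks.

```python
import json

# Each line of a pool genesis file is one JSON "node" transaction describing
# a genesis validator (alias, keys, IPs, ports, services).
with open("pool_transactions_genesis") as genesis:
    for line in genesis:
        txn = json.loads(line)
        node = txn["txn"]["data"]["data"]   # node metadata block
        print(node["alias"], node["node_ip"], node["node_port"], node["services"])
```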
I keep saying some terminology that might be difficult to understand, because we interchange some of the words, so let me take just a minute on that. A lot of the slides we have today will refer to connecting to the ledger or connecting to the network, and when we say those two things we pretty much mean the same thing. The network is the network, and the ledger is a compilation of many sub-ledgers, the two main ones being the pool ledger, which is the list of all of the nodes that are in the pool, or the network, and the domain ledger. So we use these terms interchangeably, and hopefully you can get used to it as we talk, but feel free to ask in the chat or on the Indy Rocket.Chat, and we can help you understand it a little better if it's confusing.

The genesis files are static. When you add a node to a ledger, you don't change the genesis file; the genesis file stays the same throughout the life of the ledger in most cases. When you want to reset a ledger, at times you can change the genesis files, but that means everything goes away and you start over, because the ledger is immutable. That's the whole nature of what we do: make it so that you can't change what's on there. You can change the status of some things by adding new transactions later, but these first few transactions, the genesis transactions, just stay the same all the time. Hopefully that helps give you at least a start here; I've seen some people ask questions and be a little confused, so I'm taking a minute to try to help everyone understand what a genesis file is and what it's used for.

Okay. Once you have installed the Indy CLI and started it up, you can run through a couple of steps to get it ready to run commands to manage your ledger, or to do anything else. Daniel did a great job of introducing the concept of a pool, and there are pool commands here, but a quick little note: when you say `pool create`, you're not creating a new pool. The network was created by somebody else; in the case of the Indy CLI, you're creating a handle to that pool, again as Daniel mentioned. So these pool commands, create and connect, maybe should be a little more clear about that, but you're creating a handle so that you can connect to an existing network or pool, where a pool is just, again, a group of nodes working together in a network. You run these commands and give them the genesis file I was talking about; you can download that from a network somewhere, and I'll show you an example of some of those files a little bit later. If you're running your Indy CLI from a place other than where the genesis file is, then you need to give the whole path to the genesis file. Just a couple of little gotchas as we go along. For the wallet command, the key here is really just a password for your local wallet. The `wallet create` command creates a place to store your DIDs and the keys for those DIDs, and other things like schemas and credential definitions; all the keys for those things will be in your local wallet. So you need to be careful to either back that up or keep track of those keys in a way that you don't lose them. The private keys are only in the wallet where you create them.
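The CLI is a thin layer over the same libindy calls, so the first-time setup just described (create a pool handle from a downloaded genesis file, create and open a wallet protected by a key), plus the DID-from-seed step that comes up next, maps roughly onto the Python wrapper like this. It's a sketch with illustrative names, paths, and seed, not the CLI itself.

```python
import asyncio
import json

from indy import pool, wallet, did


async def first_time_setup():
    await pool.set_protocol_version(2)

    # Like "pool create": register a named pool config pointing at a genesis file.
    await pool.create_pool_ledger_config(
        "demo-pool", json.dumps({"genesis_txn": "/path/to/pool_transactions_genesis"})
    )
    pool_handle = await pool.open_pool_ledger("demo-pool", None)

    # Like "wallet create ... key=...": the key is effectively the local wallet password.
    wallet_config = json.dumps({"id": "demo-wallet"})
    wallet_key = json.dumps({"key": "demo-key"})
    await wallet.create_wallet(wallet_config, wallet_key)
    wallet_handle = await wallet.open_wallet(wallet_config, wallet_key)

    # Like "did new seed=...": a 32-character seed deterministically recreates the
    # same DID, which is how two operators can hold the same steward DID in
    # separate wallets.
    my_did, my_verkey = await did.create_and_store_my_did(
        wallet_handle, json.dumps({"seed": "000000000000000000000000Steward1"})
    )
    print(my_did, my_verkey)

    await wallet.close_wallet(wallet_handle)
    await pool.close_pool_ledger(pool_handle)


asyncio.run(first_time_setup())
```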
Those private keys don't exist on the ledger, so you won't be able to get them back if you lose your whole system and haven't backed it up. So that's a little gotcha on that one: make sure you have a backup of your wallets as you create them. Once you've created a pool handle, or connected to a pool, and opened up a wallet, then you need to put some DIDs into it. This is the command for creating a new DID, and I added the seed option on this one, unlike the last presentation I did, because this seed, a 32-character seed, is what allows you to put the same DID into different wallets. So if you're a company that holds a steward role, and you have a main node operator and a backup node operator, and both of you need this DID, you can both use the same seed for your steward DID, and both of you can manage your node, for example. We'll get more into what a steward is, and trustees and that sort of thing, in just a little bit, so don't worry about that for now. And then, to be able to do any of the commands on the ledger, you need to use that DID, or activate it, and that's what the `did use` command means: it makes that DID the active DID, and you're using it to do your commands on the ledger. And I think that's it for a good start here; we're ready for a little break. My watch says ten after the hour. Should we come back at twenty after, what do you think, James? I think fifteen after would be good, unless there are any objections or people need more time. Fifteen after the hour. Thanks, everybody, we'll see you in five minutes.

Okay, I think we're ready to move on here; my clock says we've had our five-minute break, so we'll just move on to the next section. And this is me as well. I'm going to tell you a little bit about network management. The real purpose of this section is to give a rough overview of how a network is configured and managed, so that you have a little more background, again like I said before, about how the network works before you start digging deeper into the Indy Node stuff. This is the overview slide, and we'll have a little conversation about each one of these parts. We'll talk about configuration and maintenance, and I'll start with why we need network management. What do we mean when we say network management? Essentially, we're talking about the configuration and administration of a Hyperledger Indy network. Okay, so, as it says on the slide here, every instantiation of a Hyperledger Indy network should have governance documents that go along with it. Even if you're just doing a private network, you should think about how you want your network to be governed, so that the users of your network have some confidence that it's configured and maintained in a way they feel comfortable with. And then I'll just mention here, you've probably already read the slide, but the default setup doesn't really match the governance frameworks that people write up. The default setup is more for a test network, so that you can get in and do things without having the administrative overhead, but the network administrator needs to configure production networks, at least, to match the governance and make them more secure. We'll dig into what that means as we go along, but before we dig in much further, we need to talk a little bit about the roles.
I'll refer you to a previous training where we talked about roles in a lot more detail; here we'll cover just a high level, and hopefully it's enough that we can continue on without spending a lot of time on it. At a high level: the trustees are the administrators of the network. Sorry, opening up my notes here so I don't forget anything. The trustee role runs all the administrative-type transactions for the ledger. They do the node management and the NYM management. NYM is another one of those words we use, because that's what it's called on the ledger itself and in most of the commands we use, but a NYM is essentially a DID, so NYM and DID are interchangeable, terminology-wise. Trustees also change the auth rules, and we'll talk about auth rules in a minute in a lot of detail, and they also manage the transaction author agreement. Those are things the trustees are responsible for, and usually it takes multiple trustees to do those actions.

Next on here is the steward. Oh, and I forgot to mention: the roles on the network are these ones, these are the roles that exist. You set up your auth rules so that each one of these is different; the way the auth rules are set up determines what these roles can and can't do. So everything I'm saying here about what these roles do reflects the role definitions in the governance frameworks I'm most familiar with, the Sovrin and Indicio governance frameworks. Again, you can set them up however you want; it's configurable, for the most part entirely, but what these roles can do is determined by the auth rules, and we'll talk about that more in just a minute. In the governance frameworks that I deal with, the steward only deals with their own node. They can add or remove their own node from the network and look at the validator-info output for their node. Another name for a node is a validator, because they do the validation, like was talked about earlier, for consensus. A steward can also add one or more network monitors to help them manage their node and monitor it, to make sure everything's up and good. So that's what a steward does. The steward role was more powerful early on, and it's more powerful on test nets by default, but this is the new approach where the newer governance compartmentalizes things: the trustee is just administrative, the steward just runs their own node, and the endorser is the only role that can write transactions like schemas and credential definitions and those sorts of things. You either have to be an endorser yourself, or you have to get an endorser's signature, to be able to write those kinds of transactions to the ledger. And then there is the "author". I'm putting that in quotes because there's really no role on the network that matches it, but we call it an author outside of the network; it means there's actually no role on the network, role equals blank. Authors can build DID transactions and then have an endorser sign them. They still have to have their DID on the ledger with the no-role setting, but they can't actually write the transactions without an endorser signature. And then the network monitor role, which I kind of mentioned: all they can do is get node information.
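Since so much of what each role can do comes down to the auth rules, here is a hedged sketch of how you can inspect them programmatically with the Python wrapper; the GET_AUTH_RULE builder exists in reasonably recent indy-sdk releases, and the response layout shown in the comments is how the reply is commonly structured, so treat the parsing as illustrative.

```python
import json
from indy import ledger


# Assumes pool_handle from an open pool; reads like this don't need to be signed.
async def read_auth_rules(pool_handle):
    # With every filter parameter left as None, the ledger returns the full auth rule
    # table: for each transaction type and action, which roles (and how many
    # signatures) are required to perform it.
    request = await ledger.build_get_auth_rule_request(None, None, None, None, None, None)
    response = await ledger.submit_request(pool_handle, request)

    # Each entry typically carries auth_type (txn type), auth_action (ADD/EDIT),
    # and the constraint describing the required role(s) and signature count.
    rules = json.loads(response)["result"]["data"]
    for rule in rules[:5]:  # print a small sample
        print(rule["auth_type"], rule["auth_action"], rule["constraint"])
```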
Network monitors can't do as much as a steward; they can't add or remove nodes, they just get information about the current status. I see a question in the chat: what is a validator node? Thanks for the question, by the way; it's an excellent one. Remember when we were talking about a pool, we said there are a bunch of nodes on the network in a pool. That's what a validator node is: it's just one of those nodes. It all means the same thing; we use a whole bunch of different words for the same concept. Each steward can have one node on the network, and that node is called a validator node in some contexts. Hopefully that makes it clearer. Like I said, the terminology isn't consistent; validator is just another word for node, and validator node puts the two together. Okay. Now that we have a little background on the roles, we can start talking about some of the things a trustee can do, the actions they perform and the configuration they need to set up. One of the first things a trustee should do on a public production network is put in place what's called a transaction author agreement, or TAA. The trustees build it, and it becomes part of the governance framework, as it says down here. It's a long document written in a fairly legalese style; it's almost like a EULA, an end-user license agreement, and it sits on the ledger. The point of it is to make people aware that there is a governance framework in place and that you shouldn't write anything illegal, any PII (personally identifiable information), PHI, or anything private like that to the network. Part of the TAA is agreeing to that, agreeing not to write anything illegal. Essentially, when a network has a transaction author agreement set up and enforced, then every time you want to write to the network you have to do some level of agreeing to it, and the way you agreed also has to be written to the ledger. The possible ways of agreeing are in the acceptance mechanisms list, or AML. The AML is also part of the governance documents; it should be public, and people should know what those acceptance mechanisms are. It's literally a list of the approved ways you can accept the transaction author agreement. Let me see if I can show you an example of that; I'll pull it up so you can see what we're talking about. Okay. Earlier I mentioned I would show you where the genesis files live, and that's here: for the Indicio networks we have a place on GitHub where we store them. Right now we're talking about the TAA, the transaction author agreement. There are two steps to setting it up on your network, and the first is to build and post the acceptance mechanisms list. We'll just show you what one of those looks like: up at the top is the basic content that goes on the network, so you can see what it is, and then the document gives descriptions of what each of those mechanisms is.
We don't have time to go through all of it, and I'm pretty sure I put something like this in the handout, so you can peruse it on your own time. What I will say quickly is that in the Indy CLI we usually use for_session. When you start up the Indy CLI and connect to a pool, if you have the for_session mechanism set up in your config file, the CLI shows you the TAA and asks whether you want to agree to it or not. So the Indy CLI has this built in: every time you connect to a ledger you might write to, you agree once and it holds for the whole session. In code, the other mechanisms come into play, and it depends on how you've agreed. For example, if you're an enterprise with clients who sign a service agreement up front saying they'll never write anything bad to the network, then inside your code you can use that mechanism; they've signed the service agreement once and they don't have to keep agreeing over and over the way you do in the CLI. That's just a small example. And then we'll look at the TAA itself really quickly. Excuse me. There's a bunch of legalese in here covering the obligations of people who want to write to the network and what they're agreeing to. So that was the intro to the TAA; now let's talk a little bit about how to write it to the ledger. There is a document, which I hope I put in the handout so you can see it, but I wanted to show you that putting it on the ledger is very simple: there are just two commands. The first is the acceptance mechanisms command; after that you just copy in the acceptance mechanisms list you saw on the other page, then give it a version number and a context. This document makes it super easy because you can cut and paste this kind of stuff, and the context field here links to the exact file I was showing you on GitHub. Then, to add the transaction author agreement itself, you first download it and then pass file= because it's too big to actually paste in like the other one. There is one slightly funny thing I want to explain quickly. When you're setting up your TAA for the first time, you'll use the ratification timestamp. (I needed a quick drink break there.) The ratification timestamp is just a Unix timestamp: it says when the new TAA was approved by whatever board of approval needed to approve it for your network. You put that in when you first set it up. The retirement timestamp is different, and by the way, this is the command you can use to get more help on how that works and what it's all about; it gives you some examples. I bring up these timestamps because the way they work is a little bit odd, so I'll give a brief explanation and hopefully it's not too confusing. You can have more than one transaction author agreement on your ledger at the same time. The reason is that in code there's an SDK call you can use to get, what was that called again, Adam? The digest. Thank you, Adam. You can retrieve that digest once and then you never have to go get it again; you just keep using the same value.
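As a rough sketch of those two steps in code rather than the CLI, here is what posting an acceptance mechanisms list and then a TAA could look like through the python3-indy wrapper. The AML contents, version numbers, DID, file name, and handles are placeholder values, and the ratification_ts keyword assumes a reasonably recent libindy; older versions of build_txn_author_agreement_request only take the text and version, so check your version's signature. On a production network the TAA request would also need multiple trustee signatures, which we cover a little later.

```python
import json
import time

from indy import ledger


async def publish_taa(pool_handle, wallet_handle, trustee_did):
    # Step 1: the acceptance mechanisms list (AML). Keys and descriptions
    # here are illustrative, not the Sovrin/Indicio AML verbatim.
    aml = json.dumps({
        "for_session": "Agreed to once per CLI session",
        "service_agreement": "Covered by a signed service agreement",
    })
    aml_req = await ledger.build_acceptance_mechanisms_request(
        trustee_did, aml, "1.0", None)  # last argument is an optional context URL
    await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, trustee_did, aml_req)

    # Step 2: the transaction author agreement itself, usually loaded from a
    # file because the text is long.
    taa_text = open("taa.md").read()
    taa_req = await ledger.build_txn_author_agreement_request(
        trustee_did, taa_text, "2.0",
        ratification_ts=int(time.time()))  # when the governing body approved it
    await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, trustee_did, taa_req)
```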
And if you update the TAA on your ledger, that old digest is no good anymore. So somebody who has been running code for a while using that digest, because they have a contract with somebody, would suddenly find it stops working. Nobody wanted that, so it was made possible to have two TAAs on the ledger at the same time, with the old one still working for a period of time until it retires. When you have a new one, you post it with the ratification timestamp for when it was ratified, when it starts. Then for the old one, the first one you posted, the one you're replacing, you inform your people, your customers, whoever is using your network: okay, we've got a new TAA, you've got 45 days or two months to move to the new one, but both of them will work in the meantime, so you don't have a hard stop where everyone has to change their code at exactly the same time. Does that make sense? The retirement timestamp says: we're putting the new one on, you have 45 days or two months or whatever to switch over, and then the old one goes away. People can change at their leisure during that window. Okay, the next thing we need to talk about... actually, Eduardo asked a question in the chat. The retirement timestamp is added later: you rerun the same command. When you first put the transaction author agreement on, you don't set it. Later, when you're putting the second one on, you go back to the command you saved from when you first ran it, rerun that same command, and add the retirement timestamp to it, after you've put the second one on the ledger. Does that make more sense? You don't know the retirement timestamp when you first post it, because you don't know when, if ever, you'll change it, but you'll rerun the initial command with a retirement timestamp added to set it later. Ah, I think he was actually referencing the ratification timestamp in his question: when we're publishing the ratification timestamp, if the time of ratification coincides with publishing to the ledger, how would we know it in advance? Sorry, I misunderstood; I read it wrong, there are two big words that start with an R. So, regarding whoever approves the TAA: when you're writing it to the ledger, if you don't really know exactly when the approval was, it can be a date at the time you post it or earlier. It doesn't have to be exact; it's just the time at which the agreement starts being valid on the ledger. It's not super critical, but what I've done is go back to the board of trustees meeting where they said, yes, this TAA is approved, and use that date. Like I said, though, it doesn't have to be a perfect time. Okay, I need to move on a little here; we'll keep answering questions about this in the chat if you have more. Thanks for the question, though.
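To round that off from the client side, here is a sketch of fetching the agreement (and its digest) from the ledger and appending the acceptance to a write request before submitting it, again via the python3-indy wrapper; the mechanism string, DIDs, and handles are placeholder values, and the exact shape of the reply data is from memory.

```python
import json
import time

from indy import ledger


async def write_nym_with_taa(pool_handle, wallet_handle, my_did, target_did, verkey):
    # Fetch the TAA currently on the ledger; the reply includes its text,
    # version, and digest.
    get_taa_req = await ledger.build_get_txn_author_agreement_request(my_did, None)
    reply = json.loads(await ledger.submit_request(pool_handle, get_taa_req))
    taa = reply["result"]["data"]

    # Build an ordinary write request...
    nym_req = await ledger.build_nym_request(my_did, target_did, verkey, None, None)

    # ...and append the TAA acceptance to it. "for_session" is assumed to be
    # one of the mechanisms published in the AML.
    nym_req = await ledger.append_txn_author_agreement_acceptance_to_request(
        nym_req, taa["text"], taa["version"], None, "for_session", int(time.time()))

    return await ledger.sign_and_submit_request(
        pool_handle, wallet_handle, my_did, nym_req)
```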
So, this next part, and you thought the TAA was a little confusing: this next part is auth rules. I don't know how many of you have dug into them or looked at them at all, but they're pretty confusing even for experts in the Indy community to understand, what they are, how they work, and how to set them. So I'm going to talk slowly and go through it in enough detail that at least you have a starting point for understanding how to change the auth rules on a ledger so you can set it up to be more secure. Like I said earlier, the default auth rules are not very secure. For example, a trustee can add another trustee, and that's a lot of power: a trustee adding all their friends as trustees could end up controlling the whole ledger, and you don't want that. You want it to take more signatures than just one trustee to do things, and that's what the auth rules express; they tell you how many signatures are needed for every possible transaction on the network. The auth rules you set up should be based on what's in the governance framework for your network. The governance framework needs to define how many signatures, who's in control, and how these things are done, especially for a public network that is a production network. To see what auth rules are currently on your network, in the Indy CLI you just type ledger get-auth-rule, and we're going to look at an example of that and first learn how to decipher what this mess means. At first it seems pretty easy to understand, but how to read a row, and how to compare it to the other rows underneath it, is not as simple, so I'll go over it in detail and then we'll look at some more of them and the differences, and hopefully that helps you understand a little better. Okay, these columns are in a somewhat different order than the way you would read in English what is going on, so let's see if I can present this in a way that's helpful. I usually start here, with this particular row of the table. And I need to mention again that each row of the table corresponds to an action that can be performed on the ledger, a specific kind of write, and each row is a different one. There are fifty-some-odd rows, and those are all the actions you can do. You can't add rows to the auth rules table. Some of you may have seen the table online and felt like, oh, I want to add a new rule; well, unless you go change the code so there's a new action to control, there isn't anything more to add. So when you change one of the auth rules you don't add rules, you just change them, to adjust how many signatures are required for that action. The first five columns here are static; you put all of those on the command line as-is, and the constraint is the part you're changing for that particular action. Each action is identified by those five values. So this first one here: we're adding a NYM, just a DID, with a role of zero. What is zero? I should have put it in, and I'll add it to the handout later: zero is a trustee. So this row is "add a new trustee to the network." And the old value is a dash here, and in the command you'll see later it becomes a star.
That's because if you're adding a new one with the add action, it doesn't exist yet; there is no old value. So we're going to add a DID to the ledger, and that DID is going to be a trustee. How many signatures does it take, and what kind of signatures, to do that? That's what's in the constraint field over here, along with whether you need to be the owner of the thing you're changing. For that action, right now, it requires one trustee signature on the Indicio TestNet, which is the network I connected to for the example. So that's the basic explanation of how to read one of these lines. Now let's jump back over to the other screen to give you a better idea. That's probably kind of small; let me see if I can make it bigger. So, ledger get-auth-rule, and here's the output. This may be a little small, apologies for that, but if we scroll up to the top of the command output we see the row I was showing you on the slide; it's the first one, and you can also see there's a whole bunch of them, like I mentioned. Each one corresponds to a row in the auth rules table, and each row is a different action that can be performed on the ledger. Like I said, these identifiers don't change; they're just how you refer to the row. So if you want to change who can add a steward DID to the ledger, you'd go to that row and change it. If you wanted stewards to be able to add other stewards, you'd put a 2 over here for the role and another number here for the count, say 3, and then you could say three stewards can add a new steward if you want to. And if you want to say a trustee or three stewards can add it, then you put an OR into the constraint; we have one down here with an OR in it. Adding an author: see how the role is blank here? That's the author DID. If you want to add an author DID to the ledger, you have to be a trustee, the zero here, or, one-oh-one is an endorser, so an endorser or a trustee can add an author NYM to the Indicio TestNet. As you scroll down through here, some of these look a little more complex, but this one is not as hard as it looks: it's an OR, and see how the need-to-be-owner flag is true? Essentially it just lists all the roles and says anyone can add an attribute to their own DID. The schema one is just like the earlier one, where you have to be an endorser or a trustee; that's not how it looks on the MainNet, where I'm pretty sure only an endorser can add a schema. Oh, and look at this: there's an entry in the table for editing a schema, and it's forbidden. You can't do that; when you change a schema you have to add a new one with a new version number, and so on. Then there's the credential definition stuff as we go down through here, some of the actions people can run, and here are some of our pool commands, which we'll talk about a little more in just a minute. The only people who can do pool commands are the trustees. And again, on the MainNet all of these require three signatures; we don't want a single trustee going out and running commands on the network without everyone else being able to oversee it.
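If you would rather pull those rules programmatically than read the CLI table, here is a minimal sketch using the python3-indy wrapper; the handles and DID are placeholders, and the exact shape of the reply is from memory, so treat the field names as approximate.

```python
import json

from indy import ledger


async def dump_auth_rules(pool_handle, my_did):
    # With no filters, GET_AUTH_RULE returns the whole table (fifty-some rows).
    req = await ledger.build_get_auth_rule_request(
        my_did, None, None, None, None, None)
    reply = json.loads(await ledger.submit_request(pool_handle, req))
    for rule in reply["result"]["data"]:
        # Each entry identifies an action (txn type, ADD/EDIT, field,
        # old/new value) plus the constraint that governs it.
        print(rule["auth_type"], rule["auth_action"], rule["field"],
              rule.get("old_value"), rule.get("new_value"),
              "->", rule["constraint"])
```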
Okay, I think that's enough of looking at that; let's go back to our slides. What we're seeing in the slides is the actual command to change one of the auth rules, and we'll switch back and forth a little so you can see that we've filled in the action field; the example I'm showing is exactly the row from before, so the action field has ADD on it, then the role field and the NYM type. All of those have to match, and like I said earlier, the dash needs to be a star when you send the command in. All of those values match the row, and then you change the constraint; the part you change is this part right here, which says how many signatures can perform that particular action. So these fields define the action you want to control on the network; if you want to add a trustee, this is the constraint for it. And this command actually changes it: where the last slide said one trustee, now three trustees are required. If I were to send this command, three trustee signatures would be required to add a new trustee to the network. Okay, so now that I've told you it takes three signatures to do something, I should tell you a little about how to add signatures to a transaction: how do you create a transaction and then get multiple signatures on it in the Indy CLI? This is the command right here. Essentially you add send=false to the end of your normal command. This example is adding a pretend DID; DIDs are longer than this and verkeys are longer, but I shortened them for the example. We have a DID and a verkey with a role of trustee. This is the action we just changed the rule for. Then we put send=false on the end, and what that does is spit the transaction out to the screen so that you can then send it to the people who need to sign it. If you're a trustee and you run this, your signature will be on the transaction, and you need to send it to two other trustees to get them to sign it. You can actually send it to both of them at the same time, and the resulting signatures just get added to the signature list. When they've signed it, this is the command you tell them to run: ledger sign-multi txn= and then the result of the first command. They return that to you with their signature in it, and as an administrator you can then combine the signatures and send the transaction to the ledger, and it becomes whatever it was you needed to send. So that's sign-multi in the Indy CLI.
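Here is a rough python3-indy equivalent of that change-the-rule-and-gather-signatures flow; the constraint, DIDs, wallets, and handles are placeholder values, and in practice each trustee would run the multi-sign step against their own wallet on their own machine rather than in one loop like this.

```python
import json

from indy import ledger


async def require_three_trustees_for_new_trustee(pool_handle, wallets, trustee_dids):
    # The action part (txn type NYM, ADD, field "role", new value "0") must
    # match the row from GET_AUTH_RULE; only the constraint changes.
    # The CLI shows the old value as a dash/star; here it is simply omitted.
    constraint = json.dumps({
        "constraint_id": "ROLE",
        "role": "0",               # signatures must come from trustees
        "sig_count": 3,            # three of them
        "need_to_be_owner": False,
        "metadata": {},
    })
    req = await ledger.build_auth_rule_request(
        trustee_dids[0], "NYM", "ADD", "role", None, "0", constraint)

    # Equivalent of send=false plus ledger sign-multi: each trustee adds a
    # signature to the same request, and then it is submitted once.
    for wallet_handle, signer_did in zip(wallets, trustee_dids):
        req = await ledger.multi_sign_request(wallet_handle, signer_did, req)

    return await ledger.submit_request(pool_handle, req)
```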
Then there are a few other things administrators need to do; they need to be able to do some node management. So we type ledger node, and there are lots of different options. Let me find my window... there we go. Back in the Indy CLI, here are some of the options for ledger node and how to get help for it if you need it: ledger node help gives you the help, and look at all these different parameters. They give an example of adding a node to the ledger, where we have a target, which is what came from running the init step for indy-node on your node, and then the node IP and port, the client IP and port, and an alias. Make sure you use the same alias everywhere; a gotcha is that using a different alias causes all kinds of trouble, because that alias name is used in directory names and other places. Then services=VALIDATOR is what turns your node on as a validator. If we were doing a demotion, or removing it from the ledger, we'd say services= with nothing after it. This example is adding a node, though, and when you add it you usually set it up as a validator, and you also have to use your BLS key and your BLS key proof of possession to get it on the ledger. The second example here is just changing the node IP and the client IP. You don't want to do that for a genesis node; for nodes that are added later it's okay to change their IP addresses, and it doesn't cause many problems. So those are some of the ledger node commands, and services= blank versus services=VALIDATOR is what adds or removes the node from participating in consensus, from being part of the ledger. There's also another command called pool-restart, where you just list the aliases of the nodes you want to restart. A lot of times a node will start having some trouble, and as an admin I will just go do a ledger pool restart with that node's name to see if it fixes it, and about half the time it does; sometimes a node just needs a little restart to get things going. And then this last one: as an admin you'll be managing the NYMs that are out there. For example, if you want to add a new node to your network, you'll have to add a new steward so they can run the commands to add their node to the network. Each steward can run one node on the network, so you have to add that steward's DID to the network before they can run the ledger node commands. There are a few different options for that, and I'll let you try those out and figure them out on your own. So, this is basically a repeat of the overview slide: we talked a little about network management and why we need it, then the TAA and the auth rules, which we spent a long time on because they're a little confusing; I'm happy to answer more questions on those. We also talked about network maintenance, what some of the roles do, and how they all work. Hopefully that gives you a bit of terminology background and understanding as you dig into indy-node and how it all works. That's all I had.
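Before the questions, here is a rough python3-indy equivalent of those node commands; the DIDs, keys, and IP addresses are placeholder values, and a demotion is the same request with an empty services list.

```python
import json

from indy import ledger


async def add_or_demote_node(pool_handle, steward_wallet, steward_did, node_did):
    node_data = json.dumps({
        "alias": "ExampleNode",            # keep this identical everywhere
        "node_ip": "10.0.0.2", "node_port": 9701,
        "client_ip": "10.0.0.2", "client_port": 9702,
        "services": ["VALIDATOR"],         # use [] instead to demote the node
        "blskey": "example-bls-key",             # placeholders; real values come
        "blskey_pop": "example-bls-key-proof",   # from the node key init step
    })
    req = await ledger.build_node_request(steward_did, node_did, node_data)
    return await ledger.sign_and_submit_request(
        pool_handle, steward_wallet, steward_did, req)
```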
Are there questions I should answer before I move on? There are a couple in the chat that look pretty good for addressing. Yes, I see: what happens if a genesis node goes offline or is lost? Genesis nodes can go offline, and you're probably okay if one of them does. Essentially, every client that tries to connect to the network uses the IP addresses in the genesis file to find it, so if you only have four in there and you lose one, that might be okay. Once you've lost two or three, though, and only have one genesis node left, there can be a delay before the client is able to connect, so the initial nodes should remain as constant as you can keep them; it makes for a rough time if they don't. There are ways to update the genesis file to be bigger. I've done it once, and it's kind of a mess, so it's not preferred, but essentially you can replace the pool genesis file with a current copy of the pool ledger, the entire pool ledger. We did that once and it worked out okay. It just became a very large file that people had to download, with a whole bunch of stuff in it, and it looked a little funny, but it worked out okay. So there is a way to work around it. That answers your questions, Kevin and HappyJeep. Oh, there was another question: is there an easy way to read the genesis files via the Indy CLI? I don't know; I just download them and look at them as text files. I don't know that the Indy CLI has a way of displaying the genesis file; I've never seen one. And that's all I had. I think we're on to you, Adam, for Indy Plenum. We drew straws and Daniel came up short. Yeah, I'll actually let him take it; I'll assist wherever needed. I'll briefly add, anecdotally, to Lynn's answer about genesis nodes being offline: there was a period during the development of Aries Cloud Agent Python, and actually I think it went all the way back to the Indy SDK, where an error would be displayed if it failed to connect to one of the genesis nodes. For the Sovrin BuilderNet there were a few nodes that had been decommissioned or something along those lines, so several errors would show up on startup, but it never actually hindered performance, at least in that case. So there is some resilience to the unavailability of genesis nodes. Cool. So I will be talking about Indy Plenum. I'll start off with a disclaimer. Nathan George, on the Hyperledger technical steering committee, has a phrase he always used to say: "I'm not a cryptographer, but I play one on TV." I will say that I am not a cryptographer, nor do I play one on TV, so I cannot claim to be an expert on the topics brought up here. But I know at least enough to share my knowledge, and rather than answering some questions directly I might direct you to better resources than myself; hopefully that's adequate for the questions that come up today. (Hey Daniel, sorry to interrupt. Hey Lynn, I still see your screen; could you stop the share? Thank you. Continue, Daniel, sorry. No problem, thank you.) Okay, so coming back to our stack diagram to orient us to where we are: we've talked about these portions, and we are now at Indy Plenum, which depends on Ursa through the Ursa Python wrapper. Hyperledger Indy Plenum is a Python project, and according to its own project description it is a Byzantine fault tolerant protocol. What that means, in a few more words, is that it provides a framework for consensus which is specialized for identity systems, and it is what we use within Hyperledger Indy networks for consensus. There was actually a question brought up in the chat, a really good one, that Stephen Curran happened to answer off the cuff; I don't even think he's officially attending this workshop, but he had a really good answer. It was a contrast between Hyperledger Indy and Hyperledger Fabric: why would we use Indy over Fabric? The very short, condensed answer is that Hyperledger Indy is purpose-built for decentralized identity. Everything about the structure of the project is oriented towards supporting DIDs and the AnonCreds verifiable-credential objects that go on the ledger, so in that sense Indy is crucial for an AnonCreds-based credentialing system. Using Fabric, it's also possible to have a store for DIDs.
And sometimes that's sufficient for a verifiable-credential use case, specifically when you're not using AnonCreds; if you're just going with JSON-LD W3C credentials, you could do that on any network. So Indy is specialized for identity systems. There's a question from Jim: why wouldn't Indy build a service that Fabric could use for identity and credential management, as an option to the MSP support? I'll be honest, I'm not sure what MSP stands for, but I will say that there aren't any strong technical reasons why there couldn't be opportunities for cross-integration between Fabric and Indy. I'll dig a little more into Plenum and some of the Indy-specific parts of the stack, and that might help clarify where the integration points between Indy and Fabric are; hopefully that answers your question a bit more in depth. So: Plenum uses RBFT as the consensus algorithm, which stands for Redundant Byzantine Fault Tolerance; I'll talk a little more about that in a moment. Plenum communicates via CurveZMQ, which is an encrypted version of the ZeroMQ protocol. However, Plenum also provides abstractions for implementing other communication protocols if you're interested in doing that. All production deployments of Indy today just use the standard built-in CurveZMQ, but there's room for other transports. State storage within Plenum is handled by a Python implementation of Ethereum's Merkle Patricia trie data structure, and the actual data itself is stored using a key-value store. I believe there's support for everything from in-memory on up; I'm blanking on some of the other implementations I saw in there, but LevelDB is what's used most commonly in production deployments today, specifically RocksDB, if I'm not mistaken. Is that right, Adam? Yes, RocksDB, absolutely. Which is, I believe, a fork of the LevelDB project. There's a good question in the chat here that I'll briefly address: can Indy be used in place of digital signatures and data signing, encryption, and decryption? Indy, as a stack, is a ledger designed to support a decentralized identity ecosystem, built around DIDs, decentralized identifiers, which have keys associated with them through a DID document. The abstract way to think about DIDs, and ledgers supporting DIDs, is that you have a global key-value store where the key is a DID and the value is the DID document. Inside that DID document you can have any number of keys and specialize those keys to one use case or another, such as: this key will be used for signing, this key will be used for a symmetric-encryption key-derivation process (as opposed to putting the symmetric encryption key directly in the DID doc, of course). So to answer the question in short: Indy has crypto primitives associated with the objects you can create and use with Indy to do signing, encryption, and decryption, so it's pretty flexible in that sense. To expand on that a little more, verifiable credentials, which Indy supports through the anonymous credentials (AnonCreds) system, use CL signatures, Camenisch-Lysyanskaya signatures (I'm not sure I pronounced that correctly), which are a signature type that supports zero-knowledge proofs: being able to prove something about the value a credential contains, and prove it cryptographically to another party, without actually disclosing that value.
So I would say Indy supports more than just the standard signing process and goes on to implement even more privacy-preserving variations of signing data. There's a good question in the chat, can the master secret be accessed, that I'm going to direct to Rocket Chat; I think Adam can assist a little with that one there. Okay, coming back to Plenum to round off the last thing I said: data storage is handled by LevelDB in production networks, RocksDB more specifically. So, what is a consensus algorithm? I'll start again with another disclaimer that I'm not a complete expert on all things consensus. There is a huge amount of academic literature on the subject, so I will not pretend to be in that space in terms of research and formal descriptions of these consensus algorithms; I'll give the boiled-down, layman's version of what a consensus algorithm is. In my own words, it is the process by which multiple actors coordinate to agree on some data value. We have peers collaborating in order to determine what "truth" is, and I put truth in quotes because it is a process determined by what the whole group agrees upon by a majority. These consensus algorithms are designed to prevent bad actors from injecting values into the network that are considered false, at least by consensus, to use the term recursively. Ultimately, it is through these consensus algorithms that we achieve decentralization of what truth is. Rather than having one centralized authority, where all parties except the central authority are dependent on it for what truth is, it is a collaborative process, and it is possible for disagreements to occur, whether legitimate or illegitimate; the consensus algorithm is what's required in order to have strictly ordered (in most cases) and agreed-upon values for what truth is. To call out a few consensus algorithms in the wild, and again this is not super detailed by any means, more an allusion to some algorithms that are out there: we have proof-of-work-based algorithms, such as Bitcoin and Ethereum as of today (I believe that's still the case unless I've missed some news). These are consensus algorithms where consensus is achieved, at least in part, by the proposer of the next state winning a cryptographic puzzle. This is a relatively energy-intensive process, but it allows anyone to join the pool of peers contributing to the consensus algorithm, so it is considered a permissionless system, and the data is accessible to anyone, so it is considered public. It's possible to deploy a proof-of-work system in a private environment as well and to permission who is able to join, so the fact that I call these out as public and permissionless is not universal by any means, but it tends to be how proof-of-work consensus algorithms are used. In contrast to proof of work, we have proof-of-stake systems out in the wild as well. These are, again in very simple terms, based on individuals within the pool staking a certain amount of token as part of joining the consensus process. These again tend to be public and permissionless, but it's also possible, for instance, to deploy an Ethereum contract within a private environment, say for a government jurisdiction, and use it to bring down gas fees while still taking advantage of the decentralization such a network provides.
Separate from these proof-of-stake and proof-of-work systems, we usually have another grouped category of consensus algorithms that I'll broadly term BFT, Byzantine fault tolerance. If we were to take the exact definition of Byzantine fault tolerance, proof of work and proof of stake are technically Byzantine fault tolerant as well, so BFT here is more a reference to a specific family of consensus algorithms that still retain this tolerance property. In the BFT camp we have practical Byzantine fault tolerance (PBFT), which is what Plenum actually used to be based on. We also have redundant Byzantine fault tolerance (RBFT), which is what Plenum is currently based on. And then there are other Byzantine fault tolerant protocols out there, such as federated Byzantine fault tolerance, which I believe is used by Stellar, for instance, and there may be a few variations of that as well. These are again very highly technical distinctions, the differences between these protocols, or algorithms I should say. And of course there are many more consensus algorithms out there in the world today with the prevalence of cryptocurrencies and such. So Plenum uses the RBFT algorithm, as I said a moment ago. I will not be getting into the details of this algorithm today, but I have linked, in the handout (I'll drop another link in the chat in case you missed all my other postings of links), some very detailed academic papers on the protocol if you're interested, as well as a more layperson-friendly, at least to me, description of Indy Plenum specifically, which also contrasts it with the Raft consensus algorithm used by Fabric. Those links are in the handout. To give at least a vague idea of what's going on, though: a client, using the genesis transactions that Lynn was just talking about, discovers the network and sends a request to the nodes contained within those genesis transactions; the transaction is sent to each of those nodes, and the nodes go through a process of propagating that request to all the nodes in the network. Then there is a pre-prepare, prepare, and commit phase, and by the end of the commit phase consensus has been reached that this state is the next valid state for the ledger; the data is committed and a reply is sent back to the client. In the RBFT version of the protocol there's a redundant process going on, where I believe the pre-prepare, prepare, and commit phases are performed a number of times, perhaps across different instances or portions of the network; again, I'll direct you to the paper for more details on that. So, why RBFT for Plenum? There are a few benefits I'll call out, again coming from a layperson. First, low-latency finality. That's a term referring to the amount of time that elapses between submitting a transaction and being sure it's actually committed to the ledger, and for Indy it is low compared to other ledgers. As a quick anecdotal example: when you submit a transaction to Bitcoin, it can be some time before you're actually sure that your transaction has taken and will persist in what is considered the main fork of the Bitcoin ledger. In contrast, the time it takes for the nodes within an Indy network to agree upon the state of things is pretty darn quick.
That is very beneficial, especially in the context of a decentralized identity system, where we don't need, and especially don't want, to be waiting around for my schema to be written to the ledger before I'm able to create a credential definition from that schema and subsequently issue credentials. I think there are even occasions on other ledgers where you're never totally certain that your transaction was ever written to the ledger, and there are DID methods out there that work with this limitation of those networks; it's workable. But in the case of Indy, this low-latency finality is a definite advantage. Also, as a result of not being dependent on a proof-of-work system, nodes are more or less idle while not handling requests. We're not constantly churning through hashes to find the answer to the next cryptographic puzzle; the nodes can just sit there and wait for the next request to be received before waking up, doing any work, and using energy. Additionally, node operators are not incentivized to win the next block in the ledger. This tends to mean that the nodes are more inherently altruistic: they're participating in the consensus algorithm because they believe in the principles of decentralized identity and would like to make that system a reality. That said, it's not impossible to incentivize nodes to participate and behave well; it's just not an inherent part of the consensus algorithm itself. And finally, and most importantly in my opinion, it gets the job done. What I mean by that is that within the Indy ecosystem we recognize that it is crucial to have a decentralized root of trust. We need that in order to break the power imbalance between holders of credentials and the issuers of those credentials, and to make it so a verifier and issuer are unable to collude about the holder without the holder's permission, or to go straight to the issuer for the data without the consent of the holder. That decentralized root of trust is absolutely crucial to the system. However, the consensus algorithm itself is, in my unfiltered opinion, one of the least interesting aspects of Indy. We need it, we need it to work well, and we need it to be adequate for its purposes, but when it comes down to it, it's the systems we're building on top of this consensus algorithm, on top of this ledger, that are far more interesting and far more important. For the most part, if another consensus algorithm proved itself to be of better value to the Indy ecosystem, I don't think the Indy community would have qualms about updating to another system. Okay, I think that's all I've got to say on that one. So, in the stack diagram we looked at a moment ago, we are building up towards the indy-node section of the stack, so it warrants a little discussion of what the relationship between Plenum and node is. This is an analogy I like to use in my own brain to help understand the relationship; it's not a perfect analogy, so take it for what it's worth. Web frameworks provide tooling for users of the framework to write request handlers. If I go to Flask, for instance, in Python, it provides a nice set of decorators around methods so I can write a handler for a given path.
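To make the analogy concrete, a minimal Flask handler looks something like this; Plenum's request handlers play the analogous role, only for ledger transactions rather than HTTP paths.

```python
from flask import Flask

app = Flask(__name__)


@app.route("/hello")          # the decorator registers a handler for this path
def hello():
    return "hello from a request handler"
```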
Plenum, in like manner, provides tooling for its users, in this case very specifically indy-node, to write their own transaction request handlers. Plenum handles the committing of the transactions, taking care of all that consensus machinery in the background, distributing the transaction to the nodes, and ultimately arriving at the committed state; Plenum does most of the heavy lifting for that, at least. Plenum provides base classes for handlers, for both read and write operations on the network, and Plenum itself extends those handlers for the transaction types that are core to running the network. To elaborate a little more: Plenum has support for the core transactions, the ones fundamental to the running of the network itself. I believe this is mostly limited to basic NYM support and the pool-management side of things, such as introducing a new node to the network; there's at least a core set of functionality within Plenum to provide that, so Plenum can stand alone and is not dependent on an implementation extending it in order to be functional. Indy-node then goes on to add additional handlers for all the other Indy transactions, and we'll get into more detail about some of those as we discuss node, but a quick sampling: schemas and retrieving schemas, cred defs and retrieving those cred defs, and attributes and retrieving attributes. In this next portion I'm going to talk a little more about these base classes, in preparation for the next part of the workshop, which is hands-on with node; we'll see classes inheriting from these classes inside of node. Plenum, as I said, exposes a base class for request handlers. There is a write request handler, which inherits from that request handler and is inherited by the handlers for all transactions that result in a write to the network. Plenum also has a read request handler, which in like manner is inherited by each of the read transaction handlers for the network and also inherits from the base request handler. These request handler base classes have a set of abstract methods that are implemented by subclasses. Static validation is on the base request handler, so in other words it is common to both reads and writes. It is validation of the transaction request itself: for example, is the role associated with this NYM transaction actually in the list of known roles for the network? That's what is referred to by static validation. The write request handler has an additional dynamic validation, and this is the sort of validation you want to check later on in the process of handling a transaction, when you're actually checking whether the result of committing the transaction to the state would be valid. For instance, a NYM already exists on the ledger, and the transaction request is updating the verkey, the verification key, associated with that NYM: dynamic validation ensures that the transaction itself is properly signed for the target NYM, that the NYM already exists on the ledger, and any other validation that goes beyond just checking the raw request as it was received. Also unique to the write request handler is the update_state method, and if we go back to the web framework analogy, this would be the handle method: it is what actually performs the state transition for the write to the network. For most write request handlers, update_state is the interesting bit of code to keep an eye on.
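To make that shape concrete, here is a deliberately simplified, self-contained sketch of the write-handler pattern. These are not Plenum's actual classes or method signatures, just the same three-method structure: static validation, dynamic validation against current state, then the state update.

```python
# Schematic sketch of the Plenum-style write handler pattern. The base class,
# method signatures, and state object are simplified stand-ins, not the real
# plenum.server.request_handlers API.


class WriteRequestHandler:
    txn_type = None

    def static_validation(self, request: dict):
        """Checks against the request alone (no ledger state needed)."""
        raise NotImplementedError

    def dynamic_validation(self, request: dict, state: dict):
        """Checks that need the current ledger state (authorization, etc.)."""
        raise NotImplementedError

    def update_state(self, request: dict, state: dict):
        """Applies the transaction: the actual state transition."""
        raise NotImplementedError


class ToyNymHandler(WriteRequestHandler):
    txn_type = "1"  # the NYM transaction type code

    def static_validation(self, request):
        op = request["operation"]
        if op["type"] != self.txn_type or "dest" not in op:
            raise ValueError("not a well-formed NYM request")

    def dynamic_validation(self, request, state):
        op = request["operation"]
        # Toy authorization rule: only the owner may update an existing NYM.
        if op["dest"] in state and request["identifier"] != op["dest"]:
            raise PermissionError("only the owner may update an existing NYM")

    def update_state(self, request, state):
        op = request["operation"]
        existing = state.get(op["dest"], {})
        existing.update({k: op[k] for k in ("verkey", "role") if k in op})
        state[op["dest"]] = existing
```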
And then finally, the read request handler has a get_result method, which is the handle equivalent for reads: just get the data and give it back to the client. As a brief example, here is Plenum's NYM handler. I've omitted most of the actual implementations of these methods, mostly because the NYM handler within node and Plenum's version of the NYM handler are actually quite different, so to avoid confusion about what's taking place in each, I've left out some of those details. But this is a NYM handler; it inherits from the write request handler. We have a generic Python init, a static_validation in which nothing takes place in the base Plenum NYM handler (a little more occurs in the node version), and then the additional dynamic_validation; as you can see, the first line grabs some data out of the request and proceeds to check it. And then the state is updated with this update_state method here. So, as a summary of what we've just gone over: we are looking at Indy Plenum in the stack, we'll be looking at indy-node next, and just to reiterate, Indy Plenum provides the framework that indy-node builds upon to implement all the actual interesting transactions that take place in an Indy network. Okay, I'm going to do a quick scan through the questions in the chat and see if there's anything I should answer directly or that's already being handled; I think there are some good answers going on there already, so I'll refrain from addressing any of them directly. So next we have another brief break. It's 1:28 my time. James, should we go for five or ten minutes on this one? I think five minutes is all right unless anyone has objections. Okay, we'll make it around seven minutes and return at 35 minutes after the hour. Sounds great. I'm going to jump in and add something for anybody still here who hasn't run off to the break: I've been looking at the Indy channel on Rocket Chat, and I'm super happy to see that other people from the community, and from this call, are answering questions and working together as a community effort on some of the issues people are having; I think that's awesome. I'll also go ahead and answer a question I saw from Harrison Tang in the Zoom chat, a great question about the incentives for running a node on a network. It depends on the way the network is set up and what governance is in place; some network cooperatives have different incentive programs. But really, at the heart of it, it is a volunteer program, where companies or organizations interested in developing decentralized identity might choose to operate a node to get familiar with the technology and get an in-depth view of how it works. We've seen that organizations that run nodes typically have a shorter learning curve in getting their own decentralized identity solutions launched on a production-level network. For example, at Indicio we have 26 volunteer node operator organizations; they've chosen to operate nodes to support the various ledgers, the MainNet as well as the demo and test networks.
Indicio does offer incentives in terms of free resources and free training for node operator participants, to provide them additional value, but at the end of the day it is a volunteer program, and other networks might have different programs. So I hope that answers your question. All right, I think that's five minutes. Can I get a confirmation that the intermission slide is being shared? Great. Thank you, Daniel, that was an awesome dive into Plenum. As a reminder, the handout is needed, the prerequisites are important if you haven't done them yet, and for any questions please ask on Rocket Chat. So, we've been building up to this: indy-node. It's built on top of Plenum, which is consuming Ursa, and just as a reminder, here's where we are in the stack. We're going to do a quick overview of it, then do some code exploration, and then we're going to actually try to add some code as a hands-on exercise. Indy-node is written in Python. It extends Plenum, but it is not Plenum, and there's no circular dependency of Plenum on node: node extends Plenum. There used to be some circular dependency, but we've moved past that. Node defines, as Daniel mentioned, much like handlers in a server, the transactions, their schemas, and ultimately the interface used by clients. We've looked at Plenum's write request code and the abstract methods that indy-node implements, as well as the read transactions that also have abstract methods node implements. Indy-node provides the decentralized root of trust needed to enable verifiable credential exchange and to enable identity owners to choose with whom they share their data. We've talked about this several times, but just to hammer it home: in this model there needs to be a place where a verifier can go to get the public information needed to verify a holder's credentials, and this is important because it removes the ability to discriminate against individuals based on how they use their credentials. The ledger, of course, stores those public cryptographic artifacts. Before we jump into the code, or even think about code, I wanted to mention that there's actually a README with guidelines on how to contribute. The project prefers test-driven development and wants to see tests implemented for every feature that's added. They want documentation updated with important information about things that have changed, and as you're coding, if you find problems in the code, add TODO and FIXME comments so they're easily searchable, as well as raising issues on GitHub where appropriate. Previously a Jira board was used for issue and development tracking; I believe we have now switched over to GitHub, and this is recent enough that if you wanted an easy pull request on the repo, you could go update the documentation to reflect that. We follow PEP 8 coding standards. When adding features we want to follow the separation-of-concerns principle, staying object-oriented and using polymorphism with inheritance. And, I think Sam may even have talked about this in his contributor notes: on the Hyperledger indy-node repo it's important that you add a commit sign-off. It's essentially your name and email saying that you are contributing this to the project as open source and that you don't retain ownership over it, I guess; I'm not a legal expert on that, so you can ask questions in chat.
It's a headache to go and correct after the fact, so remember, it's just adding -s to your commits in git on the command line. Don't code or develop in a vacuum; we're trying to be an active community. Features often start as HIPEs, Hyperledger Indy Protocol Enhancements, and then issues are created in GitHub, where you can assign yourself and go develop that feature. Often, for small improvements, a PR is engagement enough, but before going and doing something big, make sure you're involved, so that we're able to merge your contribution back meaningfully and we're not stepping on each other's toes. All right, code exploration. I think Daniel wanted to break the ice on the code exploration; Daniel, is this where you wanted to jump in? And then we can get into some of the code. There are certain places to look as you're becoming familiar with the repo. There's a dev setup markdown file that explains how to set up your environment; the prerequisites are essentially the first line of that document, and it also outlines how to set up Visual Studio Code. In the prereqs we tried to be more general and just have you set up Docker, but I highly suggest using Visual Studio Code; I believe that's the preferred method in the dev setup. There's a troubleshooting document, which is an easy place to go if you get stuck and want to see if there's helpful advice, and there's also Rocket Chat. When you make a PR, you can go to the Indy Rocket Chat channel and say, hey, please review my PR. There are, as we just mentioned, the code guidelines; we went over the important parts of those. Historically, new features coming into the project have had a design document created beforehand, so going into the design directory you can find AnonCreds and, I believe, some documents on the consensus algorithm, view changes, and how those procedures work. Opening up Plenum's docs on consensus can also help a lot, specifically around static validation and at what point it's done during consensus, the additional dynamic validation, and, as Daniel spoke about earlier, the update_state method. Now, the moment we have all been waiting for: we have no more slides. This is off the rails; anything can happen, no holds barred. Daniel, do you want to break the ice? Yeah, let's go for it. Okay. So, as Adam said, this is the hands-on portion, but I will do a bit of a teaser: after we go through the hands-on portion, we will be talking about the future of Indy and some of the exciting developments going on right now within the Indy community. So even if you're not planning to follow along with the code, which I highly recommend, definitely stick around for that discussion at the end. As Adam said, I am going to break the ice, so to speak, remove some of the fear surrounding the code base, and hopefully get you comfortable with making changes within the indy-node project. So I have pulled up here the indy-node project, the indy_node package, and within server we have the request handlers; there are quite a few in here. We'll start off looking at the domain request handlers, and then the NYM handler found within those; that's what I've got open here. I'm just going to do a pretty high-level walkthrough of what some of the code in here is doing.
This could take a really long time if we drilled down into all the details, so I might go down a path a little and then backtrack, for the sake of getting on with the actual hands-on portion of this workshop. But here we are: this is the NYM handler as implemented within node. We see that it is implementing, or extending rather, the handler from Plenum. I showed this briefly, but I'm also going to jump to the definition here and briefly show that it does differ, jumping back, from what is implemented within node. To call out, at a high level, the methods we see in here one more time: we have static_validation, which again is the validation of the transaction request itself; we have the additional dynamic_validation, which is validation of whether this operation makes sense if applied to the current state; there are a number of helpers sprinkled throughout; and then update_state is the main entry point for the actual writing process that takes place. I'll circle back around and talk through each of those in a little more detail. In static_validation, we first validate the request type. If I jump to the definition of that method, all it's doing is taking the request type, the transaction type, out of the request and comparing it to what is declared to be the transaction type of the current handler; jumping back, it's making sure that a request arriving at the NYM handler is actually a NYM request, and failing if not. We extract data from the request: the identifier corresponds to the DID submitting the transaction, the reqId or request ID is a unique identifier for the request, and the operation is what is actually taking place in this transaction. We go through, we pull out the role, we make sure there is a NYM present within the operation (it doesn't make sense to perform an operation on a NYM without specifying that NYM), and we also make sure that the role associated with the NYM in the request is a valid role. This is not an authorization check; it's just checking whether the role is within the set of known roles and failing otherwise. Authorization occurs during the dynamic validation step, which we'll move on to now. So: we have a redundant request type check, we pull out that operation again, and then we read back from the ledger to see whether this NYM already exists on the ledger or not. If it does, that indicates it is potentially a NYM being updated by this transaction; if it does not, it means we are publishing a new NYM to the ledger. The validation here gets a little nuanced, but I'll do a quick scan through and see if there are any points to call out briefly. Yes: it goes on to check the authorization, and if it's an existing NYM, it makes sure the operation is just updating the verkey, as opposed to, you know, attempting to target a different NYM or something like that. Now we'll move on to update_state. Again, a redundant transaction type check. We retrieve the NYM out of the transaction payload, we determine whether it exists already, and if it does we pull the details associated with that NYM. Then we go through the process of determining what the resulting state is after we apply this transaction: we update a new empty set of data with values from the transaction, and then update the existing data with the new values we just associated with it.
The value is then serialized. I'm going to step into this one real fast; this is more or less just performing a JSON encode. And then we determine what key will be used to reference this NYM from the ledger in the future, which, if I take a glance at it, is just taking the hash of the NYM value and using that as the key in the ledger state. And then the state is set. Behind the scenes, this is actually taking care of all of the commits and, you know, updating the trie and all of that for us. So just with a simple self.state.set, we are able to add this NYM to the ledger. So that's a write request. I'll now take some time to look through a read request. Since we looked at the NYM transaction, I'll look at the corresponding read transaction for NYM. We have here the get NYM handler, which inherits from the read request handler, which is defined in Plenum as we discussed, and it has the core method here of get result. This will likewise do a request type validation. That could have technically been performed in a static validation method instead of directly here, but it is just folded into get result. We pull out the target NYM from the request operation. We construct the path for that NYM in the state, and this, I believe, corresponds to, yeah, it's doing more or less the same operation as determining the key for the state previously, just using a different helper for that. And then we retrieve from the state the path identified by that value we just determined. Then, if something was returned, we populate some values; if nothing was returned, we insert none instead, and then we construct a read request result from the data that we've just pulled out and return it. It's pretty simple, I think. Again, we can get deeper and deeper and drill down further and further into the code and what's happening in the background, but at least from a high level I hope this gives a good overview of the different components and pieces that we see here. So we will be looking at this in just a moment. I also wanted to take a moment to look at a particular file within indy-node here. Inside of the indy_common package, there is a types file, and this is where the schemas for these different transactions are actually being defined. So the get NYM operation will have a transaction type associated with it and a target NYM, and that's it. I'm not sure what that is off the top of my head, so I'll go on to the schema field instead and look at that. Get schema will have a schema name and a schema version, optionally, I believe, and an origin, so we can say which identifier wrote the schema and use that to narrow the search for the schema. And this is what is expected in a get schema request. Let's go to one that's actually writing something. Yeah, schema field. So a similar set of schema attributes here, schema name, version, then we have the actual attributes associated with the schema. And this is a set of things of limited length, with a minimum length of one and a maximum length of the schema attributes limit. Yeah. So these fields are aggregated down below, let me skip ahead to that, in the client operation field. So for each transaction type that exists, the schema is paired with it, and that way we can have one central field attribute that we can use to check the operation type for a given transaction. And we will be coming back to this in just a moment with Adam in the hands-on.
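To pull together the write and read paths walked through above, here is a minimal, self-contained sketch of the overall shape. It is not the actual indy-node or Plenum code: a plain dict stands in for the state trie, the transaction type code and DID value are illustrative, and the function names only mirror the roles of static validation, dynamic validation, update state, and get result described in the walkthrough.

```python
import hashlib
import json

# A minimal, illustrative sketch of the write/read shape described above.
# The real NymHandler lives in indy-node and extends Plenum's handlers;
# here a plain dict stands in for the state trie and every value is made up.

STATE = {}          # stand-in for self.state (the ledger state trie)
NYM_TXN_TYPE = "1"  # illustrative transaction type code


def state_key(nym: str) -> bytes:
    # Key the NYM by a hash of its value, mirroring the keying described above.
    return hashlib.sha256(nym.encode()).digest()


def static_validation(request: dict) -> None:
    # Cheap checks on the request itself: right type, required fields present.
    if request.get("operation", {}).get("type") != NYM_TXN_TYPE:
        raise ValueError("wrong transaction type for this handler")
    if "dest" not in request["operation"]:
        raise ValueError("a NYM operation must name a target NYM")


def dynamic_validation(request: dict) -> None:
    # Checks against current state: is this a new NYM or an update?
    # (The real code also consults the authorization rules here.)
    request["_is_update"] = state_key(request["operation"]["dest"]) in STATE


def update_state(txn: dict) -> None:
    # Serialize the value (roughly a JSON encode) and store it under the NYM key;
    # the real code does this with self.state.set(...).
    nym = txn["operation"]["dest"]
    value = json.dumps({k: v for k, v in txn["operation"].items() if k != "type"})
    STATE[state_key(nym)] = value


def get_result(nym: str) -> dict:
    # The read side: rebuild the same key, look it up, and wrap it in a reply.
    raw = STATE.get(state_key(nym))
    return {"dest": nym, "data": json.loads(raw) if raw else None}


if __name__ == "__main__":
    req = {"operation": {"type": NYM_TXN_TYPE, "dest": "V4SGRU86Z58d6TV7PBUe6f", "role": "101"}}
    static_validation(req)
    dynamic_validation(req)
    update_state(req)
    print(get_result("V4SGRU86Z58d6TV7PBUe6f"))
```

The point is the symmetry: the write side derives a key from the NYM and sets the serialized value, and the read side rebuilds the same key to look it up.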
I think it might be helpful, now that we've just reviewed the NYM handler and seen the different steps it takes to write the transaction, to look at a diagram I've got pulled up here that I'll share real fast. I'm going to do a bit of the screen-share juggle. Take a look at this diagram here; hopefully that's a decent size for you. This is a diagram depicting what is occurring when a write request is received. So the request is first parsed, that static validation step we just discussed is checked, and it goes on to do signature verification, and then there's how the request is processed within the RBFT algorithm; we won't dig too deep into the details there. Mostly I'm just pointing out the stuff we just looked at. And then this is where the dynamic validation occurs and whether the transaction passes. If it doesn't pass, then the request is rejected very early in the process, before incurring, you know, all the effort of checking whether the transaction is already on the ledger or whether it needs to be propagated and so on. Once the dynamic validation has passed, it finishes ordering on the node, commits to the ledger, and then replies back to the client. And this diagram is available in the Plenum Read the Docs, underneath Plenum diagrams, under write requests. Since we're here, I figured it's also a good time to look at the read request; that's quite small, let me see if I can zoom a little bit. I didn't do what I wanted. The read request has a similar flow to it, except it doesn't have the dynamic validation process: it enters, parses the request, get transaction, yes, static validation. I'm not exactly sure what this branch of it is, so I will direct you to other documentation for more details on what this check is. In the base case, I believe, we'll be hitting the static validation and then getting the reply from the state trie, and a state proof is associated with the object that is retrieved. Any other associated verification, cryptographically, that this transaction was actually written to the ledger is included in the reply. Daniel? Yeah, I believe that get transaction branch is checking whether it's a write or read request. I see. Okay. Interesting. Yes, sorry, continue. No, yeah, so that's about as far as I was going to take it before handing it off to Adam for our next steps. So, any questions I can answer for this portion before we start digging into code? I'll be watching chat just in case. Not seeing anything immediately popping up. So AJ brings up the question, are we also going to be talking about the Indy SDK? For this hands-on portion we'll be focusing on indy-node and development for indy-node. The SDK was talked about a little bit earlier in our presentation, and that's as far as we're going to take it for this one, but we can definitely direct you to more resources on the SDK for those who are interested. Okay, I'll hand it off to Adam. All right, you can see code, right? Yep. Font size might be a little on the small side, if you can increase that at all. Is that better? Yes, thank you. So let's say you want to do something like put a GitHub profile picture on the ledger. You shouldn't do this; that would be considered private information and would violate the TAA. But if you're running your own private ledger and wanted to do it, well, why not update the NYM transaction to actually take that image? That's the idea around our hands-on programming.
So we're going to update the NYM transaction to take an image parameter that will be stored in the state, and also allow you to retrieve it. The NYM will hold not only keys, but also an image. Referring to the handouts, I've made quite the cheat sheet, and for your benefit, I'm going to try not to actually cheat for the first few minutes. To get your bearings in the repo: I believe I am current with the prereq, and I have the tests passing, they just run a little slower on my machine. Some of the directories I was talking about earlier are in the root: data, design, dev setup. There are scripts, and the docs directory is where you can find some of those other files. indy_common is where we're going to find the types. Daniel just went over some of this stuff, so this is a little bit redundant. Let's jump straight in. So I'm going to interrupt real fast and just briefly remind people that we're working from the Ubuntu 20.04 upgrade branch. You might have some difficulties if you're not running from that branch. Absolutely, like, raise your hand if you hit that. I have my hand raised, yeah, sorry about that, it should be bolded in a few places. The Docker container we're using is brand new, I think it was merged a week or two ago, so this is hot off the presses. Thank you, Daniel. I'm jumping in. Following the guidelines, we should create a test. Inside of the tests, if we go into indy_node, servers, and then the request handlers, we will find... I guess it's not in servers. Inside indy_node there's a test directory, and inside the test directory we'll find the NYM transactions. There are a few tests in here, I think only two, that do very, very basic stuff. But if we go into test NYM additional, there are quite a few tests in here that we'll be able to borrow logic from and extend. If we go to, I think I copied, let me look at my cheater notes, around line 90 is where I decided it was best, yes, there's a test to add a NYM. So we're going to create a test right inside here, and remind me, Daniel, what's the point of this test? The point of this test is to test whether our image attribute was written along with our NYM to the ledger. Thank you. When I envisioned this, I was envisioning something like an HTML image tag, so I have abbreviated image to img. Let's define test NYM image. I believe we're going to need looper as well as a pool handle. Looper is an asynchronous utility, do you know, Daniel? Yeah, it just gets a running event loop for asynchronously running the operation, yep. And then these fixtures provide the DIDs and writing methods to actually create transactions and send them. I realize a little bit of background knowledge is needed: for every test I believe there are four nodes, Alpha, Beta, Delta, Gamma maybe, or others, several nodes that spin up a tiny test ledger, per se. And in the genesis transactions that are given to create that test ledger, there's a seed that creates a steward DID, and there is a seed that creates an endorser DID. So if we go into these fixtures, we will find the creation of a DID with that seed, just so we're able to create these tests more easily. And to clarify just a tiny bit more, by having those DIDs available to us we can ensure that our transaction has the correct authorization to be processed by the ledger. Yes, I believe that's what I intended to convey. From previous tests we can see that it returns what looks like a DID, and possibly a verkey.
I believe we only need the DID, but we will capture the verkey. And the first step, what are we trying to do, asking Daniel as a springboard. That was rhetorical; we're preparing the transaction. Yes, we need to prepare, we need to modify, and we also need to... let's go that far and then think about the next steps. Okay, so looking at some of the other tests we can see the looper loop run until complete; we're going to need that, and to save time I copied from the handout. And we will prepare a NYM request. We'll be using the endorser DID, and I forget what that parameter is for; I don't think it's important, we don't need to look at it. The important thing is this is going to prepare a NYM transaction, it's not going to write it, but return the actual transaction. We are capturing that transaction and storing it as a request. And then we're going to add an attribute that the SDK doesn't know how to create. Right, since we're adding an image to the transaction, do we need to go update the SDK? We could, but for the sake of simplicity we can also just directly modify the transaction in accordance with the attribute we're adding. And then we're going to go ahead and grab the logo for Hyperledger Indy, I recovered that from my notes, and we'll just store it as the Hyperledger logo. The NYM request is in JSON, so we will have to json.loads it, and we will want to capture the result. That converts the request from a string into an actual Python dict, where we can then, waiting for Daniel to finish my sentence... we can then update or modify the transaction to include what we have stored. So, from trial and error we've found that the transaction has an operation, and at the operation level there is... Daniel, do you happen to have the documentation open for the transaction? I don't think so, I might be able to pull it up. But at the operation level there's, I think, a dest, a data, a few different things. We're going to be adding image to the transaction at that layer, at the operation level. And then we're going to want to send it, and in prepare and get NYM we can see that there's an SDK sign and send prepared request. That's the exact same method, and that's where I discovered it. I'm going to copy it from my cheater notes: sign and send prepared request, and we will provide the looper, the SDK wallet for the steward, this is because the steward is in the genesis transactions so we'll be able to sign and submit, and then the pool handle, and also the transaction as a string. To make sure that the transaction is done processing and that the test ledger has reached consensus, there's this SDK get and check replies. It's used a lot before any assertions are made, so, to follow standards, we'll throw that in. Now, we want to validate that that write transaction actually executed, right? So we want to do something very similar to a get NYM transaction. We can copy the get NYM and response, and then we can make a few assertions on the returned data. Then again, we're going to get the NYM using the looper, the pool handle, the steward, and then the DID. This is the DID of the endorser whose NYM we modified. I believe this is the steward wallet; I believe with a get NYM you don't even need to provide a wallet for retrieval, but that's a copy-paste artifact, and I don't want to change it live. We assert that the result has a data, and then that that data has an image, and that it is the one we provided previously.
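For readers following along later, here is a hedged sketch of roughly what the test assembled above looks like once the pieces are in place. The fixture and helper names (looper, sdk_pool_handle, sdk_wallet_steward, sdk_wallet_endorser, prepare_nym_request, sdk_sign_and_send_prepared_request, sdk_get_and_check_replies, sdk_get_nym) follow the workshop handout as described in the recording, so treat the exact signatures as approximate rather than as the definitive indy-node test API, and the logo URL is a placeholder. It is meant to sit next to the other NYM tests and lean on the indy-node test fixtures, so it is a shape sketch rather than something runnable in isolation.

```python
import json

# Hedged sketch only: the helpers below would be imported from the indy-node /
# Plenum test helpers in a real test; their names and signatures here are
# approximations of what the handout describes, not a confirmed API.
HYPERLEDGER_LOGO = "https://example.org/hyperledger-indy-logo.png"  # hypothetical URL


def test_nym_image(looper, sdk_pool_handle, sdk_wallet_steward, sdk_wallet_endorser):
    # 1. Prepare (but do not send) a NYM request for the endorser DID.
    nym_request, new_did = looper.loop.run_until_complete(
        prepare_nym_request(sdk_wallet_endorser, None, None, None))

    # 2. The SDK does not know about our new attribute, so edit the JSON directly:
    #    add "image" at the operation level of the transaction.
    req = json.loads(nym_request)
    req["operation"]["image"] = HYPERLEDGER_LOGO
    nym_request = json.dumps(req)

    # 3. Sign with the steward (present in the genesis transactions) and submit,
    #    then wait for replies so the test ledger has reached consensus.
    request = sdk_sign_and_send_prepared_request(
        looper, sdk_wallet_steward, sdk_pool_handle, nym_request)
    sdk_get_and_check_replies(looper, [request])

    # 4. Read the NYM back and assert the image round-tripped.
    result = sdk_get_nym(looper, sdk_pool_handle, sdk_wallet_steward, new_did)
    assert result["data"]["image"] == HYPERLEDGER_LOGO
```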
Something cool with Visual Studio Code: the test plugin will generate run-test buttons. Mine has not updated yet, and that's okay. So now we can run it. I'm in the directory with the tests, and now we can run pytest. That's our first single test. And... oh, that's not ours, it's okay, you guys caught that. Okay, so this should fail. Daniel, say something. Great. So to reiterate what's happening in the background here as we run this test: I already described it a little bit, but this is actually spinning up a test network with four nodes, Alpha, Beta, Gamma, Delta, I believe, and we are using the test framework and the fixtures provided already within indy-node to interact with that test network. So it takes a minute, especially on Adam's aging MacBook, to spin up and run, but that's what's going on in the background here. I think it ran before I updated, when I clicked save, so we've actually run it twice, and it passed. I don't think that should be green. Oh, no, this failed. Okay, good. So, in summary, we have a failing test for adding an image to the NYM transaction. I'm going to jump into, well, that's the most extensive part of the code; writing the rest of it is just understanding what's going on. Daniel, do you have any comments before I jump into updating the operation? So this question was actually raised in the chat a little bit ago, but I thought it was relevant to what we just witnessed. There was the question of whether indy-node has a good regression test suite associated with it to help validate changes. To briefly describe: tests are extremely extensive within indy-node. If we run tests at the root of the indy-node project, I think there are 16,000 tests available for running, so all use cases are pretty well covered and test coverage I think is pretty good. Those tests also get into a bit of integration-test level as well as unit-test level. We can be relatively confident, when we're making changes and adding tests for those changes, that we've not broken other things, by running that whole test suite. Absolutely; we'll come back to this test if there are questions. So, the way I wanted to approach this is adding the image parameter to the operation, and then going and updating the actual NYM handler that uses the operation, inside types. So we navigate back up into indy_common, types. Inside the types file, way down low, around line 470, there's this client operation field. Daniel just showed us this a few minutes ago, so this should look familiar to you guys. It extends Plenum's client operation field. And there's not currently a NYM client operation here; we don't extend it the way we do with schemas, attribs, get attrib, claim def. So we're going to actually borrow some code from Plenum and extend it here, just a little bit more, and in anticipation we'll add a NYM operation. I've changed the layout of my keyboard and loved life ever since. Oh, I'm going to my cheater notes. This client operation field, I believe, is called within a bootstrap method, is that right, Daniel? Yep. So the client operation field is used to validate all client operations of the transactions coming in, and it will select the appropriate transaction type and pull the schema associated with that transaction, so yeah. Awesome. I don't have much more to say on this. Let's add this client NYM operation. If we go look at... so we're now in the, oh no we're not, here we go, we're now in the Plenum dependency package. We can find that the client NYM operation is in here somewhere. This is it, right? Yep. Okay, so we're going to borrow that code.
We're going to put it somewhere around line 72, a nice home for it because it's right next to the get transaction operation. I don't know why that's a nice home; we're just making a home for it. So this is the generic NYM operation fields, and we're going to add our image. I have a plugin that will automatically manage my imports, and this should spit out a whole lot more red than it does, I believe. But all we've done is, in addition to a role, a target NYM, a verkey, an alias, and a transaction type, we are adding an image, so we can put it in here. So the image is from GitHub, maybe even an image of your face, your profile picture. Which, I'll call out as well, we've made a limited length string field. So we're anticipating that the image will either be a URL or, you know, something that's encoded into a string in some form or another, and it's limited to the raw field limit. This is a generic size limitation, and the raw field limit ends up being 5 times 1024 bytes, so about five... technically kibibytes, I think. It's a generic size limit that's placed on a lot of different places within the indy-node transaction fields. Awesome. From Plenum constants, I believe, we add... looking at the cheat sheet, is alias already added? Which ones are we missing? I was in the wrong document, you guys can catch that. There we go, and we redeem ourselves. We find the proper home in our types, right next to the get NYM operation, just right below it. No holds barred. This is awesome. Okay. So now we can fix our imports. From the constants we borrowed alias and verkey. From the fields we also borrowed a verkey field and the dest NYM field. From the config, the alias field limit. So, I believe there's... oh, we need to actually define one, define two, import them. Oh, only one. So from our constants we need NYM, and then IMAGE, which we have not created yet. And that will give us all the imports for our new NYM operation. We also have a new class which defines our extra field, and it's being called inside the client operation fields, extending Plenum to have our feature. I believe I'm about to jump out of this file. Daniel, do you have anything to add? Nope, that's everything, I think. Yes, I think we're on track. Wasn't worried at all. Okay. The image: we're going to add that to our constants, which we can find inside of indy_common, constants. Also, if you're developing and you're just looking around, you can jump in here while Visual Studio Code is thinking. Okay. So inside constants, we have a whole bunch of things defined that are used in multiple places. In the tests, I want to call out, to get around the dependency so we could just run, I did not use the constant. Also, I'm not sure if best practices would have you use a constant there; I'm not sure. Lynn knows best practices, and Daniel is very knowledgeable. Daniel, do you have a thought? Within the tests, it seems to be acceptable to not use those constant values, but that would of course be a pain if you ever did change them, because you'd have to change the tests. So when making changes, that's something to keep in mind for sure, but there's less risk in skipping the constant in the tests. Thank you, I agree with that. Here we go, I have added the constant for image. If we jump back to our types, we should now have... oh, it's thinking, catch up. We don't have to wait for it to catch up. We have now defined our new operation for NYM with an image tag, and we have extended it.
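Here is a self-contained sketch of the shape of the types change just described. The real edit borrows Plenum's client NYM operation, a message validator with a schema of (field name, field validator) pairs, and adds one image entry limited to the raw field limit; the class name, the stand-in validator, and the specific lengths below are illustrative, not the exact Plenum field API.

```python
# Illustrative sketch of the indy_common/types.py change: a NYM operation schema
# that mirrors the borrowed Plenum fields plus one new "image" entry. Simple
# callables stand in for Plenum's field validator classes; lengths are made up.

RAW_FIELD_LIMIT = 5 * 1024   # ~5 KiB, mirroring the generic raw field limit mentioned above
IMAGE = "image"              # the new constant added to indy_common constants


def limited_length_string(max_length):
    # Stand-in for a limited-length string field validator.
    def validate(value):
        if not isinstance(value, str) or len(value) > max_length:
            raise ValueError(f"expected a string of at most {max_length} chars")
    return validate


class ClientNymOperationSketch:
    # Mirrors the borrowed NYM operation schema: type, dest (target NYM),
    # verkey, alias, role ... plus our new image field.
    schema = (
        ("type", limited_length_string(4)),
        ("dest", limited_length_string(256)),
        ("verkey", limited_length_string(256)),
        ("alias", limited_length_string(256)),
        ("role", limited_length_string(8)),
        (IMAGE, limited_length_string(RAW_FIELD_LIMIT)),  # the one new entry
    )

    def validate(self, operation: dict):
        for name, field in self.schema:
            if name in operation:
                field(operation[name])


if __name__ == "__main__":
    ClientNymOperationSketch().validate(
        {"type": "1", "dest": "V4SGRU86Z58d6TV7PBUe6f", IMAGE: "https://example.org/logo.png"})
```

The only substantive line relative to the borrowed schema is the final image entry; everything else is what was already there for NYM.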
It's time to jump into the actual handler. So if we go into indy_node and look at server, inside server you will find request handlers, and there are request handlers for each sub-ledger, and then there's a read request handler. If we go into the domain request handlers, we can find the NYM handler. And in review, as Daniel went over, the NYM handler is extending Plenum's handler. We can actually go look at that. The NYM handler extends a write request handler, which has a mandatory static validation, an additional dynamic validation, and update state. We won't go into that code any more than that short review of what you just learned. So inside the NYM handler, there's nothing we need to change for static validation, and in the additional dynamic validation, we don't need to change that either. Just to briefly call out, you're looking at Plenum still, by the way. Am I? Yep. There we go, thank you. That was bad; that would be the second time. Here we go, on track. Static validation we don't need to update. Dynamic validation we could update if we had restrictions around the image; we're not going to do that. So all we want to do is store the image, so we're able to retrieve it and recreate it, with the only limitation being that raw string field limit. And so inside of the update state, which we went over in the icebreaker, we can see that there are checks for different things in the operation of the transaction. We should now have access to our image inside of the transaction coming in. So we can say if image in transaction: so if the image is actually there, we want to store it, right? Otherwise there's nothing to add. That just added the import from constants of the image constant we just created. Okay. So if the image is there in the transaction data, then we're going to store it. Processing will occur to where the state is set with the key and value, and it will have our image in it. There's nothing else I believe we need to do. Any final comments on this directory before I move on? No, looks good. Okay. I'm going to make sure I have everything saved; appears I do. We can come back to this test. To review: we're preparing a NYM transaction, adding an image to it, sending it off, and then we're retrieving that NYM back from the ledger and checking that the image is there. Any comments while this runs, Daniel? Yeah, I'll call out: there's an effort ongoing in the community right now for the did:indy method, which adds an attribute onto the NYM for DIDDoc content. This hands-on walkthrough is actually pretty much inspired by that effort. And we found it pretty interesting that, while indy-node might seem intimidating, especially on the surface, it's actually a pretty straightforward change to add things to something like the NYM transaction. So yeah, you can add functionality to indy-node and have a pretty significant impact, I think, on the direction of the Indy project. So that's, I guess, my plug for getting involved. indy-node has served us well for a long time, but that doesn't mean it's perfect, so your contributions are more than welcome. Thank you, Daniel. If you look at step six of the handout: celebrate. We updated the operation to have an image tag, or parameter. We then updated the handler to store that image.
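To make the handler edit concrete, here is a hedged, self-contained sketch of the one change inside update state: if the incoming transaction data carries the new image attribute, copy it into the data that will be serialized and written to state. Plain dicts stand in for the transaction payload and the existing NYM data, and the helper name is illustrative, not the real method.

```python
# Illustrative only: the shape of the update_state change. In the real handler
# this logic sits alongside the existing verkey/role handling, and the result is
# then serialized and written with self.state.set(key, value).

IMAGE = "image"  # the constant added to indy_common constants


def update_nym_data(existing_data: dict, txn_data: dict) -> dict:
    new_data = dict(existing_data)          # start from whatever the NYM already holds
    if "verkey" in txn_data:
        new_data["verkey"] = txn_data["verkey"]
    if "role" in txn_data:
        new_data["role"] = txn_data["role"]
    if IMAGE in txn_data:                   # the new lines: store the image if present
        new_data[IMAGE] = txn_data[IMAGE]
    return new_data


if __name__ == "__main__":
    print(update_nym_data({"verkey": "abc"}, {IMAGE: "https://example.org/logo.png"}))
```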
And we successfully tested that it's stored and retrieved. So celebrations, right? Cheers. You can find the complete set of changes linked in the handout as well. If you accidentally fumbled a line change or anything and you'd like to view the final state of things, you can find that in the handout. Yes, absolutely. And I mean, I made a few mistakes in there and I had a cheat sheet. So, as you're joining the community, remember there's the troubleshooting document, and there's a working group you can join and make noise in. Yeah, I believe that's all I had for the hands-on. Am I forgetting something, Daniel? I think that's it. Awesome. With the remaining time, I believe Sam and Daniel once again are on the hook for discussions on the future of Indy. Well, this helps if I unmute. We sure are. We want to talk about the future of Indy here because it can give us some insight into sort of what's happening, what's coming up as new, and what we can look forward to. So we wanted to talk about some concepts as they relate to looking forward. I want to look back a little bit first. Indy pioneered kind of a lot in the identity community, and, among other things, it was the first production deployment of a CL signature based credential. And, you know, we avoid all of the issues around proof of work and extraneous energy, et cetera, because of its structure. It's the only identity-focused ledger that we've got. There are lots of ways of doing identity on other ledgers; if you go look at DID methods, there's lots there. But we've got one where we don't need to be concerned about the various uses of our ledger, because it is focused entirely on identity, which gives us a pretty huge advantage. So I want to talk about a couple of things with the future of Indy: the modularization of the Indy SDK, Indy in a multi-ledger world, the did:indy method, and AnonCreds standardization. This is going to be a little quick and we don't have the full time to go into all of the details here, but we can definitely entertain some questions. And your effort, of course, is invited; if you would like to get involved in any of these things, there are open efforts going on on all of them, which is pretty great. Daniel, is this me or is this you? This is me. Perfect. We talked about this already during our brief discussion of the future of the Indy SDK, so this is more review that this is happening than anything else. The Indy SDK, like I said, has been serving us well for a long time and has been a foundational piece of the Aries ecosystem as well as being crucial to the Indy ecosystem. That being said, we've discovered, you know, that it would be really nice to have modular components that we can pull down and use independent of the other pieces of the Indy SDK. And so that effort was started some time ago now by the government of British Columbia and their team over there, so a huge shout-out to all the effort that they've put into these separate libraries. With Indy VDR, we have ledger communication independent of the original SDK libraries. We have indy-credx, which is an AnonCreds library for constructing those objects, again independent of the original Indy SDK. And Aries Askar is a secure wallet storage solution, this time in Hyperledger Aries, but independent of the original Indy SDK wallet. Thanks, Daniel.
And again, that's just a review, as Daniel talked about it earlier and the efforts going on there. So that's modularization of the Indy SDK. The other thing I want to talk about is the difference between one ledger and a lot of ledgers. We began oriented around a single ledger, sort of as a utility, and there became a lot of reasons why different Indy ledgers were spun up and are in use. An early one, for example, was Finland's, which we lovingly call Findy, and there's the government of Canada, et cetera. As it turns out, there are a whole lot of reasons why countries in particular are interested in running different ledgers, and part of it has to do with various regulation that requires solutions that government is involved in not to depend on foreign interests. And while a widely distributed ledger solves lots of those issues, most regulation hasn't really caught up with the concept yet. And so we ended up with not just one Indy ledger, but a lot of Indy ledgers. This is a changing world that we find ourselves in now, and several of the other items that we're talking about relate specifically to this new world of lots of ledgers instead of just one. So which ledger do you use? The answer, I think, is a little insightful to how you're thinking about ledgers and the role that they play. Every ledger offers services, and you want to use the services from any and all of the applicable ledgers that make sense. For example, Indy does not contain the general transfer of money. That's not a thing an Indy ledger is going to support, or will likely ever support, from a general-purpose perspective, right? But that's what Bitcoin does. So there's nothing wrong with using Bitcoin to transfer Bitcoin and using an Indy ledger for the identity components; you're using a service from each of them. And I think we'll see the evolution of more ledger services. NFTs are an interesting example of a type of service that a ledger can offer. And as useful applications of that technology present use cases, there's nothing wrong with using ledgers for the services that they actually provide. So the answer on which one to use is: you want to use as many as are useful. We're moving into a world where the software that you interact with will be able to reach into whatever ledgers actually make sense. So it's a trick question, because there obviously isn't one. But the concept I wanted to bring up is that the goal of Indy is not to be the one ledger to rule them all. It's to solve its very specific problem for identity in a way that allows it to be joined with the services from others in a really useful and powerful way. So composition here, not really adoption of all of those various technologies. So to aid that effort, one of the things that's happening here is the did:indy method. Yeah. So kind of hearkening back again to where we started with Indy: Indy was around well before the W3C DID specification came into being, and as a result of predating that specification, there's some terminology and there are ways that we handled things back in the day that don't map perfectly onto the specification as it stands today. So the government of British Columbia put out a Code With Us bounty to bring indy-node and the Indy ledger up to date with the DID specification, and, on top of that, also make it possible to reference multiple ledgers, multiple Indy ledgers, out in the world today; the sketch below shows what such a network-qualified identifier looks like.
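As a concrete illustration of that network referencing, here is a small, purely illustrative parse of a did:indy identifier. The DID value below is made up, but the shape, did:indy:&lt;namespace&gt;:&lt;identifier&gt;, where the namespace names a specific Indy network and may itself contain a colon, follows the method as described here.

```python
# Illustrative only: split a did:indy DID into its method, the namespace that
# names which Indy network to resolve against, and the network-local identifier.
# The example DID is made up for demonstration purposes.

def parse_did_indy(did: str):
    prefix = "did:indy:"
    if not did.startswith(prefix):
        raise ValueError("not a did:indy DID")
    namespace, _, identifier = did[len(prefix):].rpartition(":")
    return {"method": "indy", "namespace": namespace, "identifier": identifier}


if __name__ == "__main__":
    # e.g. a DID anchored on a hypothetical "sovrin:staging" network
    print(parse_did_indy("did:indy:sovrin:staging:6cgbu8ZPoWTnR5Rv5JcSMB"))
```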
So where previously there was did:sov, which was used for referencing DIDs on the Sovrin ledger, there are, of course, as we just discussed, multiple variations of Indy networks out there. So the did:indy method provides a mechanism for specifying which network, on top of returning a DID Core spec compliant DID document on retrieval of a DID. Indicio was awarded this bounty and we are super excited to be working with BC Gov on this, and it is currently under active development; we are scheduled for this to be hopefully all merged and buttoned up towards the end of March. So to highlight some cool stuff that Daniel just said: with the did:indy method, you don't have to make your own method if you're running an Indy ledger. The did:indy method can be used with any Indy ledger. Furthermore, it allows referencing certain objects between ledgers. For example, if you had a schema on a different ledger that you wanted to reference, you can do so. There are some really powerful things coming there that will add a lot of power to what's going on. This helps substantially when you have an environment where you might have, for regulatory reasons, a ledger that is different, and it enables wallets to easily manage and use the assets of any of these ledgers without having to be locked to a specific one. It's a really nicely designed way to do that; at Indicio we're excited to be involved here and grateful for BC Gov's support. There's another effort that relates a little bit to this: there's an effort underway to standardize AnonCreds, the CL signature based credentials, as a standard. This is basically an effort to document and standardize the de facto AnonCreds standard that exists today and is actively in use on Indy ledgers. One of the complaints against AnonCreds is that it specifically requires Indy; while we happen to like Indy, we love the concept that AnonCreds could gain further traction. And so one of the changes in the AnonCreds standardization effort will be to remove the explicit Indy ledger requirement. Indy remains a great thing to build AnonCreds on top of, but other ledgers or other immutable data stores can be used in place of Indy. The good news here is that it opens the door for wider use of AnonCreds and eliminates the mostly silly, but persistent, insistence on not using Indy. And so some of the cross-ledger reference work that is being done as part of the did:indy method will play a role here. The work for the AnonCreds standardization effort is being pursued at the IETF and will predominantly involve documentation, although there are some very small changes to make to the formats for referencing credentials and the various objects that they require, such as credential definitions and schemas. So that's underway, and if you're curious about that, reach out; we can point you to where the meetings are happening. This is just barely getting started. We're going to have another pre-meeting this next Monday. Stuff's happening real quick, and so if you're interested, then please reach out on Rocket.Chat and we'll get you in touch with the people to participate there. Part of the question is why, and this is a hot topic that I get asked about frequently; in fact, during this training session, I got asked by another party, not attending but very interested. We were hopeful that the privacy features of JSON-LD BBS+ signatures would reach parity with the things that we have in AnonCreds. That has not happened.
That's not been abandoned, but it certainly isn't on the fast track that we were hoping for. CL signature based AnonCreds are already widely deployed, which means that seeking that as a de facto standard is a useful step towards whatever ends up getting developed in the future. It could possibly also set the stage for a future version of AnonCreds where we don't have to give up the privacy features that we have, but can use newly developed crypto and other sorts of things to continue to innovate on top of that standard. A future version of AnonCreds is not a soon thing; that's a much longer development effort. But seeking the de facto standardization of what we've already got in AnonCreds will be something coming much faster. So that's a lot of stuff. I just realized the chat window wasn't up, and so I have no idea... oh, I do have it up. Yeah. So if you have any questions, also, we'll be hanging out on Rocket.Chat and we can answer questions or point you to the appropriate resources. And that's a little bit of a look into what's coming for Indy: lots of development and changes that will make it both more powerful and more flexible. And with that, thank you. We are nine minutes early, and here's where we are. If you have any questions or if there are things that we can help you with, please reach out on the community channels. We're grateful that you've attended here with us today. We love Indy and we hope that you do too, and that you take some time to get involved, ask questions, and contribute back to the community. All of those things are open and welcome, and we hope you take advantage of them. So thank you, and I hope you have a fantastic day. All right. Thank you very much, everyone. Thank you to all the presenters and all those who were able to make it and stay here until the end. As mentioned, this recording will be posted on the Hyperledger workshop page. And if you have any questions, feel free to reach out to the Indicio team. We're happy to help answer any questions you have or direct you to resources. Thanks again, and I hope you enjoyed it. Great work, Indicio team, and thank you to all the attendees for the amazing questions. This has been great. Sean, I know there were multiple requests about the slides and about the previous Aries workshop as well. Do you have any update on that in particular? We are getting an email out this week to everybody who was at that workshop, and that's going to include all the links and a link to the video. And we are going to work to send out a similar email, probably on Monday, for this workshop, which will include a link to the slides as well as a link to the video. We want to get it out as quickly as possible, so we'll get that to folks. Keep an eye on Rocket.Chat; there will be an email going out to all the registered participants as well. Hey, thank you for the support of this whole thing and for helping out with those materials. No worries. James, I'm going to turn off the stream. Okay. Yep. Sounds great. Thanks.