Great. All right. Welcome everybody to the Hyperledger Foundation workshop: Hyperledger Aries Framework JavaScript 0.4.0 release, setting up an agent and issuing credentials. Thank you all for joining us. This session is being recorded and is being live streamed on the Hyperledger YouTube channel. I'm going to post some links in the chat in a second once we kick this off, but I want to thank everybody for joining us. As with all Linux Foundation meetings, this is held under the Linux Foundation antitrust policy. If you have any questions about these matters, please contact your company counsel, or if you're a member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm Gesmer Updegrove LLP. I would really like to thank the three organizers of this workshop: Ariel, Karim and Berend. All of Hyperledger is powered by contributors, maintainers and users of this software. If these folks didn't exist, we wouldn't exist, and we cannot thank them enough for sharing their knowledge and, quite frankly, all the hard work they've put into Aries Framework JavaScript in general, but also to get us to the 0.4.0 release. And with that, I'm going to stop sharing and turn it over to Karim, and I'm going to start putting some links into chat. Karim, it's all yours. Perfect. Thank you, Sean. Okay, that's working. Cool. Well, welcome everyone, and thank you for joining. As Sean said, my name is Karim, and I will be giving this presentation together with Ariel Gentile from 2060.io, also a maintainer of Hyperledger Aries, and Berend, who works at Animo as well. Let me see if this works. Nope, that didn't work. So, here is what we're going to do: I'll start off with a small introduction, and as this is about the 0.4.0 release specifically, we will have an overview of the release, after which Berend will show you a little demo that he has prepared.
After that, we will show you a few code examples: how does it actually look to work with the 0.4.0 release? We'll have a short look at the documentation, a look at what is on the roadmap and what's next, and at the end we will have a question and answer session. We did a similar session last year for the 0.3.0 release; there were a lot of questions and we had to cut it short, so we expect to spend quite some time on the question and answer session so we can answer all the questions you might have. So, first of all, the introduction. Since this is a global Hyperledger meetup, I want to quickly touch on what Hyperledger Aries Framework JavaScript is and what it isn't. Sorry, I see that I don't have my camera on. So yeah, a quick introduction to what Hyperledger Aries Framework JavaScript is. I will not go into the details of self-sovereign identity and all the concepts around that; if you are interested in that, then I urge you to look up the previous session we did on the 0.3.0 release, because that goes more in depth into what self-sovereign identity is in general, while this one is really specific to the 0.4.0 release. But summarized: Aries Framework JavaScript is a framework for implementing self-sovereign, or decentralized, identity solutions. I use both terms because I think decentralized identity is the more popular term nowadays. It is based on the Hyperledger Aries standards, which are a collection of RFCs that are implemented in multiple languages. You also have Aries Cloud Agent Python, for instance, but this obviously is the JavaScript version, which will become a little bit more relevant a few slides ahead; Hyperledger Aries was born in the Hyperledger Indy project originally. And because it's a JavaScript framework, what I think is cool about it is that it's a multi-platform solution.
So, that means basically you can run it on the server side through Node.js, but also on mobile devices if you use React Native. This is the typical, and again I'm not going to go into the details of it, but this is a typical SSI diagram that I'm sure you have seen if you are somewhat familiar with SSI. In the SSI, the self-sovereign identity, ecosystem you usually have three roles: you have the holder, someone who receives a verifiable credential; you have an issuer, someone who issues them; and a verifier, who is interested in verifying it. Additionally, you see in the bottom middle the verifiable data registry, and that is often a blockchain, but it doesn't have to be a blockchain per se. As I mentioned on the previous slide, it is a multi-platform framework, and the cool thing about that is that it can help you create solutions for all these roles. A holder in a lot of cases has a wallet, a mobile application, so for that you would use React Native, but an issuer and a verifier often run some kind of server architecture, and then again you can use the Node.js runtime for those kinds of solutions. It really covers all of these things, and with the features we implement, we really do our best to make sure that they run on all platforms. If you use Aries Framework JavaScript, which, by the way, I will abbreviate to AFJ a lot in this presentation, for your holder, your issuer, and your verifier, then they should be interoperable automatically because it's the same framework. Next. Yeah, so let's go over the release overview, and the most important thing is modularization.
That is, we have put a lot of work into making the framework more modular, and that work already started before: in the 0.2.0 release everything was one big package, and in the 0.3.0 release we already started implementing new features in different packages, different modules. As you can see, we have extended that work quite drastically in the 0.4.0 release, so I will quickly go over a few of these components, not all of them, because they are not all that relevant. On the left, you see in blue the core, and the core is just, well, the core framework; it is everything you need to run the framework. Below that, you see the React Native package and the Node package, the two in gray; those are basically the dependencies you would need for each platform. So if you're creating a mobile application, you would use the react-native package, and if you are running a server-side application, then you would use the Node one. Next. So, this is just a copy of the right side of the previous slide. What have we added and what has happened exactly? I have put a little legend on the bottom, but in essence we have some new functionality, which you see in the green tiles on the right; we have packages or modules that were already present, the purple ones; we have a package, the Indy SDK package in yellow, that has been moved out of the core to its own package, its own module; and we have the shared components, which you see in orange, which I will explain in the next slide. So, for those who don't know, as I previously mentioned, Aries was born in the Indy ecosystem, and Indy is, well, right now I would say it's a ledger, an identity-specific ledger under the Hyperledger umbrella. But the maintainers of Indy realized at some point: hey, just the blockchain is not enough, right?
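As a rough sketch of how the core and platform packages just mentioned fit together (0.4.x package names; the label and wallet key below are placeholders, not values from the talk):

```typescript
// Minimal Node.js agent bootstrap against the 0.4.x packages. On React Native
// you would import agentDependencies from '@aries-framework/react-native' instead.
import { Agent } from '@aries-framework/core'
import { agentDependencies } from '@aries-framework/node'

const agent = new Agent({
  config: {
    label: 'workshop-agent', // placeholder label
    walletConfig: { id: 'workshop-wallet', key: 'placeholder-wallet-key' },
  },
  dependencies: agentDependencies, // platform-specific bits: file system, fetch, WebSocket, ...
  modules: {}, // modules such as Askar, AnonCreds or Indy VDR get registered here
})
```

On mobile, only the dependencies import changes; the rest of the setup is identical, which is part of what keeps holder, issuer and verifier code interoperable.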
We also need software, standards, protocols for agents, so the issuers, the holders, the verifiers; they need to exchange information, they need to exchange credentials, and we need protocols and specifications for that. So, along with Indy, and I should say Indy Node, which is a separate software component, they created the Indy SDK, and the Indy SDK, as the name suggests, is the SDK that you can use to communicate with Indy networks. An example that people might know is Sovrin; that's an Indy network that is used in production. This Indy SDK, which you see all the way at the bottom, takes care of the communication with the Indy network, but it also takes care of storage: the credentials and other related data objects and models are stored encrypted, and that is also something the Indy SDK did for you. So it basically also provided secure storage. It provided the cryptographic functionality that we needed; under the hood it would call Ursa, which is another Hyperledger project. And it also took care of a lot of AnonCreds-related functionality, and AnonCreds is the new name for what used to be the Indy credential format, which is a specific format for representing credentials. A lot has happened in the meantime.
AnonCreds, the AnonCreds credential format, so what used to be the Indy credential format, got separated from the whole Indy stack, has been renamed to AnonCreds, and is now really a standalone credential format standard. In the meantime, and that was already the case in the 0.3.0 release, I believe, we've added support for other credential formats, like the W3C credential format; we added support for JSON-LD back then. So there is a lot of movement here, and because the Indy SDK was this one giant Rust dependency that does everything, the result was that if you used Aries Framework JavaScript to just issue or verify or hold, say, specifically W3C JSON-LD credentials, well, you needed the Indy SDK to store stuff, so you would automatically also include the logic related to communicating with an Indy network, while maybe you do not communicate with an Indy network at all. You would also need to include the logic related to the AnonCreds credential format, while you might not use that, because you are just focusing on the W3C JSON-LD format. So you're basically pulling in a lot of code and making your package a lot bigger than necessary, and that was not something we wanted to continue with. Next slide, yeah. And this is not an effort that is totally, or even mostly, done by us, but the Aries community chose to introduce three new components: Aries Askar, Indy VDR and AnonCreds, and these three components together can be used basically as a replacement for the Indy SDK. They each take care of a subset of what the Indy SDK used to do, and there are some other differences that I'm not going to go into, because that is beyond the scope of this presentation. So we have a new component, Aries Askar, which takes care of storage and cryptography; then we
have Indy VDR, where VDR stands for verifiable data registry, so it's really a component that takes care of ledger communication; and we have the AnonCreds package, which is, again, the standard itself, the credential format itself, so it's a package that takes care of everything surrounding that. And this slide, Ariel is going to present. Ariel, are you there? Yeah, yeah, no problem. So, in this slide we are doing an overview of what's in the framework's core, and how we are going to build an equivalent agent with 0.4.0, considering all these new modules that Karim mentioned. If you look at the 0.3.0 core, the functionality related to Indy and the ledger has been extracted out and went to the AnonCreds and the Indy SDK modules. So if you want to build an equivalent agent, I mean with the exact same features, with 0.4.0, you will need to use in your project the core, and also the AnonCreds module, which adds the main functionality, the credential format and the interfaces for AnonCreds, and the Indy SDK module, which provides the implementation that Karim mentioned before for the storage, the ledger connection, and the cryptography. But we also have the option of creating an equivalent agent using all these new shared components; in that case we will need to use five different packages: the core, AnonCreds, AnonCreds RS, Askar, and Indy VDR. What would be the difference between both? Actually, I would say it is better to use the new one, I mean the one on the right, because the new AnonCreds libraries are a bit faster and more flexible, you can also add support for other ledgers if you want, and especially because it fully supports the new AnonCreds specification, something that we will talk about a little later. By the way, if you look at the core you will see that there are still a lot of sub-modules, so we can see
that we have DIDComm protocols like discover features, problem report, out-of-band, and routing; they are mostly related to DIDComm messaging, and if we continue with this philosophy of a modularized AFJ, we can in the near future extract them out as well. So if you want to build an agent that will not need to support DIDComm at all, you can of course do that. So maybe we can go to the next slide. Yeah, this is a consequence of removing the Indy SDK from the core: previously we had a default wallet, which was the one that came from Indy, but now, in order to set up an agent, we need to provide a wallet class, an implementation of the wallet interface, which is done by the Askar package. But there is room to provide a different one: you can implement your own wallet as long as it complies with the wallet interface, which basically needs to implement some methods for opening a wallet, closing, DIDComm message packing if needed, key storage and so on. And that gives us the possibility, in the near future we hope, to create a wallet implementation that could allow us to easily run AFJ in a browser, which is something that I think most people in the community are expecting to have someday. We will talk about that later, by the end of this session, so we can continue with the slides. Yeah, there are a few more differences as a consequence of removing the dependency on AnonCreds and the Indy SDK: previously we had some properties in the agent configuration, like this public DID seed and public DID, that were used mainly for Indy to create the objects on the ledger and that stuff, and now we are not using that anymore. But we provide a way of importing DIDs that were created somewhere else, and we also support, if we want, more than one public DID. This can be mostly interesting for those who are using AFJ on the server side or as public entities. We can continue, next slide. Yeah, and this is another feature that we
added. This is not something related to the architecture, but it's something that we are using in our project, so I wanted to highlight it: we can not only connect to other agents by using connection invitations or out-of-band invitations, which is the classic invitation code that we render in a QR code, but we can also connect directly to a public DID. This is what is called the implicit invitation: you just provide the DID, you set up some things like the alias that you want to use and the handshake protocol, and the agent will do all the work needed to resolve that DID and create a connection to it. And the same works from the other side: if you have a public DID, you will receive the invitation, the connection request, and you can decide if you want to accept it and move forward. And that's it for this slide. All right, so the next section is about "beyond Aries", as it's called. As you might have seen in the beginning, somewhere in the components, we have started work on implementing protocols that are outside of the typical Aries ecosystem. In other words, as I mentioned earlier, Aries comprises a set of RFCs, but the space is evolving. Recently this year, both the European Commission and the Department of Homeland Security have announced their preferences, or have hinted at the standards they will start to adopt. Nothing is set in stone yet, but they are looking at a certain set of standards, and they are not all Aries related.
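The implicit invitation flow described a moment ago can be sketched as follows. The method name follows AFJ 0.4.x's out-of-band API as I understand it, the agent parameter is typed loosely so the snippet stays self-contained, and the alias is a made-up example value:

```typescript
// Connect straight to a public DID instead of scanning an out-of-band QR code.
// In a real project `agent` is an initialized AFJ Agent instance.
async function connectToPublicDid(agent: any, publicDid: string) {
  // The agent resolves the DID, finds its DIDComm service endpoint and
  // sends a connection request to it.
  const { connectionRecord } = await agent.oob.receiveImplicitInvitation({
    did: publicDid,
    alias: 'Public Issuer', // hypothetical alias for the resulting connection
  })
  return connectionRecord
}
```

The other side sees an ordinary connection request arrive against its public DID and can accept or reject it as usual.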
On the right-hand side you see an example of this, the ARF. The ARF is short for the Architecture and Reference Framework, which was published by the European Commission, and I hope it's big enough to read, but this diagram shows a few choices they have made. For instance, the credential formats they will use are mdoc, which is an ISO standard, for proximity flows, and W3C Verifiable Credentials, and then the SD-JWT variant of that, for remote flows. That is the credential format itself, but then we also have the exchange protocol. In the Aries ecosystem we use DIDComm, mainly as a mechanism to issue credentials, to present proofs and to do a variety of other things as well, but the European Commission has chosen to focus on the somewhat newer OpenID for Verifiable Credentials stack. So on the top left you can see that for issuance they have the OpenID for Verifiable Credential Issuance protocol, and for the verification side of things it is OpenID for Verifiable Presentations and SIOP v2. All these details don't really matter in the scope of this session, but I just want to point out that there is a lot of movement in the space and there are a lot of other protocols and other ecosystems that are becoming more prevalent. So we really feel that there is a lot of value in adopting support for other standards outside the Aries ecosystem. For one, it enables a broader range of projects it can be used in. The second point is worded a little awkwardly, forgive me for that, but what I intended to say there is that it ensures relevance outside of, let's say, the geographic areas where Aries is the choice.
I know that Canada, for instance, really goes for AnonCreds, but in the EU other choices are being made, so extending the support to other ecosystems keeps it relevant, for instance, in Europe. All this modularization and adopting other standards really forces us to rethink the architecture, and a result of that is that it also ensures more longevity and makes the framework more extensible. We've had to define a lot of interfaces, and these architectural changes that we're working on right now, it's ongoing work, ensure that it will be easier to add other standards later on as well, if they become more relevant. An interesting possibility, and one that is already feasible, and how good an idea that is I don't want to make a judgment about, is that it could facilitate interoperability between different ecosystems and stacks. An example, which you could do now, though I have not done it yet: in theory you could get a credential issued over OpenID for Verifiable Credential Issuance and then present a proof of that credential over DIDComm. So that's an interesting bridging capability that it can potentially also bring. In other words, we envision, although it is called Aries Framework JavaScript, and it feels a bit weird to us as well, that Aries Framework JavaScript is most valuable as a sort of all-purpose decentralized identity toolkit. There are a lot of different islands in this SSI space, but having a toolkit or a framework that can act as a bridge between all of those is really valuable, I think. The Swiss Army knife, as we called it in a blog post.
So yeah, we are going to talk a little bit more, or in more detail, about ledger-agnostic AnonCreds, which is not something religious, but about how the Indy credential format has evolved, especially last year, when it became a standalone project. The community saw that there was a lot of value in the AnonCreds specification and format because of the unique properties this kind of format has, but the issue for implementers was that it was tightly tied to the ledger, which was not very appropriate for every case. So the community started a standalone project where the idea was to standardize the specification, because at that moment it was just part of the Indy SDK code and almost nothing was formally specified. This working group has written the AnonCreds specification v1.0, which is compatible with what has been done in Indy, but with a few tweaks to make it fully compatible with any kind of verifiable data registry besides Indy. This working group has also developed an implementation in Rust, based on the shared components that Karim mentioned in the beginning of this presentation, called anoncreds-rs, which has wrappers for Python and JavaScript, and of course we have implemented our package based on these wrappers. Actually, we developed the wrappers, but that's another story. I can proudly say that AFJ fully supports this AnonCreds specification; sorry to the people from the ACA-Py community here, but as far as I know AFJ is the only framework that fully supports these new libraries. I put a star there on the "fully" because we don't have official support yet for issuing revocable credentials. We can verify revocable credentials, but we cannot issue them yet; there is an ongoing PR, so it will be available soon. And by the way, we support both the new AnonCreds and also the legacy Indy credentials. We can go on.
So, I will talk a little bit about how these AnonCreds modules are architected. The idea is that we have a generic module that defines the basic API for AnonCreds, which allows us to create objects like the credential definitions and schemas, and to get them from any VDR, all those kinds of objects. It also defines how the objects are formatted, and the interfaces that implementations have to implement to create the bindings with the actual crypto libraries that do the cryptographic work, and to connect to the appropriate ledgers to get and create the objects. So for an agent to work, there should be a module that implements these services for holder, issuer and verifier, and at least one module that adds what we call a registry, which is actually an interface to a given VDR. The idea is that we can register one or more registries in order to support one or more ledgers. It's something analogous to what we have for DIDs: we can support different DID methods, so we can register different resolvers and registrars. In this case, we can support different kinds of VDRs, so we may need to add some more registries. Here we are mentioning the Indy AnonCreds registry, which is implemented by the Indy SDK package and also by the Indy VDR package, and the cheqd AnonCreds registry, which is implemented by another module, the cheqd SDK, which we will talk a little bit more about later. We can continue, move forward. Somebody is asking me if I speak Spanish? Yes, I speak Spanish, as you can tell from my accent. So, basically, in our main repo we have support for the classic VDR, let's say, which is Indy. We support the Indy DID method for AnonCreds, which works with both legacy Indy DIDs and also with the new did:indy method.
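To make the registry idea concrete, here is a deliberately simplified, self-contained sketch of what an AnonCreds registry looks like. The method and property names mirror AFJ's AnonCredsRegistry interface as described in the talk, but the types are reduced (the real interface also covers credential definitions and revocation data, and its methods receive an agent context), and the in-memory registry below is purely hypothetical:

```typescript
// Reduced sketch of the AnonCreds registry abstraction.
interface AnonCredsSchema {
  issuerId: string
  name: string
  version: string
  attrNames: string[]
}

interface GetSchemaReturn {
  schema?: AnonCredsSchema
  schemaId: string
  resolutionMetadata: Record<string, unknown>
}

interface AnonCredsRegistry {
  methodName: string
  supportedIdentifier: RegExp // which identifiers this registry handles
  getSchema(schemaId: string): Promise<GetSchemaReturn>
}

// Hypothetical in-memory registry standing in for a real VDR binding
// (an Indy ledger, cheqd, did:web, ...).
class InMemoryAnonCredsRegistry implements AnonCredsRegistry {
  public methodName = 'inmemory'
  public supportedIdentifier = /^did:inmemory:/
  private schemas = new Map<string, AnonCredsSchema>()

  // Store a schema so it can later be resolved by id.
  public register(schemaId: string, schema: AnonCredsSchema): void {
    this.schemas.set(schemaId, schema)
  }

  public async getSchema(schemaId: string): Promise<GetSchemaReturn> {
    const schema = this.schemas.get(schemaId)
    return schema
      ? { schema, schemaId, resolutionMetadata: {} }
      : { schemaId, resolutionMetadata: { error: 'notFound' } }
  }
}
```

The agent picks the right registry by matching an object's identifier against each registry's `supportedIdentifier`, exactly as DID resolvers are picked by DID method.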
And cheqd, as I mentioned, was contributed by the cheqd team, which was a major contributor in all this work on making AnonCreds ledger-agnostic. So, yeah, we can maybe continue a little bit more. The idea is that if you want to add support for some other VDR, it's quite simple: you just need to implement the AnonCreds registry interface with the bindings for the VDR you are using, and if needed, because each AnonCreds registry is tied to a particular DID method, you will need to implement the DID resolver and DID registrar and add them to the DIDs module. There are a few methods already supported, or under development by the community, such as the Cardano one made by RootsID and the did:web support that we are using, which allows you to store and manage AnonCreds objects without the need of using any blockchain at all: you can put them directly on any web server and just use the AnonCreds library. For instance, in our case, in our application, we are not using a VDR; we are just using anoncreds-rs, the core, and these bindings for did:web. We can move forward. Now it's time for the demo. Berend? Yes, thank you, Ariel. So, I think everything they just said sounds very cool, but we have to see it in action, of course. First, I will do a short demo using a wallet that we created at Animo. It was built for a Dutch project which tries to create a minimal set of requirements, like a subset of the ARF that was mentioned. The ARF still leaves some holes in its first document, and we decided to fill in those gaps, make some decisions, and then we created a wallet for it together with other companies who also contributed to the project. I will give a quick demo of this wallet right now. Let's see. Yeah, it should be here. So, here we have a very simple login page for the Dutch Blockchain Coalition, and we can log in with our wallet, which is on the left side.
I think we can give some links to the wallet so you can play around with it yourself. But basically, this leverages the OpenID for VC specifications: we use OpenID for VCI for credential issuance, and SIOP and OpenID for VP to present these credentials. And this is just a very nice, simple login flow. Again, very much a demo environment, so no additional security is provided. Here we can request a credential. This is, apologies, not mobile friendly. I will fill in my information here, email, and then I'll click share. And then we can see the two wallets that are available, which are Paradym from us and the Sphereon wallet, which also does the OpenID stack, with a bit more functionality as well. Here we see a QR code; hopefully I'm the first one to scan this. And here we can see the credential. We also have a little display here, which all comes from the W3C JSON-LD specifications. And this is the content of the credential, which we can accept. And now I have three attendance credentials, which might be a bit much, but it's a demo. Then we can go back and use this QR code to log in, which basically does a proof request of the credential, which we can see here: we want the first name, the email, the ID, the last name, and the event. If I click accept, it will share it, and I am logged in right now, which can be seen here. So, yeah, I can see here I am logged out. Again, very, very much a demo environment, fun to play around with. But this specific demo shows very well that the OpenID for VC group of specifications is very, very simple. It builds on top of a lot of standards that already exist, and it removes quite some overhead, letting you issue credentials more quickly. And I'll show you that right now. So, if I go to the slides: from a holder perspective, this is basically all the code that you need, with a massive asterisk, because of course there is still setup with state management and everything.
We're not going to go into that. So, we set up an agent and we add a new module, the OpenID for VCI client module. This is specifically for the wallet side, which in OpenID for VCI terms is the client. We also need a wallet here, but that's all left out so we can keep the example nice and small and not throw a lot of unknown modules at you. Right below, we initialize the agent quickly, and then we call the method requestCredentialUsingPreAuthorizedCode. With the OpenID for VCI specifications, you have a pre-authorized code flow and an authorized code flow; for this demo we rely on the pre-authorized code flow. Here, we just pass in the data which we scanned. As the issuer, we're given a couple of additional fields, also the proof of possession verification method resolver; we love our long names. It's left out to keep it a bit shorter, but here you would basically point to a verification method in your DID document, which will then be used to sign using that verification method. But that's all nice, it's a nice demo, it looks all cool; I think a lot of people here are actual developers, so it would be good to also look at some real code. In this demo, which is available at this link, we will actually use DIDComm and AnonCreds. We will establish a connection between a holder and an issuer using the Indy SDK. Then we create, or rather register, a DID for the issuer. Then we register a schema and credential definition. Afterwards, we offer a credential from the issuer using the new AnonCreds module. Once that is received by the holder, we actually migrate the holder to the new shared components. The underlying database structure for the Indy SDK is quite a bit different than for Askar, so we wrote a custom migration script, which I think we will touch upon more later. But the gist of it is that you can reuse your Indy SDK agents, credentials, and everything that is stored in there with the new shared components.
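The holder-side pre-authorized code flow described above roughly looks like this. Treat the option names as assumptions: the module and method names follow the talk, but the exact parameter shapes of the 0.4.x OpenID4VC client may differ, and the verification method resolver below is a stub:

```typescript
// Request a credential from an OpenID4VCI issuer using the pre-authorized
// code flow. `agent` is typed loosely so the snippet stays self-contained;
// in a real project it is an AFJ Agent with the OpenID4VC client module.
async function receiveCredentialFromOffer(agent: any, credentialOfferUri: string) {
  return agent.modules.openId4VcClient.requestCredentialUsingPreAuthorizedCode({
    issuerUri: credentialOfferUri, // the data scanned from the issuer's QR code
    // Chooses which verification method in the holder's DID document signs
    // the proof of possession; a real implementation selects a wallet key.
    proofOfPossessionVerificationMethodResolver: async () => 'did:example:holder#key-1',
  })
}
```

The returned records are stored W3C credential records that the wallet can later present, whether over OpenID for VP or, as mentioned earlier, potentially over DIDComm.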
Then afterwards, as the last step, we present a credential to a verifier with the newly migrated agent, to show that everything still works as it's supposed to. I will just switch to the code, which is open right here. I hope it's big enough; I can make it bigger if it's not big enough for some people. So, we'll just quickly go through the code and see what it's doing. Again, the repo is available afterwards, so if you have any questions about how some part of it is done, you can look at it there or ask in the Q&A after; we have some time for questions. So, first, very simple: we initialize an agent, which is the holder using the Indy SDK. We can quickly look here to see all those modules that we talked about. We have to go a bit up, sorry. This is for the Indy SDK: here we registered the Indy SDK as our wallet, we registered the AnonCreds module, and we use the Indy VDR AnonCreds registry to resolve the AnonCreds objects. Here we have the AnonCreds RS module, which is a module around the AnonCreds JavaScript wrapper; for Node.js it uses pure FFI bindings, and for React Native it uses native modules. For DIDs, we only need the Indy DID resolver as the holder. And for the Indy VDR, we supply a network, which is the BCovrin test network, which is just a simple genesis transaction, and some information like whether it's a production network, what the Indy namespace is, and whether we should connect to it on startup. For connections, we supply the module again; it is provided by default, but if you want to modify the configuration, you have to supply the module again with the new configuration. Here we set auto-accept connections to true, which basically means that if we want to establish a connection, we don't have to go through the entire process manually in multiple steps. Most of the time users just want a single accept or decline, and it helps a lot with that.
Below we have the credentials module, with the same kind of configuration as for connections: we can auto-accept credentials. This uses Always, which is strongly not recommended in a production environment, but works very well for demos. It is the same idea as auto-accept connections: basically, every credential that comes into the agent, we just accept. It doesn't matter what the content is, whether it's true or not, we just accept it. Works very well for demos, definitely not recommended for production environments. And below that, we register some credential protocols, which for now is only the V1 protocol, which uses the AnonCreds credential format service. And right here, you would specify something like a JSON-LD credential format service or a legacy Indy credential format service to support all of them, but for this demo we try to keep the modules as minimal as possible. So that's the holder, and we initialize it. With initializing, you initialize the wallet, you set up the transports; I think people that are familiar with AFJ know exactly what it does, it's just setting everything up for you. Now, as some of you who migrated to 0.4.0 already might have noticed, you have to create a link secret now. The Indy SDK did that under the hood; now we have to do it manually as a user. I think there is a pull request open to also do this automatically for you, so this step will probably become just a configuration option. But it's very simple: we just call createLinkSecret on the AnonCreds module and we're done. Below that, we initialize the issuer. I'll just go over the issuer modules; the rest I think we can skip because most of it is the same stuff. Here we can see we actually register Askar, AnonCreds, AnonCreds RS, again Indy VDR, connections and credentials. Most notable here is Askar instead of the Indy SDK: again, the new crypto and storage implementation, which is required if we don't use the Indy SDK.
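The module configuration walked through above looks roughly like this. This is a configuration sketch, not the demo's exact code, and, as stressed in the talk, `Always` auto-accept is for demos only:

```typescript
// Sketch of the connections and credentials module configuration (0.4.x-style
// imports; exact constructor options may differ in detail).
import { ConnectionsModule, CredentialsModule, AutoAcceptCredential } from '@aries-framework/core'
import { V1CredentialProtocol, LegacyIndyCredentialFormatService } from '@aries-framework/anoncreds'

const modules = {
  connections: new ConnectionsModule({
    autoAcceptConnections: true, // skip the manual multi-step connection dance
  }),
  credentials: new CredentialsModule({
    // Accept every incoming credential regardless of content: demos only!
    autoAcceptCredentials: AutoAcceptCredential.Always,
    credentialProtocols: [
      // V1 issue-credential protocol; more format services could be added here
      new V1CredentialProtocol({ indyCredentialFormat: new LegacyIndyCredentialFormatService() }),
    ],
  }),
}
```

After `agent.initialize()`, 0.4.0 also needs the link secret created manually, e.g. `await agent.modules.anoncreds.createLinkSecret()`, at least until that becomes a configuration option.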
In our DIDs module we also have a registrar, because we are an issuer and we have to register our DID. We could also import it, as Ariel mentioned, in which case you don't need a registrar; but for this demo we don't import it, so we register it ourselves. The network is again the BCovrin test network, and the rest is all the same. So below we actually get to some functionality, which is creating and registering a did:indy. The code for this is fairly simple. This is a test network, so feel free to copy this seed, it's all fine. Basically, BCovrin has a web UI where you can create a DID, and right here we copy the seed that we used there. This is the DID that came out, so we also put it here, fully qualified. That's also something we changed from 0.3.0 to 0.4.0: we moved to qualified identifiers for Indy. And here we have the import function, which basically sets all of this up. It might look a bit weird that we use the seed as the private key. I had a discussion a while ago about why this is the case. I forgot the details, but I think it was required because this is just how the Indy SDK did it, and for Askar we need to keep backwards compatibility, so there too the seed is effectively the private key. And then we can register a schema. Registering a schema is nothing new in 0.4.0; it's still the same as we've had since 0.1.0. We just register a schema with the same attributes as always: a name, date of birth, email, and an occupation. We did change the interface a little bit, so it now looks very similar to the DIDs interface, which returns a state. If the state is failed, then there was some error; if it's not, we can just return the schema ID, which we can then use for the credential definition. Let's go there. Credential definition — again, everyone is probably already familiar with this by now.
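Sketching the DID import and schema registration, assuming an initialized `issuer` agent configured with Askar and a did:indy registrar as described above. The seed, DID and schema name here are placeholders for whatever you created via the BCovrin web UI (test network only — never reuse a published seed elsewhere):

```typescript
import { KeyType, TypedArrayEncoder } from '@aries-framework/core'

// Placeholder endorser seed/DID created through the BCovrin web UI
const seed = '<32-character seed from the BCovrin UI>'
const unqualifiedDid = '<DID returned by the UI>'
// 0.4.0 moved to fully qualified did:indy identifiers
const did = `did:indy:bcovrin:test:${unqualifiedDid}`

// The seed doubles as the private key, for backwards compatibility with
// how the Indy SDK derived keys
await issuer.dids.import({
  did,
  overwrite: true,
  privateKeys: [
    { keyType: KeyType.Ed25519, privateKey: TypedArrayEncoder.fromString(seed) },
  ],
})

// Registering a schema now returns a state, like the DIDs interface
const { schemaState } = await issuer.modules.anoncreds.registerSchema({
  schema: {
    issuerId: did,
    name: 'workshop-schema',
    version: '1.0',
    attrNames: ['name', 'date of birth', 'email', 'occupation'],
  },
  options: {},
})
if (schemaState.state === 'failed') throw new Error('schema registration failed')
const schemaId = schemaState.schemaId // used next for the credential definition
```

Checking `schemaState.state` before using the ID is the pattern the new interface encourages.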
Here we simply register the credential definition with the schema. Same interface again, nice and consistent: once you know one interface, you know them all. And here we just return it. Right here, we actually establish a connection between the issuer and the Indy SDK holder. I'm not going to go into that, because nothing really changed there; we just create a connection. If you want to see the code, it's in the repository. But right here we have something new, which is offering an AnonCreds credential. So right here is the API for it: on the issuer we have the credentials module, and we call offer credential. We use v2, we send it over the associated connection ID, these are the attributes that we want to issue, and also the credential definition ID, which is required. Then a simple utility method to basically wait until the holder has the credential in its wallet, and then we shut down the Indy SDK wallets. After that, we migrate. This is what I meant before: we're using an Indy SDK agent here, and every wallet that is currently deployed with AFJ uses the Indy SDK. With this very, very short script, you can migrate your Indy SDK wallet to the new AnonCreds structure. You just have to know the database path; figuring out which path that is, is all covered in the documentation, so it should be a relatively straightforward migration. So if we go back, here we can initialize the shared components holder, and we also initialize a verifier. Then we create another connection between the verifier and the holder. Here we request an AnonCreds proof. So again the connection ID, and here we basically create an AnonCreds-specific request. If in your configuration you added something like a JSON-LD format service, you could provide a request for a JSON-LD credential here instead. Very simple.
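The credential offer call can be sketched like this; `connectionId` and `credentialDefinitionId` are assumed to come from the earlier connection and registration steps, and the attribute values are made up for illustration:

```typescript
// Offer an AnonCreds credential over the existing connection, using the
// v2 issue-credential protocol registered on the agent
await issuer.credentials.offerCredential({
  connectionId,
  protocolVersion: 'v2',
  credentialFormats: {
    anoncreds: {
      credentialDefinitionId,
      attributes: [
        { name: 'name', value: 'Jane Doe' },
        { name: 'date of birth', value: '1990-01-01' },
        { name: 'email', value: 'jane@example.com' },
        { name: 'occupation', value: 'Developer' },
      ],
    },
  },
})
```

With auto-accept enabled on the holder, this completes without any manual steps in between.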
And then here we just wait until the proof is presented and the verifier agrees with it. So let's run the code. I can just run yarn start, and it will go through the code. If you try to run this after the workshop: every single step requires you to press Enter, so we can go through it step by step. So I'll just press Enter. It's initializing the holder, creating the link secret, initializing the issuer. Now it's going to register the objects that we talked about. So here we register the did:indy, and then we register a schema with the new fully qualified identifier. The credential definition always takes some time, but there it is. If you go to the test network, you can see these objects now in the web interface. Now it creates a connection between the issuer and the holder. Then we offer the credential and it's accepted by the holder, which is again all automatic; you don't really have to do any steps in between, which is extremely nice for quickly developing something. Then we're shutting down the holder, and now we're going to migrate it to the new shared components. All the Indy credentials are moved into the new structure, the keys are moved, everything that you would need as a holder or a mediator — more on that later on. Then we initialize the holder with the shared components, initialize the verifier, all very simple, create the connection, and now the proof is requested. And here we can see that the proof was presented, and this is what we shared with the verifier as the holder. So all of this is now quite simply possible with the new 0.4.0 release. I can go back to the presentation. Let's see. And I think Karim will tell you something about our beautiful new documentation. Well, it's not that new anymore, but I did want to mention it, because we have updated it significantly since then.
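The proof request that closes the flow can be sketched as follows, again assuming `connectionId` (verifier to holder) and `credentialDefinitionId` from earlier; the request name and the restriction on one attribute are illustrative:

```typescript
// Request an AnonCreds proof from the holder over the new connection.
// The anoncreds format uses the snake_case field names from the
// AnonCreds proof-request structure.
await verifier.proofs.requestProof({
  connectionId,
  protocolVersion: 'v2',
  proofFormats: {
    anoncreds: {
      name: 'workshop-proof-request',
      version: '1.0',
      requested_attributes: {
        name: {
          name: 'name',
          // Only accept credentials from our demo credential definition
          restrictions: [{ cred_def_id: credentialDefinitionId }],
        },
      },
    },
  },
})
```

Swapping `anoncreds` for a JSON-LD format key here (with the matching format service registered) is how you would request a JSON-LD credential instead.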
So, for those who were at the last presentation, about a year ago, on the 0.3.0 release: that was, I think, when we first introduced the documentation website. I'll put the link in the chat so you can follow along: aries.js.org. Yep, there it is. Back then it was very, very minimal, and sadly still not everything has been documented, but I think we're getting somewhere. The first thing I want to point out is that we have versioned documentation. We didn't replace everything with the 0.4.0 docs, so if you are still working with 0.3.0, that's no problem. We would obviously advise you to upgrade and migrate at some point, but the documentation that was previously there is still there; as you can see in the image, in the top right corner you can choose a different version. Additionally — Berend has talked about this quite extensively and shown all kinds of examples — we have migration guides. We already had them from 0.1.0 to 0.2.0 and from 0.2.0 to 0.3.0, and now also from 0.3.0 to 0.4.0. The API has changed here and there. The script that Berend just showed you was really about the storage: the models, or the objects that are stored, are slightly different in terms of data structures, so there needs to be some migration there, and for that you can use the script. But these migration pages are really about the 0.3.0 API versus the 0.4.0 API and how to migrate: basically, which lines of code do you need to change, and how do you need to change them. We really try to guide you through that there.
And lastly, what I wanted to point out about the documentation, which I think is quite cool, is that we have these beautiful little tabs, as you can see in the middle. We have an Indy SDK and an AnonCreds tab, for instance — this is just one example, there are many — which basically tells you: okay, if you're still using the Indy SDK, this is how you should call this function, but if you are using AnonCreds, then the function calls should look like this. I think we can move on; that was the documentation. Then I think it is time to talk about the roadmap. Oh, no, this is Ariel's slide, on the server side. Sorry, I was writing in the chat. That's okay. So yeah, actually, we already support the server side; both Animo and us are using AFJ quite extensively for backends, but this is something we want to focus on more in the coming releases, so that's why we wanted to put it here. What we want to say is that even if AFJ is mostly known for mobile environments or for the holder side, we think it's worth mentioning that AFJ supports almost all the features needed to create an agent for other roles, like mediators, because we fully support both the mediator coordination protocol and message pickup v1 and v2. We have a very basic message queue in our default implementation, but you can plug in something more sophisticated, like storing it in Redis or Postgres, and you can add push notifications. So we have the tools to make an AFJ mediator work, or to build the basis of a mediator that is compatible with all the protocols defined for that. And the same for credential issuance: we usually use AFJ for the holder and verifier roles, but we can also issue credentials in various formats.
In the case of AnonCreds, it's true that we don't currently support revocable credentials, but we are very close to that; maybe in a few weeks or so we'll have the PR merged, so in the next minor version we will support it. We can continue to the next slide. Some reasons for this. Okay, these are mainly our reasons for choosing AFJ. We have a very small team, and you know it's hard to find skilled developers nowadays, so we wanted to share as much of the code base as we could. The good thing about using AFJ for both sides is that we don't have to master two different languages. That's not only an issue when you're dealing with high-level code: if you need to adapt a low-level library, you will also need to create wrappers, and you know it's not always straightforward to create them. So if we can have the whole team work in the same environment with the same tools, it's usually faster, especially in small teams. The other reason is that we wanted to create custom protocols. For instance, in our project we are using DIDComm for chat, so we have created some protocols to send media to other parties — profile pictures and that kind of thing. For us it was a lot easier to make a single extension for AFJ instead of having to create two extensions, one for the server side and another for the client side.
Some other nice features of AFJ: the storage backend flexibility. It's true that at the moment we are a little bit tied — we only have the Askar implementation as an alternative to the Indy SDK — but our goal in the mid term would be to make it not only ledger agnostic but also database backend agnostic, in a way that we can adapt it, as mentioned at the beginning of the conversation, to use in a browser or wherever. And, something Karim already answered in the chat: we do have an extension in the aries-framework-javascript-ext repo that provides a REST interface, so you can easily set up an agent with a REST API. So those are some of the good things about using AFJ on the server side. One question that I got from the beginning, when I started to work on AFJ, was: when will we have web wallets? Which is a very good question. I think web wallets serve a very nice purpose, and for me demos always came to mind: just a quick demo, you don't need to host a server for it, it's very simple. And with what we talked about with modularizing the framework, every module we separate out gets us closer to full web support. Because right now the main issue with the framework is that we just have too many dependencies on Node-specific items or React Native-specific libraries, and they are just not available for the web. Before, with the Indy SDK, there was simply no web replacement for the entire Indy SDK, so we have to find alternatives for that. Separating everything out into its own module makes it a lot easier and more manageable to replace smaller parts with a browser implementation. For example, for a demo, you could probably create an implementation of Askar for AFJ using local storage and browser crypto.
I think that would be relatively manageable. There is also, of course, WebAssembly, which could use WASI for storage. That is more of a full, actual web wallet and not just a demo environment: you wouldn't use local storage to store your credentials, but WASI could actually provide a way to store credentials on your local device and access them from the browser. Packages like AnonCreds are compilable to WebAssembly as well, so that's something we can also take a look at in the future. And Indy VDR, I think, as well; re-implementing the parts of it that we need would be a lot less effort than re-implementing the Indy SDK. So browser support is definitely on our roadmap, in the back of our minds. As long as the core remains platform agnostic, we can create modules for everything that we need. And then we can slowly move, first to a demo environment for AFJ, where you can have a local credential and just be a holder in the browser, and afterwards to more serious, actual production implementations, for example with WASI or with decentralized web nodes, which would be a great thing to add to AFJ to support secure storage like that. AFJ also really aims to support at least the required parts of the Architecture Reference Framework, the European document that Karim mentioned before. We are relatively far along with that; we have quite a bit of the basic setup — storage, for instance, you always need storage — but there are still some items that we're missing. The first one is SD-JWT: JWT credentials with selective disclosure. This is something that Karim and I have actually been working on for a bit to add to AFJ, so it might already be there in the 0.4.1 release. Then there is ISO 18013, the mDL and MSO group of specifications. This is mainly required for the proximity flow: sharing your credentials over NFC and Bluetooth.
This is something we would love to add to AFJ, but it's just not something we have a lot of experience in, so we're sort of waiting for a TypeScript implementation that we can reuse in our framework. Parties are working on that, so we would love to have this before a 0.5.0 release. And to be more compliant, we would also need to support basically all the OpenID for VC protocols, so issuance and presentation. The demo that I just showed you, with the wallets, had all those items, but right now they're not inside AFJ yet, mainly because we had to move a bit quicker and didn't want to work through all the architectural questions we're facing for that project first. But they will be backported into AFJ to fully support the OpenID for VC stack, which is a very nice thing to optionally add to your agent. And the last item: the ARF mentions, I think, that all compliant wallets need a secure enclave or hardware support, a hardware security module — on Android I think it's called secure shared storage or something, I forgot. But you basically need hardware-bound crypto, which is extremely interesting and something we definitely would also like to add. We don't have a clear way to do this yet, but it's definitely something we're looking to other parties to help us with, or for us to implement. And even outside of the ARF, this is just a massive benefit for storage if you don't have to deal with exposing your keys in memory and being vulnerable to all kinds of attacks. I think maybe Karim has something to add to this. Well, I want to add that we are pretty far along indeed: the demo Berend gave previously shows that OpenID for Verifiable Credential Issuance is currently included in the 0.4.0 release as well. The verification protocols are not in yet, but as Berend showed you in the demo, it works.
So there's ongoing work on implementing that, as well as the SD-JWT credential format; we've been working on that, and indeed we think it will probably be added in the 0.4.1 release already. Yes. Yeah, so one other thing we found out when you remove everything from the core is that setting up your agent becomes quite a big chore. Before, we just provided some configuration and the agent resolved the platform-specific dependencies; now you have to provide all these modules, you have to know which ones you want to use, and it can get quite complex — we've definitely seen some GitHub issues about that already. So this is something we know is a problem and that we are working on. Some of that work includes reworking the wallet a bit; I think this is mainly about separating the crypto from the storage, but Ariel can fill me in at the end if I missed something. We would also like to make the core a lot smaller. Right now there are still a lot of items in the core that are quite use-case specific — things we would like the user to be able to provide instead of the core providing them for you. Not all use cases need everything, and especially when working with native libraries we want your app to be as small as possible: not everyone needs Indy VDR, and not everyone needs the Indy SDK for storage when Askar, or your own implementation on top of something like React Native MMKV, also suffices. So making the core smaller would, in our opinion, make apps a lot better because it makes them smaller. And the last point, which I think is very important for making AFJ a lot more starter and beginner friendly for people just coming into the space, is to create pre-configured agent configurations — which is quite a mouthful.
So basically here we would say: okay, a very common use case is this combination — AnonCreds, Indy VDR, Askar, and maybe some configuration for your connections or a mediator. We can create packages that contain exactly that, and then you could plug it in and your agent setup would be back to just one line of code. Also think of, for example, an ARF agent that would be compliant: you just import one line, and the agent you're using is fully ARF compliant, or whatever interop profile may be defined — an AIP 1 agent, an AIP 2 agent, something like that. Yep. Yeah, and also for DIDComm v2 support we can pre-configure the agent. At the moment — I don't think anybody asked, but — we are also pretty close to supporting DIDComm v2 in AFJ. This is one of our priorities for the next major release. The idea is that we will support both DIDComm v1 and v2, initially by using Askar as the backend for that. All right. Well, this was a brief overview of — well, was it brief? No, it was not brief at all. This was an 80-minute overview of the 0.4.0 release. There are a lot of questions in chat; some have been answered, some haven't, so the rest of the time is really up for questions. So, Sean, I don't know if you want to lead that or... I'm happy to. The presenters — Karim, Ariel and Berend — have been fantastic about answering questions as we go, so we can probably get some more questions now. Denver, I did want to make a point; you just asked about the meetings. Those are community calls every Thursday. They're not a presentation like what the team put on today, but we also have the asynchronous discussions that happen in Discord, and we have other workshops — I'm going to put a link to that right now in chat. Workshops for AnonCreds: Stephen Curran, who was on the call a few minutes ago — he's gone now.
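The pre-configured agent idea could look something like the sketch below. To be clear, no such preset ships with AFJ today — `anoncredsAskarModules` is a hypothetical helper name — but it illustrates how a package bundling a common module combination would collapse the setup back toward one line:

```typescript
import { Agent } from '@aries-framework/core'
import { agentDependencies } from '@aries-framework/node'
import { AskarModule } from '@aries-framework/askar'
import { AnonCredsModule } from '@aries-framework/anoncreds'
import { AnonCredsRsModule } from '@aries-framework/anoncreds-rs'
import { IndyVdrModule, IndyVdrAnonCredsRegistry } from '@aries-framework/indy-vdr'
import { ariesAskar } from '@hyperledger/aries-askar-nodejs'
import { anoncreds } from '@hyperledger/anoncreds-nodejs'
import { indyVdr } from '@hyperledger/indy-vdr-nodejs'

// Hypothetical preset factory bundling the common
// AnonCreds + Indy VDR + Askar combination
export function anoncredsAskarModules() {
  return {
    askar: new AskarModule({ ariesAskar }),
    anoncredsRs: new AnonCredsRsModule({ anoncreds }),
    anoncreds: new AnonCredsModule({ registries: [new IndyVdrAnonCredsRegistry()] }),
    indyVdr: new IndyVdrModule({
      indyVdr,
      networks: [
        {
          indyNamespace: 'bcovrin:test',
          genesisTransactions: '<genesis transactions here>',
          isProduction: false,
        },
      ],
    }),
  }
}

// With the preset published as a package, agent setup is roughly one line again:
const agent = new Agent({
  config: { label: 'demo', walletConfig: { id: 'demo', key: 'demo-key' } },
  dependencies: agentDependencies,
  modules: anoncredsAskarModules(),
})
```

An ARF or AIP preset would work the same way, just with a different module set behind the factory.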
Stephen Curran did a fantastic workshop on AnonCreds, and we have some previous hands-on workshops on Indy and Aries. Jorge asked: will AFJ support adding new DIDComm protocols once DIDComm v2 support is added? Yes, definitely. I mean, that is already possible right now. DIDComm is currently still part of the core. So DIDComm has two different facets, I guess: you have the actual DIDComm core logic, and then you have protocols that are built on top of that. And let me quickly find how to go back all these slides... yeah, here. So as you can see here — and I think the plan is to do this in the coming months — these two, the action menu and the question-and-answer protocols, are both DIDComm protocols defined as separate modules, because not everyone needs those, right? Not everyone needs DIDComm as a whole, especially now with the introduction of OpenID for VCI and OpenID for Verifiable Credentials as a whole; you might not want to include DIDComm. So the plan is to move DIDComm out of the core as well, so it's basically an optional dependency you can include or not. And then, with a separate DIDComm module, you'd be able to define other modules that extend DIDComm or build on top of that DIDComm module. So yeah, definitely. Cool. And DIDComm is the reason why I got involved in decentralized identity to begin with, so I love DIDComm. All I want is to never have to use LinkedIn again, thanks to DIDComm. Someone just asked: does the AnonCreds package support any NIST-approved crypto? Since Stephen has left, can anybody answer that question? So I think the very short answer is no. We use CL signatures, which are RSA-based. Stephen or Mike, probably, or Andrew could give a better answer on this, but yeah, the answer is no.
I think they are looking, for AnonCreds v2, at PS signatures, which might be ES256-based — but I'm making a very big assumption there; I haven't looked at that specification. But I do know that v2 is looking to broaden its crypto suites. Cool. Viet asked: can we revoke credentials? Yes — I think that question was actually answered a few times. Yeah, so for AnonCreds, that's true in a sense. We're still working on the issuer side of that, so strictly no, because the issuer is the one that revokes, and that's not yet supported; but you can prove that your credential hasn't been revoked yet, as a holder. Because you have different flavors, if you will, of revocation. For W3C, you also have Status List 2021 — or 2020, I don't remember. That is not yet supported, but it will probably be added; the PR supporting that is almost finished, so at least initial support, I think, will be there very soon, in a few weeks or so. So I guess when Timo comes back from his vacation, everything will be done. I'm going to tell Timo you said that. Do AnonCreds have any way to add NIST-approved key types? That's the same question, right? The last one was about revocation, but this is about NIST-approved key types, and AJ had asked that. One before that. Oh, maybe. I think there was one about NIST. I'm going in reverse and I'm getting lost. Okay, cool. Was there any consideration to use the FIDO standard to store and manage keys on a mobile device? Not that I know of. I'm personally also not too familiar with it; I looked into it a while ago, years ago, but no, not that I know of. Maybe someone else can comment on this, because I'm not in every working group. Anybody want to comment on that one?
I think if you really want a correct answer to that, opening an issue on Askar would be best, because basically if Askar supports it, then we support it — but I'm also not 100% sure. No worries. Nile just asked, following up on an earlier thread, whether the AnonCreds and Indy VDR modules are necessary with the Indy SDK, for example. And I'm sorry, I just noticed Ariel replied. All right, great. And there's a GitHub link. Okay. Denver asked: in terms of setup, there are quite a number of issues with yarn install on Windows for Node versions 16 and 18 — are those being recognized, and is it affecting anyone else in terms of running the demo? Oh, that's answered. Yeah, but that's something that may be important to clarify, because there were lots of questions about that in the last few months: in the Node.js environment, we do have some problems with Node versions below 18. So we will say that we support Node 18 onwards, because there is an issue in the JavaScript wrappers that prevents the shared components from working with the performance they should. So, to make it short: you will need to use Node 18 to properly use AFJ on Node.js. Okay. Can you explain a bit about unrevealed attribute support in 0.4.0? So, yeah, I read that question and I don't completely understand it; I guess this is hinting at selective disclosure, but I'm going to pass this to Berend, because he just nodded that he knows how to answer it. Yeah, so I assume it's about selective disclosure and ZKPs. Nothing really changed between 0.3.0 and 0.4.0: we still support all of the AnonCreds zero-knowledge proof features — so predicates, selective disclosure, all of that is still in there. For JSON-LD issuance or verification, I think we don't yet, but that's mainly because of present proof v2 support. We do support the underlying crypto for selective disclosure with JSON-LD.
Any plans to support the Deno runtime? Sorry, I stole your role there. No, take it, go. Yeah, at one point I tried to create a Deno wrapper for the Indy SDK, which was sort of working, but then we all moved to the shared components, so I left that work. I think it's definitely doable — Deno provides an FFI interface by default, so there is a way to do it. I think that would definitely be something cool, and also definitely something we're open to contributions for. We can give some pointers, and it would just be Askar, Indy VDR and AnonCreds. Yeah, I'm going to give it back to you, Sean. Yeah. And if you'd like to contribute, please check out the Aries JavaScript channel on Discord and the meeting that the Aries JavaScript community has every Thursday. I'm going to repost the links for this session in a second, just so everybody's got them. That was the Deno question. I have a question from Denver from earlier — nope, somebody just answered that. The last open question I can find: will it be possible to have more frequent meetups? Yes — there are weekly meetings. They're less of a demonstration and a presentation like today, Denver, but the team is definitely meeting, and they're always having conversations on Discord. And, to jump in: new features are also being demoed there, so if someone worked on a PR or something that gets in, they demo it. Yeah. I'm out of old questions; basically, if a question has been answered in the chat, I'm not announcing it again. I think that's it. Anybody else have a comment? Yeah, we have a question from Yaku about whether we suggest migrating from 0.3.0 with the Indy SDK to 0.4.0 with the Indy SDK, or going directly to the shared components. I would suggest moving directly to the shared components.
I don't know what the other guys say about that. But, because I think the question was also about the scripts: how do you suggest doing it? Yes — the migration works only on the mobile side, on SQLite. So it's only mobile; specifically for the Indy SDK to Askar, it's only mobile SQLite. We also support mediators; but because we currently don't support migrating schemas and credential definitions, it's a holder or a mediator that would be perfectly fine to migrate with it. You can directly migrate from the Indy SDK on 0.3.0 to the shared components with 0.4.0, but there is a specific order in which you need to run the Askar migration script and the AFJ storage updater, which is all explained in the documentation on migrating from 0.3.0 to 0.4.0, and also on migrating from the Indy SDK to Askar. So the docs should give you every bit of information you need. Are there any other questions for this team, who've done such a great job with this presentation? All right. I would like to thank everybody for joining us today, and I would really like to thank the presenters — Ariel, Berend, and Karim — for doing such a great job of both going into detail on what's new in AFJ 0.4.0, and also on the how and the why of how things work. It was really a great presentation, and thank you. As I mentioned at the start, Hyperledger is powered by the contributors and maintainers who make all these projects work. We would love your contribution, we would love you to be involved, and we'd love you to use the software. Check out the Discord and the GitHub and the wiki for more information. I'm going to put the links back in here again. Just a note: when we do a workshop like this — let me grab a copy — I generally send out a thank-you note to the folks who registered.
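The migration step itself can be sketched roughly like this, assuming the `@aries-framework/indy-sdk-to-askar-migration` package from the 0.4.x release line; the database path is platform-specific and deliberately left as a placeholder — the migration docs explain where to find it and the exact order of steps:

```typescript
import { IndySdkToAskarMigrationUpdater } from '@aries-framework/indy-sdk-to-askar-migration'

// 'agent' is a 0.4.0 agent configured with Askar that has NOT been
// initialized yet; 'dbPath' points at the old Indy SDK SQLite database
const dbPath = '<platform-specific path to the Indy SDK sqlite wallet>'
const updater = await IndySdkToAskarMigrationUpdater.initialize({ dbPath, agent })
await updater.update()

// After the update, initializing the agent opens the migrated Askar wallet
await agent.initialize()
```

Since schemas and credential definitions are not migrated, this path is meant for holder and mediator wallets, not issuers.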
I'm going to put these links, as well as a link to the wiki page for this workshop, in both the YouTube video and the thank-you note, in case you want to share the video or the deck with your colleagues, co-workers or project partners, whatever you're doing. YouTube is going to encode the video over the course of half an hour to 45 minutes, and then I'll hopefully be able to get that out by either the end of the day today or first thing tomorrow morning. But I'd really like to thank everyone for attending, and most importantly: thank you, Karim; thank you, Berend; thank you, Ariel, for doing such a great job. Thank you, Sean, and thanks to everyone for coming as well. Have a great day. Have a great day, everybody.