Hello, good day, Bruce. Good morning, Lance. How's it going? Good. We might have a small group today. Might just be us, so we might be able to keep it short. Well, yeah, that's right. Hacken was going to tell us next month where he's working now. Yeah, and next month being this month, right? Must be, yeah. Yeah, and then Rodolfo is traveling. I think he's in Thailand at this point, so good for him. All right, well, let's get started. Hello, everybody. This is the February 6th, 2023 Aries DIDComm v2 Working Group meeting. And I should remind you of the Hyperledger Antitrust Policy and the Hyperledger Code of Conduct. We have a small group today. Please, Bruce, if you'd like, you can add yourself to the attendees; it's up to you. And yeah, let's just go through and do some updates. Or I can even add you, if you want me to, Bruce. It's up to you. Zoom isn't showing me the chat today. Oh, yeah. Oh, there it is. It just rearranged. It's okay. I can do it. Okay. Yeah, I should also post the link to today's... Let's see here. Yep, there's the link to today's meeting. Okay. So I'm in edit mode, yep. Okay. I don't have much extra to say right now about the Aries Agent Test Harness. I was working through... It naturally wants to spin up a VON network and a tails server and some of these other things. And I'm trying to sift through how much of that stuff would actually be needed for an agent that's focused specifically on DIDComm v2 and not as focused on, especially, Indy-based things. But of course, the Aries Agent Test Harness needs some supporting ecosystem in order to execute its tests. So I'm trying to sift through that. And I was super busy this week, so I didn't get to spend a ton of time on it.
But hopefully, at some point, I can report that I've been able to run at least a single DIDComm v2 test with an agent that is not an Aries agent at all, or what we would consider a classic Aries agent in terms of, you know, ACA-Py and AFJ. This agent would be something else. So yeah. Cool. That would be great. Yeah. Is there anything else, Bruce, that you want to give in terms of updates? Scanning through to see if I have anything else. Yeah, anything on picos? Actually, just so you know, we have a customer that is interested in Internet of Things devices and DIDComm. So we mentioned the picos project. We said we didn't know a ton about it, but we know a group that's working on it and that it might be nice to look into. Cool. Thank you. Yeah. The students have made some progress, but nothing I can readily articulate. They're still at the same point that's shown here. Okay. It's just they're not as stuck at that point as they were. They see their way clear to move forward now. Okay. Very good. And the work they're doing, I see this, let's see, can I do that like that? Yeah. I guess we had noted that this is DIDComm v1. It's not even all the way to DIDComm v1. Okay. It predates that; it's partway through AIP 1. Okay. The approach we took was to use VC-as-a-service on top of Trust over IP layer 2. So we didn't implement any of layer 3, but there are services, in particular we worked with Trinsic, that will provide an API for issuing VCs, for providing proof requests, and for verifying the credentials that are provided in response. And so there's no code in this repo for doing anything with VCs. But it is a pretty complete implementation of layer 2. Okay. Great. At the DIDComm v1 level, with the DIDComm v1 kind of packaging and so forth. Okay. Very good. Sorry, a genuine question: does that mean that the picos have to be connected to the internet? Picos by their nature are connected to the internet, yes.
When a pico is born, it immediately presents a fairly rich API from birth, and then you just extend it by adding rule sets to it; you enrich its API until it does what you want it to do. And one of the things you can do to enrich its API is to install the code Lance was just showing into the pico. At that point, it becomes a capable layer 2 agent. Nice. There's like a basic API and then you extend the functionality. Yes. Thank you. Yeah. Very good. And I didn't know about the association with Trinsic, so that's very cool too. They were just the vendor we chose for VC-as-a-service. Okay. Gotcha. I haven't seen them in any Aries meetings for the last year or so. Yeah. I think they're content with what they've implemented. They're able to use it with their customers and so forth. Okay. So I don't know that they have any plans to go to DIDComm v2 at all. I'm not saying they don't, but I'm unaware of any such plans. Yeah. Is it okay for me to note them here? I'm sure they would be happy to have a mention. Yeah. Good. I just talked to Riley recently, just to touch base with him. And I was noting to him that as I got spun up into this community, especially the Aries community, over the last year, I was seeing the fingerprints of Trinsic all over the place. But at the same time, I was never meeting any Trinsic people in the meetings, while I was getting to know Indicio and Animo and you and many others. And so I kept telling myself, eventually I'll come in contact with the Trinsic folks. And I had actually met Scott at the last IIW. He's one of their developers, if you don't know him. I have not met Scott. Yeah. So in my brain it was like, okay, at some point I need to reach out to Riley and say hi. I had actually learned some stuff from him in a recorded interview that he had done over a year ago.
And so, yeah, it was cool to touch base with him. That's cool. Yeah, they were originally Streetcred. Okay, I didn't know that. And they were one of the first companies that was based on the technology that predated DIDComm v1. They were one of the very first mobile agents, and they implemented things as they were at the time, around when ACA-Py was implemented. But they were a for-profit company at that time already. Gotcha. Okay. I think that was September or so of 2019. Gotcha. Yeah. They work in the shadows. They certainly are in our shadows. Yeah, very good. Good. Okay. So, let's see, do we want to talk any more about... well, I don't know anything about KRL. KRL is just the programming language that is used in picos. It transpiles down to JavaScript. Gotcha. So every rule set becomes a JavaScript module, and the engine that provides runtime support for the picos knows how to require those modules. Interesting. So when an event comes into a pico, the engine resuscitates it, because a pico primarily spends most of its life as data. You can think of it as one big JSON. The pico engine brings it in, finds out which rules need to be considered for the event, and then puts the pico to work by requiring the compiled rule sets and executing them. And when it's done, the pico goes back to sleep. In a busy environment, the pico engine provides a queue for each pico so that it handles the events one at a time. As a result of that behavior, the KRL programmer never has to deal with critical sections or worry about concurrency, because the engine will be giving it one thing at a time to do, which it does to completion, and then it goes to sleep, as it were. Well, that's pretty much ideal for most things that agents do. Generally, the agent is alerted to something that's happening, takes care of it, and then it just remains, as Alex said, in the background. How does a pico handle key management?
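The event-handling model just described, one queue per pico so rule authors never worry about concurrency, can be reduced to a small sketch. This is an illustrative Python toy, not the actual pico engine (which is JavaScript); the class and method names here are invented:

```python
import json
from collections import deque

class Pico:
    """Toy pico: persistent state plus an engine-managed event queue.

    Between events a pico is just data (one JSON-serializable dict);
    the engine wakes it, runs matching rules one event at a time, and
    puts it back to sleep. Names are illustrative, not the real API."""

    def __init__(self):
        self.state = {}        # the pico's persistent data ("one big JSON")
        self.queue = deque()   # per-pico queue: events handled one at a time
        self.rules = []        # (selector, handler) pairs, like rule sets

    def install_ruleset(self, selector, handler):
        self.rules.append((selector, handler))

    def raise_event(self, event):
        # Events are queued and drained serially, so rule code runs to
        # completion with no locks or critical sections needed.
        self.queue.append(event)
        while self.queue:
            current = self.queue.popleft()
            for selector, handler in self.rules:
                if selector(current):
                    handler(self.state, current)

    def to_json(self):
        # The pico "sleeps" as data and could be reanimated elsewhere.
        return json.dumps(self.state)
```

A rule set here is just a selector plus a handler that mutates the persistent state; because the queue is drained one event at a time, the handler never races with itself.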
Do you create the private keys on the device? Do you use a service for that? How does that get managed, from a pico perspective? That's a great question, Alex. So the picos are, let's see, you've got the actual rule set there, I think. No, I don't see it there. There's a rule set that associates DIDs. Oh, you're in KRL. Would you mind going up one and then down to the next? Okay. Because the pico engine has different versions. Yes, go ahead and look at that rule set right there. Let me also make it a little bigger here. Oh, thank you. So just scroll down towards the bottom of it. There's very little code involved here, because it uses what we called at that time the Ursa module to pack and unpack. You can see an unpack happening at lines 22 and 23. And there's only one rule in the rule set, which is to create and save a DID, on line 33. So when the DID is requested, we get the channel identifier, and we create a DID for it. That happens in line 41: the new-DID function is called, and we store it in an entity variable of the pico. Picos are persistent. And so we store the DID that is for this event channel identifier in a map of DIDs. That's stored by the pico engine in the database that it maintains behind the scenes, as it were. You've heard of serverless programming, I assume. Okay, great. So serverless doesn't mean there isn't a server; it means that you don't have to worry about it. In a similar way, picos are database-less. It's not that there isn't a database; it's that you don't have to select it, buy it, configure it, connect to it, and so forth. The pico just has persistent data. So you can see that happening in line 41, where we say the entity variable dids, indexed by this event channel identifier, contains the new DID. Yeah. And a channel is like a connection? Yes. Yeah.
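What the rule does, create a DID on request and persist it in an entity variable keyed by the event channel identifier, can be paraphrased in Python roughly like this. The `new_did` generator is a fake stand-in for the Ursa call, and all the names are ours, not the KRL or pico-engine API:

```python
import json
import secrets

def new_did():
    # Stand-in for the Ursa call: the real code derives a DID from a
    # generated keypair; here we just mint an opaque identifier.
    return "did:example:" + secrets.token_hex(8)

def create_and_save_did(entity_vars, event_channel_id):
    # Mirrors the rule described above: store the new DID in a map of
    # DIDs indexed by the event channel identifier. entity_vars plays
    # the role of the pico's persistent, "database-less" entity state.
    dids = entity_vars.setdefault("dids", {})
    if event_channel_id not in dids:
        dids[event_channel_id] = new_did()
    return dids[event_channel_id]
```

Because `entity_vars` stays plain JSON-serializable data, an engine can persist it behind the scenes (for example via `json.dumps(entity_vars)`) without the rule author ever selecting or configuring a database.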
And so at the very top of this, let's see, new DID, let's scroll down a bit to see the new... oh, sorry, up: the new-DID function. There it is right there, line seven. And so at line eight, we ask Ursa, which has some JavaScript code that we build into the pico engine, to generate the DID, and it does so, and then we store it away. I don't know if that answers your question. Yes, that makes sense. I guess the question that I have is, what's the difference between a pico agent and a pico? What's the architecture? Like you said, there's a database, like in serverless; like you said, there's one big engine that keeps track of events and then spins up code. How does that work with the pico engine? Is there one main engine, and how does the coordination between different smaller devices work? So you have as many picos as you need. And each pico behaves like a very small virtual machine. And when I say very, I mean so small that you wouldn't even think of spinning up Linux on it; it's much smaller than that. And it is supported at runtime by the pico engine that happens to be hosting it at the moment. In principle, a pico can move from one pico engine to another. If you think of a pico as being a bundle of data and process, the data is a very large JSON object. And that data could be reconstituted, or reanimated, on a different pico engine. The very large JSON would include the keys and the DIDs and so forth. Got it, I see. Okay, yeah, that makes sense; it helps when it maps onto something familiar. Cool. Thank you. Thanks, Lance, for facilitating with the slides, as it were. Oh, yeah. Happy to. This is fascinating. I honestly know very little about IoT, so it would be very interesting for me to dive deeper into this. So the connection between picos and IoT is partly historical.
That's what they were invented for, by Phil Windley, but the connection isn't necessary. A pico is kind of ideal for an IoT device because it generally doesn't communicate with its handler very often. And so a pico can just hold the data that it saw last, or it can hold a month's worth of data that it saw recently. But picos can also be used for general-purpose event-driven systems. And that's where picos as agents become handy, because you can have multiple entities that are running at the same time, each represented by a pico, and they can be communicating over secure channels. And some of them will also be device shadows, or digital twins, of IoT devices. Others will be representing persons or other things or ideas or concepts. And so that communication layer, you would essentially like it to be DIDComm v2, or are we talking about something different? Yes, that's the connection to this meeting. That's what Phil would like the communication to be, up a layer. Currently communication between picos is done over the channels that they open up to each other. You know how on a classic Linux server you open up ports to the outside world, and you're limited to what, 64,000 or so ports? A pico has an unbounded number of channels that it can open up to the outside world. And best practice is to give a different channel to every user and purpose. So the same pico might, if it's used by a thousand people or other processes, have a separate channel for each one of those into the pico. And if one ever misbehaves, you can simply delete that channel, and the other users continue on happily while the one miscreant is blocked out. Interesting. And Phil would like picos to communicate with each other, and with other agents, using DIDComm v2 built on top of that.
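The channel model just described, one channel per user and purpose, with per-channel revocation, can be sketched in a few lines. This is a toy illustration; the names (`ChannelRegistry`, `open_channel`) are invented and are not the pico engine's real API:

```python
import secrets

class ChannelRegistry:
    """Sketch of per-caller channels into a pico: each user or purpose
    gets its own channel identifier, and revoking one caller never
    disturbs the rest. Purely illustrative."""

    def __init__(self):
        self.channels = {}  # channel id -> owner/purpose label

    def open_channel(self, owner):
        # Channel ids are random strings, so the supply is effectively
        # unbounded, unlike a server's ~64k ports.
        cid = secrets.token_hex(8)
        self.channels[cid] = owner
        return cid

    def deliver(self, cid, message):
        if cid not in self.channels:
            raise PermissionError("unknown or revoked channel")
        return (self.channels[cid], message)

    def revoke(self, cid):
        # Blocking one misbehaving caller: delete only their channel;
        # every other channel keeps working.
        self.channels.pop(cid, None)
```

Giving every caller a distinct channel is what makes revocation surgical: deleting one entry blocks exactly one miscreant.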
So in the DID document, the endpoint would include a channel identifier to reach the pico, but every communication would be a DIDComm message. Interesting. Yeah, very good. And I think I see that Phil, I don't know if he has a new book coming out, but he's doing a presentation with NDCO at the end of February. Yes, his book should be out by then. Great. On digital identity. Very good. Great. Okay. Anything else that we want to talk about in terms of picos or the Internet of Things? Okay. Good. Just scanning here to think if I have any updates. I was super busy last week, so I did not attend the ACA-Py meeting. I think I caught just a little bit of the AFJ meeting, but not much. You were mentioned in the Aries meeting. Okay. Something along the lines of Sam Curran saying, if Lance were here, he could tell you more, about something to do with DIDComm v2, probably. Yes. Okay. I didn't speak up because I didn't consider myself an authority on the topic. Yeah, fair. Okay. Although at this point, you're probably in the top 10 in the world. Yeah. Okay. There is movement in, and maybe we should... are we listing Trust over IP? Let's see. No. Let's add that: Trust over IP, trust spanning protocol. They're actually going into, well, it looks like they're going to move into the proposal phase, kind of beyond the introduction phase of, you know, what is the trust spanning protocol, what are the goals of the trust spanning protocol. And it sounds like Sam Smith is going to present on Wednesday at the next meeting. And there is a video. I should grab that link. That's cool. It sounds like Daniel is bringing the various parties of his grand unified theory together. That's cool. Yes. I saw that Sam joined the task force; he's become one of the co-chairs as well.
And yeah, I'm curious when Drummond is going to bring Daniel in from DWN and have a three-way conversation, the two Daniels and one Sam, just to debate it over. And yeah, okay. So he has a video presentation that everybody's supposed to watch, that diagrams his vision for the trust spanning layer. Yeah. So I'll note: TSP vision. I just started watching it, but I haven't gotten through it. But it's almost essential; they said essential to watch at this point if you're going to attend that Wednesday meeting. So thank you for that. And I'll put it in the chat as well. Oh, thank you. Yeah. Okay, very good. Okay, we wanted to do a DIDComm v1 versus v2 discussion, which I would have asked about in the Wednesday meeting. But yeah, I was stuck in another meeting, so I never got to ask about that. And so, does anybody have anything to add for our simple explanation of the benefits of upgrading? Yeah, I'm not familiar with v1 specifically. I think it's just highly, like, tightly coupled with every system. But yeah, I don't know. Yeah, fair enough. Oh, that's interesting; that comment enlightens me a little bit. So you're saying that the Aries RFCs are tightly coupled with DIDComm v1. The Aries RFCs are coupled with every single system, whereas DIDComm v2 is like one big specification. Its specification is at the Decentralized Identity Foundation. Yeah. Thanks, Alex. That really helps. It sounds like we have people who know v2 well, and one person who knows v1 only. So I'm very interested in this topic. Because my understanding is that the DIDComm v1 messages and the DIDComm v2 messages themselves, the structures, the headers, those should be similar. I think what changes is the encryption envelope, the majority of it. Well, thanks, Alex. Yeah, that really helps. That's my understanding. But yeah, I'm not deeply familiar with v1 myself.
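Alex's point, that the plaintext message structures are similar while the envelope is the big change, can be illustrated side by side. The field shapes below follow my reading of the Aries RFCs and the DIDComm v2 spec, using trust ping as the example; the DIDs and ids are made up:

```python
# The same ping as DIDComm v1 vs v2 plaintext. The encryption envelope
# around these (Indy-style pack() vs a standard JWE) is the bigger
# difference and is only named in comments here.

didcomm_v1_msg = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",  # v1: @-prefixed keys
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
    "~thread": {"thid": "518be002-de8e-456e-b3d5-8fe472477a86"},
    "comment": "ping",
}
# v1 envelope: the Indy "pack" format (Authcrypt/Anoncrypt), a JWE-like
# structure specific to the Aries/Indy stack.

didcomm_v2_msg = {
    "type": "https://didcomm.org/trust-ping/2.0/ping",   # v2: plain keys
    "id": "518be002-de8e-456e-b3d5-8fe472477a86",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "body": {"response_requested": True},                # payload lives in body
}
# v2 envelope: a standard JWE (e.g. ECDH-1PU for authenticated
# encryption), so any JOSE library can handle the outer layer.
```

The headers really are close cousins; most of the migration effort sits in swapping the envelope handling, not in reshaping the messages.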
So that's everything that I've read and talked to people about. Yeah. Okay, cool. Thank you. Okay. Good. In terms of AIP 3, again, I wasn't at the last Aries Working Group; I don't know how much they talked about it and maybe expanded on it. Maybe I should open it to see if that document has changed much. I think about a week ago, I started listing these Aries RFCs that are being replaced or overtaken by the DIDComm v2 spec. But this list is pretty little; it should be much longer than this. And it's a little bit difficult to ferret out, because DIDComm covers so much and touches so many different parts, it's a little bit difficult to sift through: what should I include in this list, and what shouldn't I? There are some obvious things that I haven't put in here yet, like out-of-band 2.1 or 2.0; there's a 1.0 in the Aries RFCs. Yeah, so more work needs to be done here, but I kind of tried to start enumerating the list of things. Well, that's great work, actually. What would be really helpful would be to see a two-column presentation. So probably that would mean that this note would refer to a different page. But on the left, you would see the DIDComm v1 spec for something, and on the right, you would see the DIDComm v2 spec for that same thing. Okay. I mean, it kind of clicked in my mind when I saw the 0005-didcomm and DIDComm v2, the two links side by side. Yeah, because Aries RFC 0005 has an explanation of DIDComm, and that's the Aries, the DIDComm v1, spec for DIDComm. And then the Decentralized Identity Foundation has a different explanation of DIDComm. Yeah, it would be interesting to see them side by side. And then of course, trust ping 1.0 and trust ping 2.0 would be side by side, and so forth. Yeah, okay. Yeah, that'd be very helpful for someone coming from v1 to v2. Okay, that sounds good.
And as to the list, you asked what you should put in there: I would think that AIP 1, or maybe AIP 2, would be a good list. In terms of, okay, so in terms of things for the left-hand side. Okay, right. Yeah, very good. Okay. Let's see, any other updates? Maybe, let's see, can I see a history? I'm not sure how to see a history of changes. I am not at all familiar with HackMD, but I wasn't aware of this view either. This is pretty cool. Yes, with the two panes on it. Yeah, when I worked with Markdown in GitHub, I kind of mentally have this right side versus the preview side. This is cool, to see them side by side. Side by side is a very powerful pedagogical tool. Yeah. And I feel like VS Code also does a nice job with that. Or am I thinking of IntelliJ? I don't know. I started coding on a teletype machine and I have never made the jump to IDEs. Oh, wow. Okay. Well, anyways, just rolling through here. Yeah. Good work, Lance. We've been looking forward to your putting that into this. Okay. Yeah. I think that was the thing we were talking about in the Aries meeting last Wednesday. Gotcha. Was that you were going to be doing that? Right. Fair enough. Okay. Maybe it might be worth bringing up the Aries Working Group notes. I was looking at the different AIP profiles and noticed that there's an RFC 0587 about the new encryption envelope, which came up like two years ago, when that envelope tech was still not regularly implemented yet. We can mention that. Oh, you put that in the chat. Thank you. Oh, that is terribly useful. Thank you, Alex. So these are the encryption envelopes that end up becoming part of DIDComm v2, then? Yeah, that's kind of my understanding. Cool. Thank you. This is going to help this old dog learn a new trick. Yeah. And in the spec, I think they call these out as profiles, right? Or am I wrong about that? Let's see here. Yes.
As you can see, AIP 2.0, just near the top right of that document: Prepare for DIDComm v2 is a sub-target of AIP 2.0. Interesting. Let's see. Yeah, defined profiles. Here we go. I think they call it a sub-profile; that's the terminology they use for this, because I was looking, and there's also a sub-profile for the Indy credentials and BBS+. Are you saying from the spec here, the defined profiles? Yeah. I think if you click on AIP 2.0, on that link, there are different sub-profiles. AIP 2.0. Yeah. Some are in here. Just scroll down; it lists the different sub-profiles. Sub-targets. Yeah, sub-targets. Cool. Like Indy cred is a sub-target of AIP 2. So similar to what they have here in AIP 3.0. Correct. With the different sub-profiles; that's the way they sub-target, sorry, that's the way they classify them. Yeah. The light's coming on more. So they're targets because this is what an implementation is trying to hit or reach. Okay. Yeah. Okay. Let's see. I was trying to bring up the... It is rather a long list, but I think all of these concepts and features will have something corresponding to them in DIDComm v2 or AIP 3.0, or both. Let's see here. Aries Working Group. Okay, the meetings list. Okay, so bringing that up. Okay, two discussion topics. Domain validation for issuers and verifiers. I think it's positioned as connecting existing systems and DIDs, an important undertaking that would boost adoption and usefulness of DIDs. Yeah. The broader concept here is that there are already ways where we feel we can trust things. For example, you can trust an email address by sending something to it and expecting some action in return. Or you can use DNS to find a TXT record or something for a particular domain. These are different ways of doing the challenge-response idea.
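The challenge-response pattern just described, prove control of a domain by publishing a verifier-chosen nonce where only the domain's controller can, might be sketched like this. The DNS lookup is injected as a function so the sketch stays runnable without real DNS, and all the names are ours:

```python
import secrets

def make_challenge():
    # The verifier mints an unguessable nonce for this verification.
    return secrets.token_urlsafe(16)

def verify_domain_control(domain, challenge, lookup_txt_records):
    """Check whether the challenge was published for the domain.

    lookup_txt_records(domain) -> list of TXT record strings; injected
    here as a stub for a real DNS query. Only someone who controls the
    domain's DNS could have published the nonce, which is the whole
    trust argument (and why it inherits DNS's trust assumptions)."""
    return any(challenge in record for record in lookup_txt_records(domain))
```

The email variant is the same shape: send the nonce to an address and require it back, so the proof is control of the inbox rather than of DNS.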
And so this was saying, let's integrate some of the classic ways of doing challenge-response into DIDComm. Interesting. I feel like I'm not as familiar with... I'm not sure I've heard of this before. Or am I just blanking? This was something new on the call, so I'm doing my best to recreate the spirit of the discussion, in a call that's almost a week old. Yeah. Sure. Well, I can always go back and listen to it. I probably should. Yeah, that's true. There is a concept of well-known: you can put a .well-known folder on your domain, and there are various customary uses of that. Gotcha. I think this is... Yeah. You can see in the participate thing there, it says decentralized-identity dot well-known. So it's using the .well-known feature that's used by many mechanisms, WebFinger and... sorry, I'm getting into some older history here. But it's a place where the host of a domain can put things that they want to be well known. Yeah. Kind of like robots.txt; it's similar... Oh, yeah. Yeah. Because I think that well-known is used by did:web; that's where you put the documents. Gotcha. So in did:web, it's usually domain slash .well-known slash did.json. Yeah, if it's for that domain. Okay. So is that part of the did:web method? Yeah, it's part of the did:web spec. Ah, cool. Yep. There we go. So yeah, it's usually: if you go slash .well-known slash did.json, you get a DID document. Yeah. But the problem, I guess, is that as long as you trust the DNS resolution, you should be fine. Yes. If you don't trust DNS resolution, that's where the problem comes in. Yeah. So Sam Smith would not love this, I guess. Yeah, fair enough. Okay. Yeah, very good. And I also know that there's some work happening with X.509 certificates, trying to turn those into DIDs. There's a did:x509 that's happening as well. Yeah. For this.
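The did:web lookup just described is a deterministic mapping, per the did:web spec as I understand it: a bare domain resolves to `/.well-known/did.json`, and additional colon-separated segments in the DID become URL path segments with `did.json` appended. A minimal sketch:

```python
from urllib.parse import unquote

def did_web_to_url(did):
    """Map a did:web identifier to the URL of its DID document."""
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web DID")
    parts = did[len(prefix):].split(":")
    # A port in the host is percent-encoded in the DID (%3A), so decode it.
    host = unquote(parts[0])
    if len(parts) == 1:
        # Bare domain: document lives under .well-known.
        return "https://" + host + "/.well-known/did.json"
    # Extra segments become path components, with did.json appended.
    return "https://" + host + "/" + "/".join(parts[1:]) + "/did.json"
```

So `did:web:example.com` fetches `https://example.com/.well-known/did.json`, and `did:web:example.com:user:alice` fetches `https://example.com/user/alice/did.json`; the trust in the result is exactly the trust in DNS plus the web PKI, which is the caveat raised above.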
And I know that the guys at Saffron are trying to combine X.509 and did:web into a new method that's a little bit more secure. Yeah. In terms of web-based stuff, that's all I know. Yeah, very interesting. Are you aware that historically, certificates were intended to be used for individuals as well as for corporations? A little interesting historical tidbit. So the Netscape Navigator browser was the most popular for quite a while. And they provided you free access to browse the web using Navigator. But you could also purchase a license and publish using Navigator. And part of the purchase price was to obtain a certificate for you as a person. And then you would publish web pages using the browser. And of course, human nature being what it is, people used the free version, and very few people purchased the premium service. And as a result, we now live in a world where it's corporations who have certificates, and individual users of browsers do not. But the intention originally was that it would be a bidirectional thing, the way DIDs are from conception. So it's interesting to see them coming together again now, finally, after all these decades. Yeah, that's cool. Yes. Oh, I see. It's kind of like a credential, like a verifiable credential. So if you put it behind a paywall, right, human nature is not incentivized to do it, right? People like it free. I see. So we definitely don't want to make the same mistake; you want to have companies and people be on the same level. Absolutely. Companies are incentivized to pay for things; people are not. So like you said, human nature is usually the problem, not technology. That's right. Yeah, the hard problems are the social problems, not the technical problems. Yeah, it's people. Yeah. Interesting. Yeah, thank you for the history lesson. Yeah, you're welcome. That was a long time ago. Fascinating.
And of course, recently, certificates have become free with Let's Encrypt and so forth. But it was too little, too late. I mean, the ship sailed. Yeah, that's true. Yeah, that's unfortunate, I guess. Oh, interesting. Yeah, because now it's free; you can use Let's Encrypt. Yeah. So your point, I think, is very important, Alex: we must be careful not to make that mistake. Otherwise, we end up doing the same thing, right? Like, it's cryptographically signed and bound and all that; you still have the same properties, but you have to use it slightly differently. At that point, yeah. Yeah. Fair, fair. I mean, hopefully computation is cheap enough now that people don't need to charge for it. Do you know the reasons why they did charge for that, out of curiosity? I'm actually curious now. Is it just to make money, or is it just... Yeah, they were hoping. Yes, I believe that's what it was. So there was Netscape Navigator and Netscape Navigator Gold. And I, of course, being a human being, did not purchase Gold either. And so I don't really know what they offered. But yeah, I assume it was an attempt to pay for all the work they had already done to create Netscape Navigator. They were trying to recoup their costs. And it's unfortunate, because we could have had something like DIDComm 25 years ago, if not for that. Money, the root of all evil. Yes, indeed. So just knowing that history, you know, you hope you're not doomed to repeat the failures, right? But if you look back at that and say, well, if they had kept it all free, how would they have made money? That's the next question, right? It's the chicken and the egg. It's kind of like, oh, they should have never charged for it. But, you know, either way, maybe they end up dissolving, right? Like, the money would have never existed, right?
Yeah, Tim Berners-Lee really deserved to be knighted, because he chose not to patent or protect in any way his invention of the three languages that make up the web: URL, HTTP, and HTML. He chose to give those to the world. And that's why we have the internet as we know it, because he chose not to take a profit. Thank you, Tim. It's turned out pretty well for him. So maybe that's the lesson: don't bolt on a commercial scheme until it's obvious that you need it and that it will work. I don't know. That's a great point. I mean, most corporations, Wall Street shackles them to a make-a-profit-this-quarter philosophy. But if you take the longer view, you can do something much more useful and, in the end, more valuable. Fascinating. Okay. Yeah, I'll save that off to the side, because certainly, as you know, RootsID has been open-source and open-standards focused. But, you know, it is truly a challenge. We're also a company, right? Otherwise, we'll have to go get regular jobs. And so it's always interesting to consider how you make money and keep the lights on while contributing, not making the same mistake that people made 20 years ago. The business of identity is, yeah, already really hard; yeah, the business side of things. Fascinating. Okay. All right. So it does look like they mentioned something about the tests and the Aries Agent Test Harness. Yeah, certainly the state of implementation should be well described, or well scored, by the test harness, assuming the test harness has all the right tests. Yeah, we've talked about that. And I'm still trying to get good enough with the test harness myself to be able to demonstrate some kind of score, we'll say a pre-AIP 3 score. But, you know, maybe even before that, we've talked about a DIDComm v2 simple, or whatever we will call it, score. So, okay, very good. I know both you and Alex are mentioned in this.
Yeah, yeah, we'll keep on task. Yeah, good. Alex, remind me to create an issue for this on our internal stuff. Yeah. Okay. Okay, very good. All right, we have eight more minutes. Anything else that we want to discuss for today? I'm just looking over our previous stuff. Should we specify how the DID methods like did:peer are used? We're focused on DIDComm v2 communication, but does the rest of the AIP community know that? I think they do. Yeah, and we keep saying that. So, yeah, we're doing a good job, I think, on that. WACI issuance: do you need to be able to resolve and check, et cetera, in order to issue a credential? Yeah, that was a question we wanted to ask. And we discussed sub-roles, right? We talked about the roles. Any other thoughts on that? I think that was an interesting discussion that I haven't spent much more time on, but we were talking about essentially breaking maybe the issuer role into multiple issuer-type roles to capture sub-parts of a protocol. Any other thoughts on that? We had said that you could either do more fine-grained roles, or you could do more fine-grained protocols, essentially these sub-protocols, and have the larger credential issuance protocol be a set of sub-protocols. I was going to say that I have experience using very simple issuers and holders. So, am I annotating your view as well by drawing that line? Oh, yeah, you did. So this is what I use, and then I just do that. The issuer just up and issues a credential by displaying a QR code in a place where the holder can see it, and then the holder receives it, and that's it. Yeah. And there's no acknowledgement, there's no preamble; it's just there and they receive it. And then similarly on the present-proof side, a proof request is presented, the holder responds using their phone agent, and it's finished. So last week when we talked about this briefly, I was thinking, wow, I didn't realize it was so complicated. Yeah.
And I think that it's natural to start here, so I feel like that should be specified. But when we mentioned that, I could see how, when they've planned this more elaborate protocol, each piece has value. And so to say, well, can we have something lesser? That's maybe tough to swallow. Or maybe, I don't know, maybe there's some wisdom there to be had. But I feel like there's also wisdom in what people actually do. Yes, that's why I brought it up. I had a need for an issuer to get a credential into a holder's wallet, and so that's what I did. Right. Yeah, totally fair. And similar for you, Alex, right? When JFF was doing things. Yeah, I guess I already said my two cents on the sub-protocols, sub-profiles. To me, it makes sense to have them at the protocol level, let's keep calling them protocols. Like an issue-simple: you just send a message, right? Just one message, like you said, without the acknowledgement. Then there's a simple-with-ack, right? And then you have a simple-with-request, a simple-with-proposal, a simple-with-offer. Or you have a full one where you can issue multiple credentials, where you can request multiple credentials before issuing a credential. Those are different flows in my mind. To me, it's that either you have small ones that add up to the big one, or you have the big one and you break it down into small ones, at the protocol level. But trying to break down which individual messages are optional, I feel like we don't usually do that for other protocols, right? Take basic messages: you create a chat message; you wouldn't say, hey, I'm only implementing this role of this chat protocol, you would use a different chat protocol. The exchange for setting up a chat can differ between different chats and whatnot, while the messages themselves can be similar.
So those could be different chat apps, per se, right? So those would be different protocols; that's the way I think about it. Like, before sending messages in a chat, maybe you need to authenticate people, like, do I recognize them in the selfie? Or you don't have to send anything, right? Or you need to have a credential to set up the chat, stuff like that. But those would all be different chat protocols in my mind. You don't have one chat protocol with different setup mechanisms. At least that's the way I think about it; it makes more sense to me that way. But yeah, it's still very new. That makes a lot of sense. It just occurred to me that one could add arrows like this without even leaving the two-dimensional plane. Let's say this is an acceptable use. But I really like what you said, Alex, about whether we build it up from the small pieces, or whether we put these sneak paths in the complex diagram. Yeah. So, we're almost out of time, but I feel like I understood about 85% of what you said, Alex. I felt like you were saying that you like sub-protocols as useful building blocks for a greater protocol. Yeah, but I don't like to think about individual messages within the protocol. Yeah, that's the part that I guess I was confused about. My brain is having trouble making sense of it: if you are breaking it down into sub-protocols, what do you mean that you don't like to think about it as individual messages? Right, I like to think about them as sub-protocols and not as messages. Even if a protocol like issue-credential-simple is just one sent credential, just the send-issue-credential message, that's still a protocol. It's not just a message; that's a protocol. That's what I mean by that.
Then if you want to send the credential with the ack, that's a different protocol. That would be taking this path. Or not; well, I forgot to draw that one for that guy. Would that protocol be these two messages together, the send-issue-credential and the receive-issue-credential-store-payload, right? That would be one protocol. Yes. Another one could be just the send-issue-credential, right? So just sending a credential can be a protocol. So the roles would stay the same, but there would just be a set of smaller protocols that you could put together in order to do this entire flow. Yeah, I totally agree; that makes the most sense to me, because the roles are always going to be the same. There's always issuer, holder, and verifier; those roles really never change. It's all a one-to-one conversation for now. I guess we haven't dealt with multi-party protocols, like multi-party computation, where you have multiple people and you need to resolve multiple messages; that's not just two parties usually. How do you go about splitting messages that way, right? I don't even know how you'd go about that, right? Yeah, that's a harder problem. Yeah, I like that point of view, Alex. Yeah, yeah, great. Okay, we're out of time. Great, great meeting. And yeah, we'll meet again next week, and I'll try to attend the Aries working group meeting. Yeah, and the AFJ meeting this week. Great. Good to see you all. Thank you for coming. Take care.
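The composition the group converges on, same issuer/holder roles but a catalogue of smaller protocols that assemble into the full flow, can be sketched as data. This is a toy illustration of the idea under discussion, not an agreed design; the variant names and message-type labels are invented for the example:

```python
# Each variant is its own small protocol over the same issuer/holder
# roles, rather than a bundle of optional messages inside one protocol.
# Tuples list the ordered message types exchanged (names illustrative).
ISSUE_SIMPLE = ("issue",)                       # issuer just sends the credential
ISSUE_SIMPLE_WITH_ACK = ("issue", "ack")        # holder acknowledges receipt
ISSUE_WITH_REQUEST = ("request", "issue")       # holder asks first
ISSUE_FULL = ("propose", "offer", "request", "issue", "ack")


def is_subflow(small: tuple, big: tuple) -> bool:
    """True if `small` occurs in order within `big`, i.e. the bigger
    protocol can be read as the smaller one plus extra steps."""
    remaining = iter(big)
    # `step in remaining` advances the iterator, so order is enforced.
    return all(step in remaining for step in small)
```

Read this way, "breaking the big one down" and "building up from small ones" are two views of the same catalogue: every simpler variant is a subflow of `ISSUE_FULL`, and the roles never change across variants.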