Good day everybody. Hi there, good afternoon, good morning or evening. Let's wait one more minute and then we can get started. All right, let's get started. Let me see, I think the automatic recording has stopped... no, no, it is recording. Okay, perfect.

So welcome everyone to the Aries JavaScript Working Group call of June 1. I need to remind you about the code of conduct and antitrust policy. If you'd like to add yourself to the attendance list, feel free to do so; I'll post the link again in chat, and then we can get started. Is anyone new here today who would like to introduce themselves or share what they're working on? Right, I think I recognize most names.

We have quite a lot of things on the agenda to discuss today, but first let's do some status updates. I removed some items that haven't been discussed for a while or that were for past events. The Aries Bifold call happened this week; does anyone have an update on that? No, I don't think there was a Bifold call this week.

On the Aries call I can give a quick update: there was continued discussion on moving Aries to the Open Wallet Foundation. I think this time the discussion was mainly about the difference between agents and wallets, and whether that difference is important. Are we putting too much effort into the naming, and maybe it doesn't matter that much? There was a good discussion on what an agent is versus a wallet. I think most people saw agents more as the protocols you use and the interactions you have, while a wallet is more about storing credentials and holding keys, that kind of thing. I don't know if anyone else who attended wants to add something on this.

Yeah, I liked an example someone gave about the phone: we are still calling our phones phones.
But actually they are computers, complete personal computers, let's say, because at this point the phone is just an app inside the mobile device. Yet we still call it a phone, and we are all fine with that, right? Yeah, I think we are being very purist in the naming of what's an agent and what's a wallet, but everybody understands and calls it a wallet, even if it probably does more than just keeping credentials or keys. So maybe it's okay if we just use that name and that's it, which could also be more in line with moving the Aries effort to the Open Wallet Foundation. Yeah, makes sense. Cool.

DIDComm v2: I know there are some PRs open, and I don't have a full update on the status. Let me see, there are tests for DIDComm v2 message packing, and splitting the demo up into parts. So, a short update: is this PR ready to merge? I believe yes, I resolved all the comments you left; there shouldn't be any issues now. And regarding the second PR: as we discussed last week, there is a lack of demos, and instead of making the general demo harder with more cases and options, I decided to split it into two sets of scripts. The old one is preserved as before, with the same actions, using DIDComm v1; for DIDComm v2 there is an independent set of scripts per feature, which currently is just doing a ping message.

Okay, so adding options to the main demo was getting too cluttered? Why did you change it to two? I think it will be clearer: if you're interested in one particular feature and want to see it in action, you can just run a single one. Okay. And it's based on the same foundation, right? So if we add a feature to one demo, we can also easily add it to the other. Yes. Okay, sounds good.
Yeah, I'll try to take a look at it, along with the other PR, and someone also wanted to give PR 4070 a review. Cool. Okay, any other things about DIDComm v2 you want to share? Nothing from my side. Okay. And if these two PRs are merged, what work is left for DIDComm v2? Do you think it's ready to be merged into main then, or...? I took the old branch and kept the general structure, apart from the changes around the two versions of out-of-band messages, which are not part of the standard. As it was in December, we can merge it into main, but if you have a better vision of how to make connections and send messages between parties, we can work on it and improve it.

Could you quickly recap what the issue was with connections and out-of-band? It's been a while. From the invitee side you can create an invitation object out of an invitation, according to the standard; from the other side you can just accept it and create a connection object, which is in the ready state. So for now you can only send messages from the invitee to the inviter, and that's it; that is the whole flow. There is no connection exchange between parties in the DIDComm v2 specification.

Yeah, but there's no need to do a DID exchange with DIDComm v2, right? So what's the non-standardized approach here? You can just reply directly to the DID of the message sender. Yeah. So, sorry, I'm not fully understanding what is deviating from the standard, because the DIDComm v2 spec defines out-of-band invitations, and there's no DID exchange, but you can just create a connection directly based on the DIDs that you share. If I give you my DID and you store it, and you create your own DID, we basically have a connection that we can create a connection record from, for example. So what's the difference from what's defined in the DIDComm v2 spec?
Right now the connection will be created from the invitee side only; the inviter will not be able to send a message back in the current implementation. Okay, so when we receive a message, we don't create a connection record? As far as I remember, yeah. Okay. I think that makes sense somehow, because if you create the invitation you have your own DID, but you don't have the DID of the other party yet. Probably you could create a record, or we'd probably need changes in core so you can work more often without a connection record. I think that's basic messaging, which doesn't necessarily require a connection.

Maybe for next week you could give a short presentation on how it works and what's available? I think that would help a lot in understanding how it was left in November: what the state is, how it works, and what you'd be able to do. Would you be able to do that? Yes, I think I can. Perfect. Great. Awesome.

Okay, then let's head on to the agenda for today. Let's start with an update on JWT credentials and crypto support, and OpenID for VC support; some things about the AFJ 0.4.0 release; the topic we discussed last week about making it easier for people to get started with AFJ; and some thoughts about rethinking the storage layer. Are there any specific topics people would like to see discussed today?

Maybe not a real topic, just a quick question for Timo about the shared libraries. Yesterday I tested the fix that Berend published for the AnonCreds package, so I just want to clarify: do you plan to release that fix for the other two libraries, or do I need to test something further? It's been released already. Yeah, thanks, that's great. So I will try all three libraries and come back to you with some results. Awesome. Cool. Thanks.
Yeah, Berend made all the updates and now it should work for everything. One final thing we wanted to do is run it in the Bifold update branch one final time, with the migration script and everything, and if that works out, do that today still. Then we'll make the stable release of all the shared components, and then we'll make the AFJ 0.4.0 release. I'll get to that point a bit more later, but it should be done. Sounds great, thank you guys.

All right. I made some slides for today for the topics I want to discuss. I might do this more often, because I think it can help with giving general updates on the ongoing work; there's a lot being merged, and discussing it in slides that I can also share on Discord makes it easier for people who don't attend these calls to stay up to date. So the intention is to do that more; I need to see how it goes in terms of time and preparation. What we want to discuss today: an update on JWT VCs and extended crypto support, the AFJ 0.4.0 release, making it easier for people to get started with AFJ, and rethinking the storage layer. I also added a PDF of this presentation to the meeting notes if you want to see it.

So, recently we've been doing a lot of work on adding JSON Web Token verifiable credentials, which are basically W3C verifiable credentials that adhere to the VC data model, but in JWT format. We already supported JSON-LD credentials, where we use linked data signatures and JSON-LD normalization and canonicalization to get the input for the signatures, and where you have all the features JSON-LD has to offer. Now we've also extended that with JWT VCs. Basically, a JWT VC is a JSON Web Token, which under the hood is a JSON Web Signature.
With a JSON Web Token you make a signature over a JSON object, which you then encode as a string that you can share. So the normal representation of a JWT VC is also a string. I think the benefit of JWT VCs is that they're simple: if you support JSON Web Signatures or JSON Web Tokens, you can basically create one, and there are a lot of JWT libraries out there. You do need to support the VC data model and know how to map between the two, and there's some complexity there, but compared to JSON-LD signatures this is really simple, because you're basically just signing a JSON object, one which needs to adhere to a certain data model.

So now, in the W3C credential service, you can sign JWT VCs and you can also store them. When you query credentials, you can query both, and they have a format property so you can distinguish between linked data proof credentials and JWT credentials. There's a normalized representation, a W3C credential class that can basically be both, and then a specific JWT credential class and a JSON-LD credential class that add the properties that differ between the two. So if you want to build a wallet with it, for example, you would use the normalized representation, which gives you a common interface between the two so you don't really have to care what type of credential you're working with.

It integrates with the JWS service in AFJ, which means that all the JSON Web Algorithms supported in AFJ, whatever the crypto wallet supports, will be supported for JWT VCs. Currently, with Askar, that is ES256 and EdDSA; with the Indy SDK it is only EdDSA.
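The JWT VC structure described above can be sketched in a few lines. This is a hypothetical illustration, not AFJ's API: the credential is mapped onto JWT claims and compact-encoded, and a real credential would append a signature (EdDSA or ES256) over the header and payload segments, which is omitted here.

```typescript
// Hypothetical sketch (not AFJ code): a JWT VC is just a JSON object mapped
// onto JWT claims and encoded as a compact string. A real token would append
// a third, signature segment computed over `${headerB64}.${payloadB64}`.

const base64url = (obj: object): string =>
  Buffer.from(JSON.stringify(obj)).toString("base64url");

// JOSE header: which JSON Web Algorithm will sign the token
const header = { alg: "EdDSA", typ: "JWT" };

// VC data model payload using the JWT claim mapping (iss/sub/nbf + vc claim);
// all identifiers below are illustrative
const payload = {
  iss: "did:example:issuer",
  sub: "did:example:holder",
  nbf: 1685577600,
  vc: {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    type: ["VerifiableCredential"],
    credentialSubject: { degree: "Bachelor of Science" },
  },
};

const unsignedJwt = `${base64url(header)}.${base64url(payload)}`;
console.log(unsignedJwt.split(".").length); // 2 segments before the signature is added
```

Decoding is the reverse: split on `.` and base64url-decode each segment back to JSON, which is why any JWT library can work with this format.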
But adding more will now be a lot easier, because you don't have to implement another signature suite as with JSON-LD: if your wallet supports signing with the algorithm, that is enough. So adding more, if you have custom needs, will be easier over time, and especially with the wallet refactoring we want to do in the next version, it becomes even easier to extend which crypto the wallet supports. By the way, if anyone has questions, please raise your voice; I can't really see chat or anything, so feel free to interrupt me.

So, what does it look like? Say you have a very simple credential here, issued by the example issuer to this did:key. You transform it into a credential instance (this is the normalized representation that's common between the two), and then you specify the format, so jwt_vc, and you specify the algorithm that will be used for the JSON Web Signature. If you want to do JSON-LD (well, I use this one here, but there's a shared class, the W3C credential service), you provide, instead of an alg, the proof type, which determines the signature suite, so Ed25519Signature2018. The API keeps as much as possible shared between the two credential formats, but there are some differences between the formats which we didn't abstract.

Just a quick question. If we want to add JWT with selective disclosure, do you think that would be a new format, or another input field saying these fields are selectively disclosable, or just a whole other design, or didn't you think about it yet? I think it will be a new format, because for issuers, or mainly for holders and verifiers, to be able to understand the credential, they would need to understand selective disclosure JWT.
So I think it would be a separate format, and that's what I see from OpenID as well: if they want to add it, it will be a separate format, because it's just different. But it can lean heavily on the implementation we already have.

Yeah, so we added did:jwk support. This is something we needed for a project we're doing, and it's really very similar to did:key. In this case you encode a JSON Web Key into a DID, not really into a DID document, and based on that you can base64-decode the DID identifier, which gives you a JSON object representing a JSON Web Key. The nice thing is that everything the JSON Web Key spec supports can be encoded as a DID, which means you could also support things like X.509 certificates if you want to. But it's very similar in nature to did:key. If you want to look at the spec, you can look here.

So, next is OpenID for Verifiable Credentials. This was added a while ago by Karim, and it's built on top of Sphereon's OpenID4VC libraries. The initial implementation supported JSON-LD credentials and Ed25519Signature2018, and it has now been extended to also support JWT credentials, with more crypto types: basically everything the JWS service and signature suite registry support. So when we make additions to AFJ, whatever key types and signature suites are supported, OpenID for Verifiable Credentials will also support them. And especially once we improve the API for extending the wallet key types, it becomes really easy to use it with crypto suites that aren't natively supported in AFJ. It still only supports receiving credentials, not issuing them; it's a client for receiving credentials. We want to add an issuer, but we haven't had a need for it ourselves yet, and we'd need to expose more HTTP endpoints and such.
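Circling back to the did:jwk method described a moment ago, the encoding can be sketched with a hypothetical helper (not AFJ's API): the DID identifier is simply the base64url-encoded JWK, so encoding and decoding are pure string transformations.

```typescript
// Hypothetical sketch of the did:jwk idea described above (not AFJ's API):
// the identifier after "did:jwk:" is the base64url-encoded JSON Web Key.

interface Jwk {
  kty: string;
  crv?: string;
  x?: string;
  [claim: string]: unknown; // anything the JWK spec allows (e.g. x5c chains)
}

const encodeDidJwk = (jwk: Jwk): string =>
  `did:jwk:${Buffer.from(JSON.stringify(jwk)).toString("base64url")}`;

const decodeDidJwk = (did: string): Jwk => {
  const [scheme, method, id] = did.split(":");
  if (scheme !== "did" || method !== "jwk") throw new Error("not a did:jwk");
  return JSON.parse(Buffer.from(id, "base64url").toString());
};

// Example Ed25519 public key as a JWK (the x value is illustrative)
const jwk: Jwk = {
  kty: "OKP",
  crv: "Ed25519",
  x: "11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo",
};
const did = encodeDidJwk(jwk);
console.log(decodeDidJwk(did).crv); // prints "Ed25519"
```

Because the key material lives entirely in the identifier, no resolution network is needed; anything expressible as a JWK travels inside the DID itself.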
So yeah, that needs more work. On AnonCreds credential support: I think they had it in the OpenID4VCI spec in the beginning, but it was removed; maybe it can be added as an optional extension, but for now, if you want to issue AnonCreds credentials, you should stick with the existing flows.

It also integrates with the DID resolver registry, so it can be used with any DID method supported by AFJ, or that you register there; you can make your own DID method and it will work with OpenID for Verifiable Credentials.

So, a quick example of how that looks. Say you create a did:jwk. What you can do then (it's a bit of a long method) is call it when you have an offer from a QR code. You have the QR code data, which is often an openid-initiate-issuance URI specifying an issuer, with a very long encoded JSON object. You can request a credential and specify exactly what you want to support. I think the tricky part here is that in the process of requesting a credential there is some negotiation, or matching, happening: what algorithms, credential formats, and DID methods does the issuer support, and what credential formats and signature algorithms do we as a wallet support? And maybe the situation is that I as a wallet support more, but I only want to receive, for example, JWT credentials. Here you can specify a limit on which credential formats and proof-of-possession algorithms are used, so for example only JWT VCs, or only EdDSA. If you don't provide this, it will just pick whatever the agent supports; it will infer that dynamically, so if you register your custom signature suites, it will take that into account when matching against what the issuer supports. The next thing is resolving a verification method for the proof of possession.
In the previous implementation there was a kid, a key ID, that you had to pass. Because that matching needs to happen, we basically needed to make it dynamic: for example, the issuer could support EdDSA, so I couldn't use a DID with key type P-256; if I were to pass that DID up front, it would throw an error because there's a mismatch. What this allows you to do instead is provide, dynamically, the DID or the key and the verification method in the DID document that you want to use for this exchange, and it tries to give you all the information you need for that decision. It gives you the credential format, which can be JWT VC or linked data proof VC. It says which algorithm will be used, which can be EdDSA or ES256. It says which verification method types could be associated with that: for EdDSA you can use an Ed25519VerificationKey2018, an Ed25519VerificationKey2020, or a JSON Web Key, so there are multiple verification method types you could use. It also gives the key type, which credential the issuer will issue, and the supported DID methods; the issuer can say "I support these methods," or it can say "I support all DID methods." Based on this you can then return: all right, I'm going to use this key. So that's how you use the OpenID4VCI client. I hope to build it out more over time.

Currently we're expanding the work to also support SIOPv2 and OpenID4VP, which will basically allow us to share and prove credentials using these protocols; under the hood that uses DIF Presentation Exchange. So currently we can receive credentials, but there's no way to share them yet, and this will allow us to do that.
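The negotiation step described above (intersecting what the issuer offers with what the wallet supports, narrowed by optional caller restrictions) can be illustrated with a small self-contained sketch. All names here are hypothetical; this is not AFJ's API, just the shape of the matching logic.

```typescript
// Hypothetical sketch of the capability-matching step described above
// (not AFJ code): intersect issuer and wallet capabilities, optionally
// narrowed by caller restrictions, and fail if nothing overlaps.

interface Capabilities {
  formats: string[]; // e.g. "jwt_vc", "ldp_vc"
  algorithms: string[]; // e.g. "EdDSA", "ES256"
}

const intersect = (a: string[], b: string[]) => a.filter((x) => b.includes(x));

function matchCapabilities(
  issuer: Capabilities,
  wallet: Capabilities,
  restrictions?: Partial<Capabilities>
): Capabilities {
  const formats = intersect(
    intersect(issuer.formats, wallet.formats),
    restrictions?.formats ?? wallet.formats
  );
  const algorithms = intersect(
    intersect(issuer.algorithms, wallet.algorithms),
    restrictions?.algorithms ?? wallet.algorithms
  );
  if (formats.length === 0 || algorithms.length === 0) {
    throw new Error("no common credential format or signing algorithm");
  }
  return { formats, algorithms };
}

const issuer = { formats: ["jwt_vc", "ldp_vc"], algorithms: ["EdDSA", "ES256"] };
const wallet = { formats: ["jwt_vc", "ldp_vc"], algorithms: ["EdDSA"] };

// The wallet supports both formats, but the caller restricts to JWT VCs only
const matched = matchCapabilities(issuer, wallet, { formats: ["jwt_vc"] });
console.log(matched); // { formats: ["jwt_vc"], algorithms: ["EdDSA"] }
```

If the issuer offered only P-256/ES256 and the wallet only EdDSA, the same function would throw, which mirrors the mismatch error described for the old key-ID approach.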
In general it works like this: the other party sends an authorization request to you as the self-issued OpenID provider, the wallet, and the wallet responds with a verifiable presentation, which the verifier checks. We can do a more in-depth session on how SIOPv2 and OpenID4VP work and exactly how the AFJ implementation looks. We're still working on that, but they're quite complex specifications, especially if you add Presentation Exchange, which you need to understand for OpenID4VP and SIOPv2, so it probably warrants a presentation of its own. Cool.

So then, is there anything else on the SIOPv2 and OpenID4VC updates, or shall we move on? I guess just a general question about what the time horizon for those features looks like relative to the AFJ release cadence. Yeah, so the OpenID4VCI work with JWT VC support is already in main and will be released with 0.4.0. For SIOPv2 and OpenID4VP we're now starting the work; we need it quite soon for a project, but as we're still quite inexperienced with the specs, we're first going to create an implementation, see how it works, and then contribute it to AFJ. The implementation we build we can share sooner, but we want to make sure it is correct. So let's say in a month we definitely have an implementation; after that we'll start looking at contributing it back. So somewhere between one and two months this should be merged into AFJ. Beautiful, thank you. Cool.

Then, on the AFJ 0.4.0 release, just a general question I think: when is it going to happen? Well, as already mentioned, updates to all the shared components have now been finished and released, for Aries Askar, Indy VDR and AnonCreds RS, and everything seems to be working. It was a hell of a job, but I think the answer is: around now.
As I said, we need to do the final test in Bifold, but I really think we can get it released this week, and I'll do my best to do that; afterwards we can look at further improvements, and we can do patch releases, etc. So I think we can make the stable release of the shared components today, after I've tested the final thing in React Native. Then we'll release AFJ 0.4.0 once the versions have been updated to a stable version, and I'll switch the documentation to make 0.4.0 the current version instead of the next version. There are still a lot of things to do, but we can do those in incremental patch releases.

Some notes I want to make about 0.4.0, and I'm curious to hear input on this. To make the updates we have planned go the smoothest, and to avoid issues with features that are quite new, there are some features I would want to mark as experimental: they are not subject to the usual breaking-change guarantees in patch releases, and if you use them, you know you're using an experimental feature. One of them is implementing your own AnonCreds registry or AnonCreds service implementation. The default implementations provided in the aries-framework-javascript repo (the AnonCreds service and all the registries implemented in AFJ, which are the Indy SDK, Indy VDR, and cheqd ones) are covered, because if we make breaking changes we can just update them internally in our repo and that won't cause issues. For external implementations we can probably wait a bit, so we have some time to stabilize the API. Then all the shared component libraries: although we've done a lot of work to make sure they work on all platforms, I'm sure that when we release them there are going to be issues; we also still have some issues with Node 16 that need a patch.
So I want to mark those as experimental, and also the OpenID4VC client and JWT VCs, because they just landed and I think there will be issues that may cause breaking changes; it's just very new. That doesn't mean you shouldn't use them, but know that a 0.4.1 release could potentially introduce a breaking change. All right, Berend mentioned something similar about marking things experimental, so that's good to hear. I think we should probably add a section to the docs somewhere saying these features are experimental: don't expect them to be covered by semantic versioning.

Then multi-tenancy: we had it in previous versions as well and it works, but it isn't fully integrated with the update assistant and takes some more work. I don't want to delay the release of the next version just because of this; I think we can fix it after the release.

Some other notes: the Indy SDK to Askar migration script has limitations, in that it is mainly focused on holder/verifier role migration in React Native. It does work in Node.js, but issuer records are not migrated, which is an important thing to know. If you don't have issuer records (a mediator, for example) you can migrate from Indy SDK to Askar; but if you have created schemas, credential definitions, etc., those are not covered by the migration script. I think we can easily add that, but we haven't had time to do it yet. Multi-tenancy is also not yet supported for the Indy SDK to Askar migration, because then it's not just one wallet but a lot of wallets that need to be migrated, and that requires extra logic. If you need one of these, please let us know, and if you want to contribute it, also let us know; we can provide guidance on what needs to happen. There needs to be some time investment to get these things working.
But I think most people are using AFJ with the Indy SDK in React Native right now for a holder/verifier wallet, so that use case is covered, and if you're using it for a mediator, that use case is also covered. So, final notes on the AFJ 0.4.0 release: questions, remarks?

Yeah, I have a question about that. You say that the experimental features are not part of the semver guarantees; how are you marking them as experimental? In the commits, or is it a concept? Do you have a good idea? I think adding it to the documentation; we could probably add it to the classes or something, I don't know if TSDoc has good hinting for that. Don't really know. Okay, so we have to do some research on it. Yeah, I just don't want people to expect that all these new features will work perfectly out of the box; we need some time to iterate on them to get them working properly.

Yeah, I think we can just add an experimental doc annotation, and we should add it to the migration guides and the installation docs: add a lot of pointers, so that when you're installing or upgrading you see, hey, these features are experimental. For modules, we could probably mark them as experimental, but I think that only works if the module as a whole is experimental, and some of these are only partly experimental, for example JWT VCs. Maybe you can add a trace log in the constructor when an experimental module is created, to log that it's enabled, or something like that for services. Good. Yeah, adding debug or info logs, like "this is an experimental feature."

I also had a question regarding the migration. I assume some agents have auto-migrate on startup configured in Node.js, and if they update to 0.4.0...
...what will happen? Will it just say "we can't upgrade you to Askar," or will it not even try? The auto-update on startup is separate from the Askar migration. Which you should know, because you wrote it. Yeah, I remember now. Okay, perfect. I think it's a bit of a clumsy experience that you have to do them separately, but migrating from one specific database type to another is just not something we can integrate into the core update system; it's way too complex, and it's a one-time thing people have to do. I think this is covered by the docs already. Also, if I remember correctly, if you try to run it in Node.js it just errors; we had to enable something for tests. I think we warn, because Ariel also wanted it to work, for example for his mediator, so we provide a warning, I think. Oh, I actually do think that if it detects credential definition records in storage, it throws an error: if you're an issuer and you try to update, and you have any such records, it won't change anything; that isn't supported yet. Okay, interesting.

A question in chat: when are there plans to update the extensions, like push notifications? I think there are pull requests open already; you made updates for some of those packages already, right? So push notifications and the Redux store. Okay, so not the React hooks yet; is there a volunteer to do that? I think that would be helpful. I think the changes are not that big, because we are currently using AFJ 0.4.0 with the old React hooks version, I believe, and that works just fine, besides a warning about the incorrect AFJ version; there aren't a lot of breaking changes to the API, so a small version bump should cover it. Okay, let's continue.
I'm quickly going to skip this topic because I want to get to the last one; I'll pick it up at the end if we get to it. One thing I want to discuss is whether we can rethink the storage in AFJ. This has been a topic I've been thinking about lately, and I've also had some discussions with Ariel about storage, the limitations it has, and, in some cases, complexity that we have in storage right now that isn't necessarily needed.

To give some context: by default AFJ uses the Indy SDK storage layer, and in the latest version we also support Aries Askar, which is basically the storage from the Indy SDK extracted; they're very similar in nature. They store an encrypted blob of data, keyed by a category (for example, connection record) and an ID, which in our case is often a UUID, and you add tags, which are also encrypted, so you can find records based on, say, a role or a certain thread ID in an exchange. It also supports unencrypted tags that enable advanced queries (greater-than, less-than, LIKE), but those are currently not supported in AFJ, so you're really restricted to the encrypted tags to query things. You also need to explicitly define the tags on a record, so if you have a property you didn't add as a tag, you can't query on it until you add that tag, which often means you have to retrieve the record, decrypt it, and re-store it with the added tags.

One of the limitations of this model (the record being an encrypted blob: basically we JSON-stringify the record, and that string is encrypted and stored as bytes in the database) is that it's hard for concurrent processes to modify the same data. One example is incrementing counters: if we want to do replication, for example, we need to keep track of the current index that is used, and in a lot of cases you could handle that in the database itself.
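The counter problem just described can be shown deterministically with a small sketch (not AFJ code): two "processes" each read the JSON blob before either writes back, so one increment is lost, while an atomic in-place update (what a plain database gives you with `SET value = value + 1`) keeps both.

```typescript
// Sketch (not AFJ code) of the lost-update problem described above.
// A JSON-blob store forces read-modify-write; an atomic in-place update
// does not.

const blobStore = { record: JSON.stringify({ counter: 0 }) };

// Two concurrent "processes" both read the blob before either writes back
const copyA = JSON.parse(blobStore.record);
const copyB = JSON.parse(blobStore.record);
copyA.counter += 1;
blobStore.record = JSON.stringify(copyA);
copyB.counter += 1;
blobStore.record = JSON.stringify(copyB); // overwrites A's increment

console.log(JSON.parse(blobStore.record).counter); // 1, not 2: one update was lost

// With an atomic increment there is nothing to overwrite
const row = { counter: 0 };
const atomicIncrement = () => { row.counter += 1; }; // stands in for SQL "value = value + 1"
atomicIncrement();
atomicIncrement();
console.log(row.counter); // 2
```

The blob variant only stays correct if each writer locks the record for the whole read-modify-write cycle, which is exactly the sequential-only behavior discussed next.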
In a normal database you can do a plus-one and it will just add one to the latest counter value, so even if multiple things write to it, that will go right. But if you instead retrieve the whole record and then write back the whole JSON record, and there are multiple processes reading and writing it, that will cause issues. Askar does support locking, which helps in this case: if you want to increment a counter, you lock the record and say "nobody touch this until I'm done updating," and then the next one can update the record. But that doesn't allow for concurrent work; it mostly allows sequential work where you can be sure you're not conflicting with each other. It's also hard to selectively update a value: instead of saying "I want this one value to become this," you need to overwrite the whole record, so if someone else updated a value after you read the record, writing it back overwrites their data. So you really need to lock the record, which means you can't do any concurrent processing on the records we have in AFJ.

Another limitation is that there's a whole custom encryption layer built in, with querying and everything, while most databases already support encryption out of the box. So we're reinventing the wheel, creating a custom encryption layer inside a database, while there are already solutions out there that probably solve this in a better way, because the main purpose of a database is to be fast and to provide all the things you need from a database.

Migration of records is also hard in the current model: you need to retrieve them, decrypt them, update them, and re-encrypt them. You can't do migrations like "okay, all these values now need to change" or "we have a new column that needs this default value."
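To make the category/id/tags model discussed above concrete, here is a minimal in-memory sketch (hypothetical names, not Askar's actual API). It shows the key limitation: the record body is an opaque blob, so only properties explicitly registered as tags can be queried.

```typescript
// Minimal in-memory sketch (not Askar's real API) of the storage model
// described above: an opaque value stored under category + id, queryable
// only via its registered tags.

interface StoredRecord {
  category: string;
  id: string;
  value: string; // in Askar this blob would be encrypted
  tags: Record<string, string>; // in Askar these are encrypted too
}

class TagStore {
  private records = new Map<string, StoredRecord>();

  insert(record: StoredRecord) {
    this.records.set(`${record.category}:${record.id}`, { ...record });
  }

  // Only exact-match tag queries are possible; the value blob itself is opaque
  findByTags(category: string, query: Record<string, string>): StoredRecord[] {
    return [...this.records.values()].filter(
      (r) =>
        r.category === category &&
        Object.entries(query).every(([k, v]) => r.tags[k] === v)
    );
  }
}

const store = new TagStore();
store.insert({
  category: "ConnectionRecord",
  id: "record-1",
  value: JSON.stringify({ state: "completed", theirLabel: "Alice" }),
  tags: { state: "completed", threadId: "thread-1" },
});

// Works: "state" was registered as a tag
console.log(store.findByTags("ConnectionRecord", { state: "completed" }).length); // 1
// Fails: "theirLabel" exists in the blob but was never added as a tag
console.log(store.findByTags("ConnectionRecord", { theirLabel: "Alice" }).length); // 0
```

Adding `theirLabel` as a queryable field after the fact would mean decrypting, re-tagging, and re-storing every existing record, which is the migration pain described above.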
So that adds more complexity. And there's no good way to do sorting and pagination, because everything is encrypted. If you want to sort by date, for example, there's really no way besides retrieving all records and then sorting them locally, or you need to use an unencrypted tag, which would work for this. But in that case you need to know beforehand which data you're going to sort on, because you need to have that tag available; otherwise you still need to retrieve all records and re-store them with that tag, which can be a complex process.

Pagination is supported, based on offset and limit, but there's no good way to handle changes in the data set that you're querying. If new records are added, then offset and limit don't always give you a good way to retrieve data. You can sometimes use cursors or other things for that, but that is not supported, so you always need to take into account that in the data you're querying you might get duplicate records, or you might miss records, when records are being added and removed in the meantime.

Some of these limitations have briefly come up before. Ariel sent me this today, and I felt it was an interesting take: basically, the reason this model works is that the assumption was you would always have record counts of two to three items, not hundreds or thousands, which I think makes sense for wallets in most cases. But if you want to build, say, a chat application, you're going to have a lot of records that you want to sort, and then you'd have to retrieve them all instead of sorting them in the database. So I think that, especially in cloud environments, this doesn't hold up at all, because there are a lot of records.

Let me see, a question from the chat: "Does querying also become a limiting factor as record numbers increase?" Yeah, I think in general with a database, if you have more rows, it becomes slower.
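The offset/limit problem described above is easy to demonstrate. In this sketch (all names illustrative, no real database involved), a record is inserted between fetching page one and page two: offset-based paging re-serves a row it already returned, while a cursor keyed on the last row seen stays stable.

```typescript
// Offset/limit vs cursor pagination when the data set changes between pages.
interface Row { id: number; createdAt: number }

const rows: Row[] = [1, 2, 3, 4].map((n) => ({ id: n, createdAt: n }));
// Newest first, as a chat application would sort messages.
const sorted = () => [...rows].sort((a, b) => b.createdAt - a.createdAt);

// Page 1 with offset/limit (limit 2).
const offsetPage1 = sorted().slice(0, 2).map((r) => r.id); // [4, 3]

// A new record arrives before page 2 is fetched.
rows.push({ id: 5, createdAt: 5 });

// Offset page 2 now re-serves id 3: a duplicate of page 1.
const offsetPage2 = sorted().slice(2, 4).map((r) => r.id); // [3, 2]

// Cursor pagination keys off the last row seen instead of a position.
const cursor = 3; // createdAt of the last row on page 1
const cursorPage2 = sorted()
  .filter((r) => r.createdAt < cursor)
  .slice(0, 2)
  .map((r) => r.id); // [2, 1]: no duplicate despite the insert

console.log(offsetPage2, cursorPage2);
```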
Because if you need to sort things in the database and such, the database still needs to know about all the records in the database. And what doesn't help in the case of, for example, Aries Askar, is that all data is stored in a single table where you have a category and an ID; you don't have separate tables for separate record types. That means that if you're querying for a certain category, you're still querying the same table. Normally you can create indexes on specific columns that you query quite often, which makes those queries quicker, and I think that's also hard to do in this case, because you have a very limited database model: you just need to account for, okay, we have stringified values and tags that we can index, but the data model is not really representative of the data. So I think that doesn't help. But yeah, it becomes slower when you add more records.

So, I think we're almost out of time, but I can go quickly. I think this has been raised in the past already, maybe by Warren: what about using normal databases?

It could be, I don't recall. Certainly there are encryption extensions for SQLite, in TypeORM and things like that.

Okay, yeah, I think it was one of the first calls you joined that you raised this. It's a topic that hasn't really left my mind since. But yeah: can we use normal databases and optionally leverage the native encryption features the database provides? That also makes encryption optional, so if you really need performance, or for example if you have a mediator where the messages you queue are already encrypted, or there's not that much reason to encrypt data that isn't highly sensitive, then maybe you don't need encryption. And I'm just talking about the data here; I'm not talking about Askar storing the keys.
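To make the single-table point concrete, here's a toy comparison, with an array and a Map standing in for tables and per-category indexes (no real database involved):

```typescript
// Toy model: one shared "table" vs per-category "tables".
interface Item { category: string; id: string }

const singleTable: Item[] = [];                // everything in one table, as in Askar
const perCategory = new Map<string, Item[]>(); // one bucket per category

function insert(item: Item): void {
  singleTable.push(item);
  const bucket = perCategory.get(item.category);
  if (bucket) bucket.push(item);
  else perCategory.set(item.category, [item]);
}

// 1000 connection records and 10 credential records.
for (let i = 0; i < 1000; i++) insert({ category: "connection", id: `c${i}` });
for (let i = 0; i < 10; i++) insert({ category: "credential", id: `k${i}` });

// A credential query against the shared table has to consider every row,
// while the per-category layout only touches its own 10 rows, which is
// roughly what a dedicated table or index buys you in a real database.
console.log(singleTable.length);                    // 1010 rows in scope
console.log(perCategory.get("credential")!.length); // 10 rows in scope
```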
I think that's another problem we should solve, but I'm separating storing data here from storing the keys you use for encryption, decryption, signing, et cetera. So for example, SQLite has an encryption extension (not native, but an extension). And cloud providers, too: this is what Google Cloud mentions on their pages, that Cloud SQL customer data is encrypted when stored in database tables. So it's encrypted by default if you use Google Cloud, which means that when you use Google Cloud with Askar, you basically have double encryption on your data. And sure, this encryption is different, because with Askar you can have different wallets, but in a lot of cases I'm not sure that is really needed.

So, some of the benefits we could get: we can use all the features normal databases have to offer; better support for migration of records and normalization of data, since we wouldn't just store a stringified blob but could really look at what the database structure should be; better sorting and filtering, based on LIKE statements; better pagination, based on cursors; and probably better performance, as we'd be building on top of databases that are optimized to be fast. I think that holds both when using an unencrypted database and when using the encryption features of a database.

So, I think I have a question, I guess, which is about the scope of what you're attempting, the scope of the problem you want to try and solve. Is the database we're talking about here simply for key management and credential storage, or are we talking about application needs above and beyond that as well, which AFJ wouldn't necessarily know about itself? Are we trying to provide an abstraction for all of that, or is it more constrained?

I think more constrained, to the agent features.
So not KMS or key management, necessarily. I think we should separate that and solve it in a different way, because there are different needs there. But I do think we should make the storage model more pluggable. I know, for example, Veramo has this: for every type of record you can decide what type of database you want to plug in. So if you have really high-volume records, you can make your own implementation for them, where some are maybe stored in a Redis cache and others are stored in a database. So I think we should be more flexible and provide more traditional ways of storing data, instead of saying your database and wallet need to be encrypted using this model that was invented here in this ecosystem.

Yeah, I think that's the main thing. I get it. Thanks.

Alright, so I think we're out of time; this was my last slide anyway. So please, maybe for next week, think about it, and also about ways we could support this in AFJ. I've added some things here, so maybe we can continue this discussion next week: what people think about it, and if we want to do it, how we could do it, the different ways to interact with the database, like our own domain-specific language, like Chris Maurer suggested, or just a query builder. I think it could all work, and I think we should support multiple backends.

So next week I will present on the wallet API; I needed some more time for that. If you have anything, please let me know on Discord or wherever you can reach me. Cool, thanks everyone, and I'll see you next week. Thanks. Bye bye.
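As a closing illustration of the per-record-type pluggable storage idea raised in the discussion, here is one possible shape for such a seam. All names are hypothetical; this is not the API of AFJ or any existing framework.

```typescript
// Hypothetical pluggable storage seam: one backend per record type.
interface RecordStore<T> {
  save(id: string, record: T): void;
  get(id: string): T | undefined;
}

// Default backend, standing in for a regular SQL table.
class InMemoryRecordStore<T> implements RecordStore<T> {
  private data = new Map<string, T>();
  save(id: string, record: T): void { this.data.set(id, record); }
  get(id: string): T | undefined { return this.data.get(id); }
}

// High-volume records could be routed elsewhere (e.g. a Redis-like cache);
// here it's just another in-memory class to show where the seam sits.
class CacheRecordStore<T> extends InMemoryRecordStore<T> {}

interface QueuedMessage { payload: string }
interface ConnectionRecord { did: string }

// The agent wires one backend per record type instead of one global wallet.
const stores = {
  queuedMessages: new CacheRecordStore<QueuedMessage>(),
  connections: new InMemoryRecordStore<ConnectionRecord>(),
};

stores.connections.save("conn-1", { did: "did:example:123" });
stores.queuedMessages.save("msg-1", { payload: "encrypted-bytes" });
console.log(stores.connections.get("conn-1")?.did); // did:example:123
```

The point of the seam is that a mediator could back `queuedMessages` with a fast, unencrypted store while keeping sensitive records in an encrypted one, without the framework dictating a single storage model.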