I have a meeting of Hyperledger Latin America at 10, and it's 9 o'clock now for me. I'm conducting it, so I'll leave this meeting 15 minutes before the end; apologies in advance. No worries, thank you. Yeah, if you have questions, feel free to ask them before you leave. Thank you. Shar, the live stream is on and the backup recording is on. You're good to go. Great, thank you so much. Welcome, everyone, to the Identity Special Interest Group call for March 21, 2024. Thanks for joining us today. We'll spend a few minutes at the beginning going over working group status updates, and then we'll hear a presentation from Colton Wolkins on making WebSockets highly available with SocketDock. Looking forward to that. Since this is a Linux Foundation call, we are following the Linux Foundation antitrust policy, and as a Hyperledger call, we are of course following the Hyperledger code of conduct, both of which are linked here. This call is, of course, being recorded, and I'll post it here at the top later today. If you'd like to put your name down on the attendees list, that would be wonderful. It looks like a lot of familiar faces today, but if anyone is new or rejoining the space and would like to introduce themselves, feel free to hop off mute and say a few words about what you're involved in in this space. And Colton, feel free to introduce yourself now or before your presentation, whichever you prefer. If anyone has any announcements in addition to any introductions, now would be a great time. I can't raise my hand, and I did not add it to the announcements, but on Monday of this week the OpenWallet Foundation had a workshop with the EUDI Wallet Consortium. It was about two hours and it was really great. We had some technical difficulties with Zoom, so we're not going to post the video until next week, but it will go up then.
And if anyone's interested, the OpenWallet Foundation also had its first anniversary last week. There's a blog post, with the logo in the deck, and some of the links from that get posted today; you can find it at openwallet.foundation, and there's a link for the blog in the header. Great, that's awesome. Yeah, thanks for that update. We also have IIW coming up next month, if anyone on this call is going and wants to share anything about what they're planning to talk about or what they hope to discuss there. We're going to be there as well. Then we have a link to another April event, Zero-Knowledge Proofs and Zero-Knowledge Programming in Blockchain Application Development, with a link there to register if you're interested. All right, any other announcements or introductions before we jump into things? Just wanted to introduce myself. Hi, I'm Jorge Flores, co-founder of Entidad. We're a social impact startup based out of Los Angeles, California. This is my first time attending this meeting, although I've been a follower of the recordings for quite some time. I'm looking forward to meeting everyone and to today's presentation. Absolutely, thank you so much for joining, Jorge. We really appreciate you being here. And as a note to everyone else on the call, Jorge will be our next speaker in two weeks, and we'll get to hear about Entidad's work, so really looking forward to that as well; come by in two weeks for that too. All right, anything else to discuss or announce before we jump into the agenda? We're moving on to some working group updates. Again, just to make sure we have enough time for the presentation, I'm going to go pretty quickly through these, but feel free to jump in with any details you'd like to add, either as we go or at the end. If you want to let us know anything more specific about any of the groups or projects that you're involved in, that would be very appreciated. We'll start with the Indy Contributors working group.
It looks like I forgot to update the link, but we did have our last meeting last week, and we discussed the Indy-related projects for the Hyperledger mentorship program, making sure there's no overlap between ongoing work and the work that mentors and mentees will do together. The Indy Besu work is ongoing as well; there was a brief discussion about that work and the Indy Besu DID method, so that is ongoing. In the Hyperledger Aries working group, they've been discussing DIDComm v2 credential protocol updates; that looks like their main work update. The Aries Bifold working group looks like they're working on the transition to the OpenWallet Foundation as well as dependency injection. In the ACA-Py (Aries Cloud Agent Python) user group, we got a lot of status updates: the anoncreds-rs work, the did:peer and AFJ interop work, updating an ACA-Py deployment to use anoncreds-rs, moving closer to the ACA-Py 1.0 release, and working on the ACA-Py 0.12.0 release candidates. All moving forward in the ACA-Py maintainers meeting as well; similar goals and things going on, just much more focused on PRs and issues. Then in the Hyperledger AnonCreds working group, we got status updates on the AnonCreds in W3C Verifiable Credential Data Model work, and discussion about the revocation manager for ALLOSAUR; ALLOSAUR is a cryptographic accumulator used for revocation. They also discussed the schema objects for complex JSON objects. So that's a super quick overview of the Hyperledger working groups that we track. And there's a note from Lynn, thank you: there is also work in Hyperledger Indy moving toward upgrading Indy Node to Ubuntu 22.04. That work is underway, and we think the process will be much smoother than the move from 16.04 to 20.04, since the pipeline is in place, but we'll continue giving updates on that. All right.
Moving on to the Trust over IP Foundation. There were several groups that I wasn't able to find more recent meeting notes for, so if you are involved in any of these, feel free to jump in and give some more details on that work. The Governance Stack Working Group had a recent meeting in March; they were joined by Dror Gurevich, the founder and CEO of the Velocity Network Foundation, who presented on their governance process. The Technology Stack Working Group has a bunch of efforts going on in different task forces; I won't go too deeply into all of these since they're all linked here, but if you're involved in any of them and would like to speak up, feel free to do so. The Ecosystem Foundry Working Group has been working on an ecosystem components paper, as well as proposing a third-generation ToIP stack diagram. And the Concepts and Terminology Working Group is working on a terminology governance guide and the Spec-Up project. There was an all-members meeting last week where, if you're a member of ToIP, you can go get... oh, you put the link in there that has an update on everything, if you want more details. Awesome, thank you. Right, so that's a very quick overview of the Trust over IP groups that we track. In the Decentralized Identity Foundation, the DIF DIDComm spec working group meets on the first Monday of the month. They've been discussing advancing DIDComm to the IETF, as well as the layered protocol for enhanced messaging. And the DIDComm Users Group, I know they're meeting regularly on the other Mondays, the second, third, and fourth Mondays of the month, but I wasn't able to find meeting notes from them, so if anyone has been attending those meetings, feel free to give a quick update. I'm the co-chair of that meeting, so I'll go ahead and drop the link to the agenda in chat; that's basically where we keep our meeting notes. Great, thank you. Awesome.
I will add that to the meeting page; I appreciate it. All right. As well, I wasn't able to find too much activity in the DIF Interoperability group or the special interest group, or in the CCG, so if anyone has any information on those, feel free to jump in. Thanks, Colton, for that link. And then we also have a link on the EUDI wallet reference implementation, so check that out if you're interested. All right, that was a very quick breeze through a high-level overview of what's going on in the working groups that we track, but if anyone has any other updates before we jump into the presentation, that would be awesome. It looks like you might have the older all-members meeting for Trust over IP; there was a new one yesterday, so I suggest updating that to yesterday's meeting. If you have a link for the members meeting... was that posted on the main meeting page? I'm not sure; I might have a link in an email. I'll put it in. Oh yeah, I wasn't able to find that when I was preparing the notes yesterday, so if you're willing to send that to me, that would be wonderful. Great. All right, I think we're ready to hand it over to Colton for your presentation. All right, let's see if I can share my screen. Can you see the slide deck here? Yes, looks great. All right, a quick introduction before I begin. Hi, my name is Colton Wolkins. I work for Indicio on a bunch of different open source projects, and as I touched on briefly earlier, I am the co-chair of the DIDComm Users Group meeting; DIDComm is what a lot of the Aries projects use under the hood for their communication. And speaking of things being used under the hood for communication, today's topic is SocketDock, whose goal is to make WebSockets highly available. On that topic, what is "highly available"? I just want to go into this briefly: what does it even mean?
Well, according to this LinkedIn post, which I thought was absolutely wonderful, John Bonso describes high availability as the goal of having your application able to run 99.999% of the time. What this means, the five nines there, is that over the course of an entire year, your application can only be down for about five minutes total. The goal is that if the system becomes overwhelmed or overburdened, or something crashes, it can recover quickly and there's no perceptible downtime to end users. He goes on to expound upon that, talking about fault tolerance, where the goal is to have zero downtime. That's kind of the goal with SocketDock as well, but there are so many other pieces to it; it's not that just having SocketDock will make all the problems go away, you have to make sure that your application can handle things as well. Why high availability is important: if an application crashes or receives a massive influx of users, your users just won't be able to access your app, because more users mean slower services, slower services turn into angrier customers or even downed services, and angry customers equate to less money that you end up earning. So why do we need to worry? Decentralized identity and verifiable credentials are both very new technologies in the grand scheme of things, but as they gain more momentum and popularity, we will start to see more and more users using the technology, and this will put a lot of burden on the servers. Just imagine the Super Bowl is here, or people are going to see a new Disney movie that's coming out. Everybody's excited; they've bought their tickets.
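The "five nines" allowance mentioned above can be sanity-checked with a bit of arithmetic; 99.999% uptime leaves roughly five minutes of downtime per year, matching the speaker's recollection:

```python
# Allowed downtime per year at "five nines" (99.999%) availability.
minutes_per_year = 365 * 24 * 60          # 525,600 minutes in a year
downtime_fraction = 1 - 0.99999           # fraction of time allowed down
allowed_downtime = minutes_per_year * downtime_fraction
print(f"{allowed_downtime:.2f} minutes/year")  # → 5.26 minutes/year
```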
They're ready to go, the day comes, and they go to enter the stadium for the Super Bowl, or they go to enter the movie theater, with millions of people getting all excited about this and everybody going all at once. All of a sudden, the servers go down, because they can't handle the mass influx of people. Without fault-tolerant systems, that's what's going to happen: the servers overload because they're receiving so much traffic. And if people can't get in to see the new Disney movie, or can't get in to see the Super Bowl, because their applications are no longer working, when they paid real money for these things, they're not going to be very happy at all, and they're going to lose trust in the technology because it didn't work for them. And this is the problem of scale, because as the number of users increases, so does the demand placed on the servers. With a single server, you can add more hardware to the problem and make the server more powerful, but that can only take you so far before that one server comes crashing down anyway, and you're spending astronomical amounts of money on a brand new server. If a single server just doesn't work, you can add more servers, and that allows you to scale your application up to more users for a lot less money. This works wonderfully for the way the standard web works: how Amazon works, how Google works. You add more servers and nobody notices; on the user side, nobody notices because everything's working. But sometimes you'll have streaming types of services, where, say, a video is being streamed to the user, as in the case of something like Netflix, or you're having real-time interactions with someone. Adding more servers doesn't work so well for those kinds of connections, because if one of the servers goes down, the users who are using that server still lose access.
Or their connection gets cut off when the server crashes or something goes horribly wrong. Right here I've talked about both web requests and WebSockets. The basic flow of a web request is: you have a user and you have a server. The user sends a request to the server asking, hey, can you give me some data? And the server responds, here you go, here's your data. Once the user receives that data, they might say, oh hey, I want to add the item you're showing me to my shopping cart. So the user sends another request to the server, and the server responds back saying, hey, it's been added to your shopping cart, and so on and so forth. What doesn't work in this workflow is that if the server notices that the price changed on the items in your shopping cart, it cannot send you a message saying, hey, the prices in your shopping cart have increased or decreased. The server has to wait for the user to ask, hey, can you show me the updated info? The server can't just push that data to the user. WebSockets, on the other hand, work differently, in that the user sets up a connection to the server. Previously, every time the user sent a message, it would create a connection, receive a response, and then the connection would get closed. But with WebSockets, the user opens up a single connection, and that connection can be used by the server to start pushing data to the user whenever it wants, and the user can also send data back whenever it wants. So we're not in a bottleneck situation where we have to wait for one side to send data in order to send them data; we can send data back and forth at will. The order of operations doesn't really matter; the user just has to connect to us. So when we start scaling up this architecture, typically you'll have a bunch of servers sitting behind a load balancer.
What the load balancer does is, when the user makes, say, a request for data, the load balancer says, okay, who is free? Who's not busy? It'll pick a server and say, oh hey, you're free, you're not doing anything, I have a user requesting data, could you please provide an answer for them? Then the server, after processing the request, can respond back through the load balancer on that same connection, and the data goes back to the user. You can have multiple users, and the entire process works the same way: user C here requests data, the load balancer finds a server that's free, and that server responds back with whatever the user was requesting. And sometimes when a user makes another request, like they've made the first request here and gotten back data, the request may not even go to the same server; it may go to a different server, which can go ahead and respond back. Typically, in these sorts of setups, you'll have a database, and that database syncs the information between all of the servers. So when you add an item to your shopping cart on server A, then when server B gets a request asking what's the status of my shopping cart, it can ask the database and get the information that was set on server A, even though the request is now on server B. But what if we want to send live data to our users, like that situation I mentioned where we want to tell the user that the prices have updated in their shopping cart? The thing is, the HTTP requests that I just showed are not able to push data at will back to the user; they rely on the user asking first. There are workarounds for this, but they are very inefficient, things like long polling, where the user just asks all the time, like every second: hey, do you have any updates?
Hey, do you have any updates? Even if there are no updates. Very, very inefficient. The WebSockets that I talked about earlier allow for bidirectional communication because they open a persistent connection to the server. This is different from the web request, which closes the connection after a response. And now, since we have this persistent connection, the server can push data to the user over that bidirectional pipe. So here we have the same sort of scenario: users on the left, servers on the right, with the load balancer in the middle. User A asks to open a WebSocket to the load balancer, the load balancer forwards that to a server, and now they can exchange data back and forth. But consider the case of, say, a chat application, where another user wants to send a message to user A. Or even the same user: they added stuff to their shopping cart on their computer, but now they're on their phone and try to add stuff to the cart. When they set up the connection on their phone, it might end up going to a different server instead of the same server their computer opened the connection on. So when they add an item to the shopping cart, they can't get the updated list on their computer, because these are two different servers. When the server tries to send a message to user A's computer connection, it sends that message back through the load balancer, and what you'd want is for that message to go to the first connection here. But instead, since you added the item on the second connection, it goes back to the second connection. It doesn't go to the first connection, because that one is connected to a different box, and the servers don't know which box the other connection is on.
They don't know who to send this data to, because they only know about what's connected to them. And that is where SocketDock comes in, because SocketDock preserves the connection to the user, similar to what WebSockets do. It takes all of the WebSocket messages and turns them into webhooks, basically those normal web requests, and it packages some data into those messages such that, if the servers store it in a database, any of the servers can send a message back to this user, not just one server. Now, I mentioned metadata being sent to the servers. This is an example; this is the format of the messages that are sent from SocketDock to a server. We have this meta portion up at the top, which contains the connection ID of the user's connection, and it also contains a return path: if you want to send a message to this particular user, just send a web request, a POST message, to this return path. That is what the send field here is. And then the message portion is the message that is being sent to the server from the user. This right here is a more real-world example of what one of these messages might look like. You've got a connection ID, and in the send address you've got the IP address of the SocketDock server, with the connection ID in there as well; that's just the path that the web request needs to go to. And then under the message here, we just have some data that says hello world, but it can be whatever needs to be sent: please send me updated prices, or please add something to the shopping cart, whatever needs to go to the server. So let's see how this works with the same sort of scenario. I didn't showcase it here in this slide, but there would also be a load balancer in front of SocketDock, and you'd have a bunch of different SocketDock servers.
User A opens up a WebSocket connection to one of these SocketDock servers. That SocketDock server attaches the metadata portion that says, here's how you can reach back out to me, and here's the message from the user. It sends that information to a load balancer, which then figures out which server is free. If one of these servers crashes for whatever reason, the load balancer can figure that out and send the message to a different server. But if a server goes down, the user's connection does not die; it's still there, and they can still send messages, and those will still go to servers. Then the server can respond or send any sort of messages back to the user. Whoops, I forgot how the order was on this, my apologies. This works for multiple users too. We can have a second user here, who establishes a WebSocket connection to SocketDock, which then sends it to the load balancer, and the load balancer picks another server. So let's say that this second user sent a message to the servers, like a chat message, that was destined for user A. In the previous model, where we didn't have SocketDock, server C here could not communicate back to user A; it could only communicate back to whoever sent the message. However, with SocketDock in place, because we are attaching the metadata on how to communicate back to the user, if the servers take that metadata, that return route, and store it in a database, then, even though the connection went through server A, server C here can look it up in the database and say, oh hey, I know how to communicate back to this user, and it can send a message to whichever SocketDock instance they're connected to. Since that SocketDock instance has a persistent connection to the user, it can send the message directly to the user.
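The flow just described (SocketDock attaches return-route metadata, the backend stores it, and then any server can push back to the user) can be sketched in a few lines of Python. This is a toy model, not SocketDock's actual code: a dict stands in for the shared database, the connection ID and URL are illustrative placeholders, and instead of issuing a real HTTP POST we just return the URL and body that would be posted.

```python
# Toy model of the SocketDock pattern: any backend server can push to any
# user by storing the return route that arrives with each message.

routes = {}  # stands in for a shared database: connection_id -> send URL

def on_envelope(envelope):
    """Handle an incoming SocketDock-style envelope on ANY backend server:
    record the return path, then hand back the user's message to process."""
    meta = envelope["meta"]
    routes[meta["connection_id"]] = meta["send"]
    return envelope["message"]

def push_to_user(connection_id, payload):
    """Push data to a user from ANY backend server, not just the one that
    received their message. In production this would be an HTTP POST to
    the stored send URL; here we just return what would be posted."""
    return routes[connection_id], payload

# Server A receives user A's message (routed through the load balancer):
on_envelope({
    "meta": {
        "connection_id": "conn-a",                      # illustrative ID
        "send": "http://socketdock-1:8080/conn-a/send", # illustrative URL
    },
    "message": "hello world",
})

# Later, server C (which never saw the original message) pushes to user A:
url, body = push_to_user("conn-a", "new chat message for you")
print(url)  # → http://socketdock-1:8080/conn-a/send
```

The key design point the sketch illustrates is that the WebSocket itself stays pinned to one SocketDock instance, while the knowledge of how to reach it (the send URL) lives in shared storage that every backend server can read.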
And the way this is all set up, because it's all in that metadata, any server can send a message to a user; when a request comes in, it can be processed on a completely separate server. All the work can be done on a different server, and once that server is done dealing with the data, it can send a message to whichever user it needs to, because of that metadata. The persistent connection is always open, and if servers crash, it doesn't kill the user's connection. This allows servers to crash and recover, while the perception to the user is that everything is working fine. If a server crashes and another server picks up the task, it might seem a little bit slow to the user, but from the user's perspective the system never goes down. And that is about all that I have. Does anyone have any questions? There's a question in the chat, Colton, and then I have a question after that. Okay. Is the connection ID used anywhere other than in the webhook URL? So we're talking specifically about this. For the most part, it is only used inside the webhook URL. However, having it separated out like this gives flexibility to the server, in that when the server is storing this metadata, it can store the connection ID as the key, for instance, instead of storing a URL as the key. That's the reason the connection ID was also included here: to provide that flexibility, so it's not only a URL being sent. Does that answer the question? Yeah. I have a follow-up after the next question. Okay. So, Lynn, you had a question? Yeah, I'm not sure it's within the scope of your discussion today, and just mention it if it isn't, that's okay. But my curiosity is, for security purposes, for example, connections need to time out, right?
And so on a connection, for example, an easy scenario maybe is a chat app of some sort, where you're chatting along and then you have to leave for a while. When you come back, the connection usually needs to be removed; you don't want to leave that open, for security reasons, right? What happens in that scenario for, say, SocketDock? Do you have to restart from scratch and get a new connection? How does that work? So if the socket dropped, if you were to drop the socket, for instance, yes, a connection would have to be re-established. To answer the question about timing out, however, that is definitely more in the implementation details, per se, but if a user disconnects from SocketDock, it can send out a notification to the back end that a disconnection happened for that user. And vice versa, I believe... or was there not? I don't recall; I need to look at the code here real quick. In the API, there's a send URL, and there is also a disconnect URL right here, with which the back end can force a disconnection of the user. If the back end detects, oh hey, they haven't messaged me in so much time, the servers are able to send a message saying, hey, disconnect this user, and then kill the connection. Does that make sense? Yep, I think that mostly answers my question. Thanks. Back to you, Kip. Yeah, that was kind of the follow-up question I had: what happens if SocketDock itself goes down? How are the connections established in that sense? I see that the code is now online, so I can take a look at the implementation, but is it meant to be stateful, as opposed to the servers behind the load balancers? So, SocketDock, because it is opening up this persistent connection, kind of has to be stateful, because it has to hold on to that connection.
If SocketDock itself goes down, then all of the connections there just get dropped, and if a server tries to send a message to them, the SocketDock instance is down and the message can't be delivered. Any user who was connected would have to re-establish the connection, and once they do, the servers would be able to know, assuming you're associating some other information in the socket messages, that, hey, another user connected. Yeah, well, that answered my question. So basically, SocketDock is stateful, which allows connections to be re-established with it. I'm not sure if I addressed your exact question there, but... No, no, that does. It was essentially a follow-up to the other questions: what happens if connections get dropped, not from the persistent-connection perspective, but from SocketDock's perspective? Yeah, and the idea there is, if the user drops the connection, SocketDock notifies the servers; if the server wants to drop the connection, it can also do so. Thanks for the presentation, by the way. Mm-hmm. Brett asked, is the connection ID a unique identifier? So, the connection ID is a unique identifier per server. That does raise some questions for me, so you might just want to... So, whenever a connection ID is created, it is created as a UUID on a per-server basis. I don't remember which UUID mechanism, because there are different mechanisms for generating UUIDs, and some of them rely on hardware identifiers. So, in theory, the connection ID should be unique across the board, but what's guaranteed is that they're unique on an individual server, because right here you have the multiple different SocketDock instances; it's guaranteed that they're unique per server. I would have to do more in-depth research, but I'm pretty sure it's unique across the board because of the mechanism being used to generate those IDs. So, a globally unique identifier? Yes, basically. Are there any other questions? All right, thank you, Alfonso.
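The disconnect endpoint mentioned in the Q&A above suggests a simple idle-timeout policy on the backend. The sketch below is hypothetical: it assumes a disconnect URL that sits alongside the send URL, and the URL shape, names, and timeout value are illustrative guesses rather than SocketDock's documented API.

```python
import time

# Hypothetical idle-timeout sweep: if a user has been silent too long, the
# backend asks the SocketDock instance to drop their connection.

IDLE_LIMIT = 600  # seconds of silence before forcing a disconnect

# connection_id -> (send URL, last time we heard from this user)
connections = {
    "conn-a": ("http://socketdock-1:8080/conn-a/send", time.time() - 900),
    "conn-b": ("http://socketdock-1:8080/conn-b/send", time.time() - 30),
}

def disconnect_url(send_url):
    # Illustrative guess: the disconnect endpoint lives next to /send.
    return send_url.rsplit("/", 1)[0] + "/disconnect"

def stale_connections(now=None):
    """Return the IDs of connections that have been idle past the limit."""
    now = time.time() if now is None else now
    return [cid for cid, (_, last) in connections.items()
            if now - last > IDLE_LIMIT]

for cid in stale_connections():
    url = disconnect_url(connections[cid][0])
    # In production: HTTP POST to `url` to make SocketDock kill the socket.
    print(f"disconnecting {cid} via {url}")
```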
Hope it was good. Absolutely. Best of luck to you. Thank you so much, Colton, thanks for a great presentation. That was super interesting. Unless there are any other questions that anyone wants to jump in with, just a quick note that, as I said at the beginning, join us for our next meeting in two weeks for Jorge's presentation. It's titled, How an Aries Digital Trust Ecosystem Helped Distribute $60 Million and Counting to Essential Workers. So definitely join us for that in two weeks. Thank you so much to everyone who gave working group status updates today, and thank you again so much to Colton for your presentation. And I just dropped a link in chat to the project on GitHub; it is under the Hyperledger organization, and it is open source and public. Perfect. Great, I will add that to the meeting page for those who are watching the recording. So thank you so much, Colton, and thank you to everyone else who joined the call today. Thanks, everybody. Thanks. Bye. Thank you.